
nvs2graph's Introduction

PIX2NVS: A Neuromorphic Vision Sensors Emulator

We provide our implementation of PIX2NVS, a tool for converting pixel frames to brightness spike events as generated by neuromorphic vision sensors (e.g. DAVIS-240).

A full description of the operation of this tool can be found in our paper:

[1] Yin Bi and Yiannis Andreopoulos, "PIX2NVS: Parameterized Conversion of Pixel-domain Video Frames to Neuromorphic Vision Streams," IEEE International Conference on Image Processing (ICIP), Sept. 17-20, 2017, Beijing, China.

If you use the code in this repository, please cite the paper above. This tool is provided under the GNU General Public License v3.0.

Building Source

All dependencies are located in the source folder. To build from source, change your working directory to this repository's home folder and run:

g++ -o pix2nvs src/*.cpp  # to build from source  
./pix2nvs # to run using default parameters

Running Options

Option Description
-r or --reference Select the reference frame update method (1, 2, or 3); see the table below.
-m or --maxevents Set the maximum number of events generated between two frames.
-b or --blocksize Set the block size for local inhibition.
-a or --adaptive Set the coefficient shift for adaptive thresholding; set to 0 to disable.

The reference frame update option can be set to any of the following:

r Reference Frame Update Method
1 Copy the last frame from the source.
2 Update the reference frame using only generated events, with history decay.
3 Update the reference frame using only generated events, without history decay.

Example Usage

To extract events from videos located in the input folder:

./pix2nvs -r 3 -a 0 -b 4

All outputs will be located in the folder "Events". The output from PIX2NVS using these parameters is visualized in the following figure:

Figure: pixel image (left) and the emulated NVS events (right).
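If you want to inspect the generated events programmatically, the sketch below loads and plots one output file. It assumes each line of an event file holds the tuple (x, y, t, p), as described in the issues further down; the column order and the file name Events/example.txt are only placeholders, not confirmed details of the PIX2NVS output format.

# Hedged sketch: load and plot an emulated event file from the "Events" folder.
# Assumes one event per line in the order x y t p; the actual column order
# produced by PIX2NVS may differ.
import numpy as np
import matplotlib.pyplot as plt

events = np.loadtxt("Events/example.txt")  # placeholder file name
x, y, t, p = events.T

plt.scatter(x, y, c=p, s=1, cmap="bwr")    # colour ON/OFF polarities differently
plt.gca().invert_yaxis()                   # use image coordinates (origin top-left)
plt.title("Emulated NVS events")
plt.show()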

Contact

For any questions or bug reports, please contact Yin Bi at [email protected] or Alhabib Abbas at [email protected].


nvs2graph's Issues

ASL-DVS Train/Test splits

The article states that the test split of the proposed ASL-DVS dataset is built with 20% of the total data. Are those generated data splits public or could you provide more information on how they have been generated?

Thanks in advance.

Information of timestamp

Hi! Thanks for sharing the code! I notice that the timestamps of the events are not stored after data preprocessing (while the spatial positions are stored as 'pseudo' and the polarities are stored as 'feature'). Do experiments show that timestamps are not so important, or is there another reason?

Method to achieve network inputs

Hi,

First of all, thank you for providing such a fabulous method for event-based classification tasks. However, in your code I could not find the part that converts raw events to your network inputs (features, edge_index, pos, etc.). Is there something I missed, or have you released this part of the code somewhere else?

Best wishes,
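For readers with the same question, a minimal sketch of how raw events could be turned into a PyTorch Geometric graph follows. This is not the authors' released preprocessing; the subsampling strategy, radius value, and function name are assumptions made for illustration only.

# Illustrative only: build a graph from raw events (x, y, t, p) with PyTorch Geometric.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import radius_graph

def events_to_graph(events, radius=3.0, max_nodes=512):
    # events: float tensor of shape [N, 4] with columns (x, y, t, p)
    if events.size(0) > max_nodes:                        # subsample if too many events
        idx = torch.randperm(events.size(0))[:max_nodes]
        events = events[idx]
    pos = events[:, :2]                                   # spatial coordinates ("pseudo")
    x = events[:, 3:4]                                    # polarity as the node feature
    edge_index = radius_graph(pos, r=radius, loop=False)  # connect spatially close events
    return Data(x=x, pos=pos, edge_index=edge_index)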

Single sample inference

I have trained a 3-class classifier as described in your paper by using Train.py with batch_size=64. The trained model performs wonderfully on larger batches (acc > 90%), but accuracy drops significantly when performing inference on smaller batches. For batch_size=1 accuracy is ~50%.

Do you have any ideas as to what could be the culprit here? Ideally, I would like the model to perform well on single samples. Thanks!

AttributeError: 'tuple' object has no attribute 'view' when trying to run Test.py

Hi,
Thanks for making your code base available. I am quite new to pytorch and I am getting the following error on trying to run Test.py

/home/vi/Documents/NVS2Graph-master/code/Test.py:83: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging.
with autograd.detect_anomaly():
tensor([2, 2, 5], device='cuda:0')
/home/vi/anaconda3/envs/env_for_torch/lib/python3.8/site-packages/torch_sparse/storage.py:382: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1603729096996/work/torch/csrc/utils/python_arg_parser.cpp:882.)
ptr = mask.nonzero().flatten()
Traceback (most recent call last):
  File "/home/vi/Documents/NVS2Graph-master/code/Test.py", line 154, in
    train(epoch, train_batch_logger, train_loader)
  File "/home/vi/Documents/NVS2Graph-master/code/Test.py", line 87, in train
    end_point = model(data)
  File "/home/vi/anaconda3/envs/env_for_torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/vi/Documents/NVS2Graph-master/code/Test.py", line 63, in forward
    x = x.view(-1, self.fc1.weight.size(1))
AttributeError: 'tuple' object has no attribute 'view'

@PIX2NVS could you please help me in resolving this issue?

Thanks in advance!

DAVIS images?

Hello,
Thanks for this dataset!
The paper mentions that the sequences were recorded with a DAVIS240C sensor. Are the corresponding grayscale images available? I could not find them on the website.

Thanks,
Henri

data visualization

Hi, thanks for your work. I visualized part of the data, and it seems to be the opposite of what I expected. Can you give me some suggestions?

How to select time windows of event data while training and testing?

Hey,

I recently followed the paper to generate a graph for each sample in the N-Caltech101 dataset and trained with the provided RGCNN network. However, I only achieved 57% accuracy, which is far behind the reported results. I think the main problem might be the time window selection. The paper says to "randomly extract a single 30-millisecond time window of events"; does this procedure occur every time a graph is generated during training, or is the graph pre-computed for each sample before training begins?

Best wishes,

ty
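For reference, extracting a random 30 ms window from an event stream could look like the sketch below; the microsecond timestamp unit and the function name are assumptions, not details confirmed by the repository.

# Hedged sketch: sample one random 30 ms window from an event array.
# Assumes events are sorted by timestamp and timestamps are in microseconds.
import numpy as np

def random_time_window(events, window_us=30_000):
    # events: array of shape [N, 4] with columns (x, y, t, p)
    t = events[:, 2]
    start = np.random.uniform(t[0], max(t[0], t[-1] - window_us))
    mask = (t >= start) & (t < start + window_us)
    return events[mask]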

Data Form

Hello, can you explain how to convert the dataset into the [edge, feature, label, pseudo] form?

How to generate a graph?

Hello~ Thanks for your great work! I want to configure my own dataset and use your code for the prediction! Could you please show how to generate a graph with the raw events? Thanks so much.

How to generate the *.mat file ?

As far as I know, the original data files are *.txt, in which each row represents a tuple (xi, yi, ti, pi).
In inputsdata.py, I see that the data files end with *.mat and contain four attributes: feature, edge, pseudo, label.
But I cannot find the code that generates these *.mat files. I think this step corresponds to the Nonuniform Sampling & Graph Construction section of the paper. Is that right?
And where can I find this code?
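In the meantime, one way to write a sample in the [feature, edge, pseudo, label] layout that inputsdata.py appears to expect is scipy.io.savemat; the array shapes and file name below are placeholders for illustration, not the authors' actual preprocessing.

# Hedged sketch: export one preprocessed sample to the .mat layout described above.
import numpy as np
from scipy.io import savemat

features = np.random.rand(128, 1)                 # placeholder per-node feature (polarity)
edge_index = np.random.randint(0, 128, (2, 512))  # placeholder graph connectivity
pseudo = np.random.rand(128, 2)                   # placeholder (x, y) node coordinates
label = np.array([0])                             # placeholder class index

savemat("sample_0001.mat", {
    "feature": features,
    "edge": edge_index,
    "pseudo": pseudo,
    "label": label,
})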

where to use the "normalized_cut_2d" function

import torch
from torch_geometric.utils import normalized_cut

def normalized_cut_2d(edge_index, pos):
    row, col = edge_index
    # edge weight = Euclidean distance between the two endpoint coordinates
    edge_attr = torch.norm(pos[row] - pos[col], p=2, dim=1)
    return normalized_cut(edge_index, edge_attr, num_nodes=pos.size(0))

But I cannot find where this function is used in the code.
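For context, in the standard PyTorch Geometric example models this helper typically feeds graclus pooling; the sketch below shows that usage pattern as an assumption about where it would slot in here, not a confirmed part of this repository.

# Typical PyTorch Geometric usage of normalized_cut_2d (reusing the function above):
# its edge weights drive graclus clustering, and the clusters drive max_pool.
from torch_geometric.nn import graclus, max_pool

def pool_step(data):
    # data: a mini-batch from a torch_geometric DataLoader (carries x, pos, edge_index, batch)
    weight = normalized_cut_2d(data.edge_index, data.pos)
    cluster = graclus(data.edge_index, weight, data.num_nodes)
    return max_pool(cluster, data)  # coarsened graph with pooled features and positions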
