This project is forked from brain-cog-lab/n-omniglot.

Preprocessing and baseline for N-Omniglot. Details can be found at https://arxiv.org/abs/2112.13230.

N-Omniglot

[Paper] || [Dataset]

N-Omniglot is a large neuromorphic few-shot learning dataset. It reconstructs the strokes of Omniglot as videos and uses the Davis346 camera to capture the writing of the characters. The recordings can be played back with the DV software (https://inivation.gitlab.io/dv/dv-docs/docs/getting-started.html). N-Omniglot is sparse, with little similarity between frames, and can be used for event-driven pattern recognition, few-shot learning and stroke generation.

It is a neuromorphic event dataset composed of 1623 handwritten characters recorded with the neuromorphic camera Davis346. Each character class contains handwritten samples from 20 different participants. The file structure and a sample are shown in the corresponding PNG files in samples.

The raw data can be found at https://doi.org/10.6084/m9.figshare.16821427.
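
For example, a raw recording can be inspected with the dv package (installed below). This is a minimal sketch, not part of the repository: the file name is hypothetical, and it assumes the raw figshare archive contains AEDAT4 recordings from the Davis346.

import numpy as np
from dv import AedatFile

aedat_path = 'raw/character_recording.aedat4'  # hypothetical path inside the raw archive

with AedatFile(aedat_path) as f:
    # Stack all event packets into one structured array with
    # fields 'timestamp', 'x', 'y' and 'polarity'
    events = np.hstack([packet for packet in f['events'].numpy()])

print(events.shape, events.dtype.names)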

Structure

See filestruct_00.png and sample_00 in the samples folder.

How to use N-Omniglot

We also provide an interface to this dataset in data_loader so that users can easily use it in their own PyTorch applications. Python 3 is recommended.

  • NOmniglot.py: basic dataset class
  • nomniglot_full.py: full train and test loaders, for feeding an SCNN directly
  • nomniglot_train_test.py: separate train and test loaders, for the Siamese Net
  • nomniglot_nw_ks.py: n-way k-shot episodes, for MAML
  • utils.py: helper functions

As with DVS-Gesture, each raw N-Omniglot file contains the event data of 20 samples. The NOmniglot class first splits the dataset into single samples and stores them in the event_npy folder for long-term use (as in SpikingJelly). The event data are then encoded into event frames according to the chosen parameters, mainly the frame number and the data type: the event type outputs event frames obtained by OR-ing the events in each frame, while the float type outputs the firing rate of each pixel.
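
As a rough illustration only (not the repository's actual encoding code), the sketch below bins a toy event array into frames_num frames: the event type ORs the events that fall into a frame, while the float type accumulates a per-pixel firing rate.

import numpy as np

# Toy events for illustration: columns are (timestamp, x, y, polarity)
events = np.array([[ 0, 1, 1, 1],
                   [10, 1, 1, 0],
                   [20, 2, 3, 1],
                   [35, 2, 3, 1]])

frames_num, H, W = 2, 4, 4
t = events[:, 0]
# Assign every event to one of frames_num equally sized time bins
bins = np.minimum(frames_num * (t - t.min()) // (t.ptp() + 1), frames_num - 1)

event_frames = np.zeros((frames_num, 2, H, W))  # 'event': OR of the events in each frame
rate_frames = np.zeros((frames_num, 2, H, W))   # 'float': spike count per pixel, normalised below

for (ts, x, y, p), b in zip(events, bins):
    event_frames[b, p, y, x] = 1
    rate_frames[b, p, y, x] += 1

# Crude normalisation to a firing rate; the repository may normalise differently
rate_frames /= max(len(events) / frames_num, 1)
print(event_frames.shape, rate_frames.shape)  # (2, 2, 4, 4) each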

Before you run this code, the following packages need to be installed:

pip install dv
pip install pandas
torch
torchvision >= 0.8.1
  • use nomniglot_full:

from torch.utils.data import DataLoader
from data_loader.nomniglot_full import NOmniglotfull  # import path assumed from the data_loader layout above

db_train = NOmniglotfull('./data/', train=True, frames_num=4, data_type='frequency', thread_num=16)
dataloadertrain = DataLoader(db_train, batch_size=16, shuffle=True, num_workers=16, pin_memory=True)
for x_spt, y_spt, x_qry, y_qry in dataloadertrain:
    print(x_spt.shape)
  • use nomniglot_pair:

from torch.utils.data import DataLoader
from data_loader.nomniglot_train_test import NOmniglotTrain, NOmniglotTest  # import path assumed; the header above calls this loader nomniglot_pair

data_type = 'frequency'
T = 4
trainSet = NOmniglotTrain(root='data/', use_frame=True, frames_num=T, data_type=data_type, use_npz=True, resize=105)
testSet = NOmniglotTest(root='data/', time=1000, way=5, shot=1, use_frame=True, frames_num=T, data_type=data_type, use_npz=True, resize=105)
trainLoader = DataLoader(trainSet, batch_size=48, shuffle=False, num_workers=4)
testLoader = DataLoader(testSet, batch_size=5 * 1, shuffle=False, num_workers=4)
for batch_id, (img1, img2) in enumerate(testLoader, 1):
    # img1.shape [batch, T, 2, H, W]
    print(batch_id)
    break

for batch_id, (img1, img2, label) in enumerate(trainLoader, 1):
    # img1.shape [batch, T, 2, H, W]
    print(batch_id)
    break
  • use nomniglot_nw_ks:

from torch.utils.data import DataLoader
from data_loader.nomniglot_nw_ks import NOmniglotNWayKShot  # import path assumed from the data_loader layout above

db_train = NOmniglotNWayKShot('./data/', n_way=5, k_shot=1, k_query=15,
                              frames_num=4, data_type='frequency', train=True)
dataloadertrain = DataLoader(db_train, batch_size=16, shuffle=True, num_workers=16, pin_memory=True)
for x_spt, y_spt, x_qry, y_qry in dataloadertrain:
    print(x_spt.shape)
db_train.resampling()  # resample a new set of n-way k-shot tasks

Experiment

Method

We provide four few-shot learning methods adapted to SNNs in examples, as a benchmark for the N-Omniglot dataset. Different way, shot, data_type and frames_num settings can be chosen to run the experiments. You can run a method directly in the PyCharm environment.

Reference

[1] Li, Y., Dong, Y., Zhao, D. & Zeng, Y. N-Omniglot: a Large-scale Neuromorphic Dataset for Spatio-temporal Sparse Few-shot Learning. figshare https://doi.org/10.6084/m9.figshare.16821427.v2 (2021).

[2] Li, Y., Dong, Y., Zhao, D., et al. N-Omniglot, a large-scale neuromorphic dataset for spatio-temporal sparse few-shot learning. Scientific Data 9, 746 (2022).
