shikhargupta / spiking-neural-network

Pure Python implementation of an SNN

License: Apache License 2.0

Python 100.00%
neuromorphic-hardware neuromorphic spiking-neural-networks neural-network spike-time-dependent-plasticity python synapse mnist-classification spike-trains

spiking-neural-network's Introduction

Spiking-Neural-Network

This is a Python implementation of a hardware-efficient spiking neural network. It includes modified learning and prediction rules that can be realised on hardware and are energy efficient. The aim is to develop a network that can be used for on-chip learning as well as prediction.

The Spike-Time-Dependent Plasticity (STDP) algorithm will be used to train the network.

Network Elements

Assuming that we have learned the optimal weights of the network using the STDP algorithm (implemented next), this part uses those weights to classify input patterns into different classes. The simulator uses the 'winner-takes-all' strategy to suppress the non-firing neurons and produce distinguishable results. The steps involved while classifying a pattern (a minimal sketch follows the list) are:

  • For each input neuron, the membrane potential is calculated over its receptive field (a 5x5 window).
  • A spike train is generated for each input neuron, with spike frequency proportional to the membrane potential.
  • For each image, at each time step, the potential of each output neuron is updated according to the input spikes and the associated weights.
  • The first output neuron to fire performs lateral inhibition on the rest of the output neurons.
  • The simulator checks for an output spike.
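
Under the assumptions noted in the comments (the simulation length, threshold, and encoding constants are illustrative placeholders, not the repository's tuned values), a minimal self-contained sketch of this loop could look like:

import numpy as np

rng = np.random.default_rng(0)

T = 200                        # time units per image (assumed)
THRESHOLD = 5.0                # firing threshold (assumed)
P_REST, P_INHIBIT = 0.0, -5.0  # resting / inhibited potentials (assumed)

def rate_encode(pot, t=T, f_max=0.2):
    # spike trains whose per-step spike probability grows with the potential
    p = np.clip(pot, 0, None)
    p = p / (p.max() + 1e-9) * f_max
    return (rng.random((len(p), t)) < p[:, None]).astype(float)

def classify(pot, weights):
    # simulate T steps; the first neuron to cross threshold inhibits the rest
    spikes_in = rate_encode(pot)                  # (n_inputs, T)
    P = np.full(weights.shape[0], P_REST)         # output membrane potentials
    spike_count = np.zeros(weights.shape[0], int)
    for t in range(T):
        P += weights @ spikes_in[:, t]            # update from weighted input spikes
        if P.max() > THRESHOLD:
            winner = int(np.argmax(P))
            spike_count[winner] += 1
            P[:] = P_INHIBIT                      # lateral inhibition of the rest
            P[winner] = P_REST                    # winner resets and may fire again
    return int(np.argmax(spike_count))            # class with the most output spikes

# usage: 784 receptive-field potentials (28x28 image), 4 output neurons
print(classify(rng.random(784), rng.random((4, 784)) * 0.05))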

Results

The simulator was tested on binary classification. It can be extended to any number of classes. The images for the two classes are:

Each class was presented to the network for 1000 time units, and the activity of the neurons was recorded. Here are the graphs of the potential of the output neurons versus time.

The first 1000 time units correspond to class 1, the next 1000 to class 2. The red line indicates the threshold potential.

The 1st output neuron is active for class 1, the 2nd is active for class 2, and the 3rd and 4th are mute for both classes. Hence, by recording the total number of spikes in the output neurons, we can determine the class to which a pattern belongs.

Further, to demonstrate the results for multi-class classification, the simulator was tested on the following 6 images (MNIST dataset).

Each image represents a class, and a neuron is delegated to each class; 2 additional neurons are assigned random weights. Here are the responses of each neuron to all the classes presented. The x-axis is the class number and the y-axis is the number of spikes during each simulation. The red bar marks the class for which the neuron spiked the most.

In the previous section we assumed that our network was trained, i.e. that the weights had been learned using STDP and could be used to classify patterns. Here we'll see how STDP works and what needs to be taken care of while implementing this training algorithm.

Spike Time Dependent Plasticity

STDP is a biological process used by the brain to modify its neural connections (synapses). Since the brain's unmatched learning efficiency has been appreciated for decades, this rule was incorporated into ANNs to train neural networks. The moulding of the weights is based on the following two rules:

  • Any synapse that contributes to the firing of a post-synaptic neuron should be strengthened, i.e. its weight should be increased.
  • Synapses that don't contribute to the firing of a post-synaptic neuron should be diminished, i.e. their weights should be decreased.

Here is an explanation of how this algorithm works:

Consider the scenario depicted in this figure

Four neurons connect to a single neuron through synapses. Each pre-synaptic neuron fires at its own rate, and the spikes are sent forward by the corresponding synapses. The intensity of the spike passed on to the post-synaptic neuron depends upon the strength of the connecting synapse. Because of the input spikes, the membrane potential of the post-synaptic neuron increases, and it sends out a spike after crossing the threshold. At the moment the post-synaptic neuron spikes, we monitor which pre-synaptic neurons helped it fire. This can be done by observing which pre-synaptic neurons sent out spikes before the post-synaptic spike: they helped by increasing the membrane potential, so the corresponding synapses are strengthened. The factor by which the weight of a synapse is increased is inversely proportional to the time difference between the post-synaptic and pre-synaptic spikes, as given by this graph.
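
As a concrete illustration, here is a minimal sketch of such an update rule with an exponential window; the learning rates and time constants are assumed values, not the repository's parameters:

import numpy as np

A_PLUS, A_MINUS = 0.8, 0.3        # potentiation / depression rates (assumed)
TAU_PLUS, TAU_MINUS = 10.0, 10.0  # time constants of the STDP window (assumed)

def stdp_update(w, t_pre, t_post, w_min=0.0, w_max=1.0):
    # the weight change shrinks exponentially with the pre/post time difference
    dt = t_post - t_pre
    if dt >= 0:
        dw = A_PLUS * np.exp(-dt / TAU_PLUS)    # pre fired before post: strengthen
    else:
        dw = -A_MINUS * np.exp(dt / TAU_MINUS)  # pre fired after post: weaken
    return float(np.clip(w + dw, w_min, w_max))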

Generative Property of SNN

This property of spiking neural networks is very useful in analysing the training process. If all the synapses connected to an output-layer neuron are scaled to proper values and rearranged in the form of an image, they depict what pattern that neuron has learned and how distinctly it can classify that pattern. For example, after training a network on the MNIST dataset, if we scale the weights of all the synapses connected to a particular output neuron (784 in number) and form a 28x28 image with those scaled-up weights, we get the grayscale pattern learned by that neuron. This property will be used later while demonstrating the results. This file contains the function that reconstructs an image from the weights.
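
A minimal sketch of that reconstruction, assuming weights lie in a known range [w_min, w_max]:

import numpy as np

def reconstruct(weights, w_min=0.0, w_max=1.0):
    # scale the 784 synaptic weights to [0, 255] and reshape to a 28x28 image
    w = (np.asarray(weights, dtype=float) - w_min) / (w_max - w_min)
    return (w * 255.0).reshape(28, 28).astype(np.uint8)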

Variable Threshold

In unsupervised learning it is very difficult to train a network where patterns have varying amounts of activation (white pixels, in the case of MNIST). Patterns with higher activation tend to win in competitive learning and hence overshadow the others (this problem is demonstrated later). Therefore, this method of normalization was introduced to bring them all down to the same level: the threshold for each pattern is calculated from the number of activations it contains. The higher the number of activations, the higher the threshold value. This file holds the function that calculates the threshold for each image.
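
A minimal sketch of the idea (the base threshold, scale factor, and activation cutoff are assumed placeholders, not the repository's values):

import numpy as np

def variable_threshold(img, base=5.0, scale=0.02, cutoff=128):
    # more active (white) pixels in the pattern -> higher firing threshold
    n_active = int(np.sum(np.asarray(img) > cutoff))
    return base + scale * n_active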

Lateral Inhibition

In neurobiology, lateral inhibition is the capacity of an excited neuron to reduce the activity of its neighbours. It prevents action potentials from spreading from excited neurons to neighbouring neurons in the lateral direction, creating a contrast in stimulation that allows increased sensory perception. This property is also called winner-takes-all (WTA): the neuron that gets excited first inhibits (lowers the membrane potential of) the other neurons in the same layer.
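
A minimal sketch of WTA inhibition over a layer of neuron objects (the Neuron class and inhibitory potential here are hypothetical, for illustration only):

from dataclasses import dataclass

@dataclass
class Neuron:
    P: float = 0.0  # membrane potential

def lateral_inhibition(layer, winner_idx, p_inhibit=-2.0):
    # the first neuron to fire pulls every other neuron's potential down
    for j, neuron in enumerate(layer):
        if j != winner_idx:
            neuron.P = p_inhibit

layer = [Neuron(P=p) for p in (1.2, 0.4, 0.9)]
lateral_inhibition(layer, winner_idx=0)   # neurons 1 and 2 are inhibited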

Training for 3 class dataset

Here are the results after training an SNN on the MNIST dataset with 3 classes (digits 0-2) and 5 output neurons. We leverage the generative property of the SNN and reconstruct the images from the trained weights connected to each output neuron, to see how well the network has learned each pattern. We also look at the membrane potential versus time plots for each output neuron, to see how the training process made that neuron sensitive to one particular pattern only.

Neuron1

Neuron2

Neuron3

Neuron4

Here we can clearly observe that Neuron 1 has learned pattern '1', Neuron 2 has learned '0', Neuron 3 is noise, and Neuron 4 has learned '2'. Consider the plot of Neuron 1: in the beginning, when the weights were randomly assigned, it fired for all the patterns. As training proceeded, it became specific to pattern '1' only and stayed in an inhibitory state for the rest. Observing Neuron 3, we can conclude that it reacts to all the patterns and can be considered noise. Hence, it is advisable to have about 20% more output neurons than the number of classes.

There is a slight overlap of '2' and '0', which is a common problem in competitive learning. It can be eliminated by proper fine-tuning of the parameters.

Improper training

If we don't use a variable threshold for normalization, we observe some patterns overshadowing others. Here is an example:

Here the same threshold voltage was used for both patterns, which resulted in overlapping. This can be avoided either by choosing a dataset where each image has roughly the same number of activations or by normalizing the number of activations.

Parameters

Building a spiking neural network from scratch is not an easy job. There are several parameters that need to be tuned and taken care of, and the combinations of so many parameters make it harder. Some of the major parameters that play an important role in the dynamics of the network are:

  • Learning Rate
  • Threshold Potential
  • Weight Initialization
  • Number of Spikes Per Sample
  • Range of Weights

I have demonstrated how some of these parameters affect the network and how they should be handled here, under the heading Parameter Analysis.
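
For reference, here is a sketch of what a parameters module for such a network might hold. The names par.pixel_x, par.Prest, par.D, and par.epoch appear in the issues below; apart from D = 0.75, which is quoted in an issue, all values here are assumed placeholders, not the repository's actual file:

# parameters.py (illustrative sketch only)
pixel_x = 28             # input image side length (MNIST)
Prest = 0.0              # resting membrane potential (assumed)
D = 0.75                 # per-step potential drop (value quoted in an issue below)
epoch = 12               # number of training epochs (assumed)
w_max, w_min = 1.0, 0.0  # range of weights (assumed)
eta = 0.02               # learning rate (assumed, hypothetical name)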

Contributions

I was helped on this project by my colleague at the Indian Institute of Technology, Guwahati - Arpan Vyas. He went on to design a hardware-accelerator architecture for this simplified SNN and deploy it on an FPGA, reducing the training time considerably. Here is his GitHub profile.

spiking-neural-network's People

Contributors

arpanvyas, shikhargupta

spiking-neural-network's Issues

Repository structure

Hello @Shikhargupta ,

Thank you for the great README about how the spiking neural network functions, but I don't really understand the structure of the repository. For example, I don't get why there are many different "neuron.py", "spike_train.py", "recep_field.py", etc. files in different folders, and whether they are all necessary.
Is it also possible to run the code out of the box?

Thank you very much,
Gianluca

classification

How can I use these spike counts in Python to classify images?

Classification: recep_field.py

In the convolution part of the code, the boundary for the window needs to be 27, not 15.
#Convolution
for i in range(28):
    for j in range(28):
        summ = 0
        for m in ran:
            for n in ran:
                if (i+m)>=0 and (i+m)<=27 and (j+n)>=0 and (j+n)<=27:
                    summ = summ + w[ox+m][oy+n]*inp[i+m][j+n]/255
        pot[i][j] = summ
return pot

where is "rf" in receptive_field?

Hey man, I have been running your code recently. It's wonderful work. However, I have hit a problem and need your help.
When I run classify.py in the classification folder, I found that the line "from receptive_field import rf" gives an error. I checked the receptive_field file, and there is no "rf" in there. So where is "rf" in receptive_field?
Looking forward to your reply.

ImportError: cannot import name 'reconst'

ImportError Traceback (most recent call last)
in ()
5 import cv2
6 from spike_train import encode
----> 7 from reconstruct import reconst
8 from weight_initialization import learned_weights_x
9 from weight_initialization import learned_weights_o

ImportError: cannot import name 'reconst'

accuracy

How can I find the accuracy of the classification results?

A doubt about the snn network (on the neurodynamic explanation in the code)

Question: in the following code,

for j, x in enumerate(layer2):
    active = []
    if(x.t_rest < t):
        x.P = x.P + np.dot(synapse[j], train[:,t])
        if(x.P > par.Prest):
            x.P -= par.D
    active_pot[j] = x.P

Dear researcher, sorry to bother you!
I have a doubt about the code intercepted above and would like to discuss it with you.

Regarding x.P -= par.D: I still can't figure out the meaning of subtracting par.D. I checked the value of par.D, which is 0.75 in your document, and I have looked up the relevant material, but I still don't clearly understand its meaning.
At your convenience, would you please help me explain this problem? My email is [email protected]. Thank you very much, and I look forward to hearing from you!

IndexError: index 16 is out of bounds for axis 0 with size 16

IndexError Traceback (most recent call last)
in ()
52
53 #Convolving image with receptive field
---> 54 pot = rf(img)
55
56 #Generating spike train

C:\Users\pc\AnacondaProjects\Spiking-Neural-Network-master\training\recep_field.py in rf(inp)
35 for n in ran:
36 if (i+m)>=0 and (i+m)<=par.pixel_x-1 and (j+n)>=0 and (j+n)<=par.pixel_x-1:
---> 37 summ = summ + w[ox+m][oy+n]*inp[i+m][j+n]/255
38 pot[i][j] = summ
39 return pot

IndexError: index 16 is out of bounds for axis 0 with size 16

This happens when I change the image from 1.png to 100.png or 101.png, as in the loop below:

for k in range(par.epoch):
    for i in range(322,323):
        print (i," ",k)
        img = cv2.imread("images/100.png", 0)

classify error

I get different classification results for each training run. (I set n = 10 to recognize digits.)
Here are two different training results:
322 0
winner is 2
322 1
winner is 2
322 2
winner is 2
322 3
winner is 2
322 4
winner is 2
322 5
winner is 2
322 6
winner is 2
322 7
winner is 2
322 8
winner is 2
322 9
winner is 2

322 0
winner is 4
322 1
winner is 4
322 2
winner is 4
322 3
winner is 4
322 4
winner is 4
322 5
winner is 4
322 6
winner is 4
322 7
winner is 4
322 8
winner is 4
322 9
winner is 4

No module named numpy

Yo, I'm new to coding and for some reason it says "no module named numpy" and "unused import statement", and there are also a lot of marked warnings. How do I get this to work? I'm using PyCharm, idk if I should use something else.

calculating firing rate proportional to the membrane potential

In encoding:
    freq = interp(pot[l][m], [-1.069, 2.781], [1, 20])

In multi_layer:
    freq = math.ceil(0.102*pot[l][m] + 52.02)
    freq1 = math.ceil(200/freq)

Hi,
I just started learning about SNNs. I want to know the meaning of these numbers and the formula you use to calculate the firing rate.

how do I input bigger size image ?

how do I input a bigger size image?

I get an error when I put in a bigger image:

IndexError Traceback (most recent call last)
in
27 for n in ran:
28 if (i+m)>=0 and (i+m)<=255 and (j+n)>=0 and (j+n)<=255:
---> 29 summ = summ + w[ox+m][oy+n]*img[i+m][j+n]
30 pot[i][j] = summ
31

IndexError: index 217 is out of bounds for axis 0 with size 217

'module' object is not callable

I'm new to Python.
I'm getting this error when I run classify.py

TypeError Traceback (most recent call last)
in
29 # creating the hidden layer of neurons
30 for i in range(n):
---> 31 neuron(a)
32 layer2.append(a)
33

TypeError: 'module' object is not callable

Please help me solve this error.
