
piewchee commented on July 20, 2024

Hi Bodo,

The SpiNNaker toolbox finally works. I ran 20 test samples and 3 were wrong, hence acc = 85%.
I changed v_thresh = 0.75.

Apparently SpiNNaker's SpikeSourcePoisson() has some issue:
in NEST, neuron.rate sets the rate for SpikeSourcePoisson, and the rate is updated whenever there is a new test digit.

However, SpiNNaker keeps repeating the first test digit's neuron.rate.

The SpiNNaker people advised me to use layer.set(rate=..) instead, so I changed self._poisson_input:

rates = kwargs[str('x_b_l')].flatten()
spiketrains = list(rates * 500)  # I reduced the scaling for SpiNNaker
self.layers[0].set(rate=spiketrains)

Rgds
Del

from snn_toolbox.

rbodo commented on July 20, 2024

Hi Del,

Thanks for your interest in our toolbox! Sadly, the toolbox is not yet interfaced directly with SpiNNaker.

First you need to install one of the pyNN simulators, preferably nest (faster than brian in my experience).

Second, set simulator = nest. Your previous error message should not appear now.
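
For reference, a minimal sketch of the relevant config entries (section and key names as used elsewhere in this thread; the duration value is only an example):

```ini
[simulation]
simulator = nest
duration = 100
dt = 1
```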

As part of the output, the toolbox then provides you with an SNN model in pyNN format which you can use for SpiNNaker. Specifically, after building the SNN, the toolbox saves a text file for each layer containing the connection source- and target-indices and the corresponding weight and delay.
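
As a sketch of how such a saved layer file could be consumed later (the four-column layout is an assumption based on the description above, not the toolbox's documented format):

```python
import io
import numpy as np

def load_connections(f):
    """Load one layer's connection list as saved by the toolbox.

    Assumed format (check the actual files): one row per synapse with
    columns [source_index, target_index, weight, delay].
    """
    data = np.atleast_2d(np.loadtxt(f))
    # Each row becomes a (source, target, weight, delay) tuple, the format
    # accepted e.g. by pyNN's FromListConnector.
    return [(int(s), int(t), float(w), float(d)) for s, t, w, d in data]

# Example with an in-memory stand-in for one of the saved text files:
conns = load_connections(io.StringIO("0 1 0.5 1.0\n2 3 -0.25 1.0"))
```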

Hope this helps.

Best,

Bodo

piewchee commented on July 20, 2024

Hi Bodo,

Thanks for the reply.
When I use nest as the simulator for LeNet5/Keras,
the moving accuracy for the ANN was 100%, 100%,
whereas for the SNN it was 0%, 11.1%.
The SNN total accuracy is 0% for 10 test examples.
May I know what is wrong?

When I run INI,
the SNN total accuracy is 80%.

Rgds
Del

rbodo commented on July 20, 2024

There are a few things to try here. (To give you more specific feedback, I'd need to see the config file.)

  • Increase the simulation duration. Compared to INI simulator, pyNN simulators should be given about 3 times as long to settle on stable firing rates.
  • Make sure parameter normalization is enabled in the config file.
  • Before trying to fix the SNN accuracy with pyNN simulators, you should get good accuracy with INI simulator first. 80% shows that something's wrong in the pipeline (maybe already during parsing of the input model, or because parameter normalization is disabled). Make sure that the accuracy of the model is about 98% after each stage of the pipeline (input model, parsed & normalized model, converted model run with INI sim). Then switch to Nest simulator.
  • Check the output plots. The SNN spikerates should correspond closely to the ANN activations in each feature map. Make sure the spikerates are not too low overall, or on the contrary, saturated. If they are, you can play with the parameter normalization.

piewchee commented on July 20, 2024

Hi Bodo,

I did not change the config file parameters in the keras example, which I git cloned.
In the config.defaults file, online normalization and normalization schedule are false.
In the keras config file, batch_size = 1; will increasing this help?

Rgds
Del

rbodo commented on July 20, 2024

OK. You don't have to worry about online normalization or normalization schedule. If you did not change anything, parameter normalization should be enabled by default. It's the setting

[tools]
normalize = True

The batch size will not make a difference because inference on a batch of samples in parallel is only implemented for INI simulator.

I will run the example later today to see if I can reproduce the behavior.

rbodo commented on July 20, 2024

Did you try with a longer simulation time (e.g. 100 time steps)? This should give you good accuracy on the LeNet example with Nest simulator. (Even though the ANN accuracy is still low - which is due to the model being trained with an old Keras version that seems to be incompatible with the latest one used in the toolbox.)

rbodo commented on July 20, 2024

If you are going to use a new keras model (other than the example provided with the toolbox), see #25

piewchee commented on July 20, 2024

Hi Bodo,

After I changed the simulation time to 100:
INI attained 100% for both ANN and SNN,
total accuracy = 100%.
nest attained 100% for the ANN and 90% for the SNN,
total accuracy = 90%.

Rgds
Del

rbodo commented on July 20, 2024

Ah, with which Keras version?

piewchee commented on July 20, 2024

I am using 2.1.6, following your recommendation.

By the way, I am modifying the scripts to try out interfacing with the SpiNNaker board.
Do I change the scripts (target simulator, config.defaults, etc.) in the snn_toolbox in my working directory
or the snn_toolbox in the python2.7/site-packages directory?
It seems the simulator is reading the scripts from python2.7/site-packages.

Rgds
Del

rbodo commented on July 20, 2024

I am using 2.1.6, following your recommendation.

OK, that explains the high ANN accuracy.

By the way, I am modifying the scripts to try out interfacing with the SpiNNaker board.
Do I change the scripts (target simulator, config.defaults, etc.) in the snn_toolbox in my working directory
or the snn_toolbox in the python2.7/site-packages directory?
It seems the simulator is reading the scripts from python2.7/site-packages.

I guess that depends on how you installed the toolbox.
I would clone or fork the repo and install with python setup.py develop or pip install -e . (note the trailing dot). Then you can just edit the cloned repo files and the changes will also take effect in the site-packages directory.

piewchee commented on July 20, 2024

Alright, thanks!!
Will try and see.

Rgds
del

piewchee commented on July 20, 2024

I did a minor modification to the scripts and tried to run the snntoolbox using simulator = spiNNaker.
It returns the error below; I am not sure what the problem is.
Is it trying to read the SpiNNaker projection label to form the pathname?
Rgds
Del

Detected layer with biases: 00Conv2D_6x24x24
Detected layer with biases: 02Conv2D_16x8x8
Detected layer with biases: 04Conv2D_120x4x4
Detected layer with biases: 06Dense_84
Detected layer with biases: 07Dense_10
Number of operations of ANN: 2346734
Number of neurons: 7614
Number of synapses: 1397800

Saving model to /media/mikat3/92d61e88-74ff-49dd-8e8b-5f1ea9ec3eb7/home/mikat2/snn_toolbox/examples/models/lenet5/keras...
Saving assembly...
Saving connections...
Traceback (most recent call last):
  File "/usr/local/bin/snntoolbox", line 9, in <module>
    load_entry_point('snntoolbox', 'console_scripts', 'snntoolbox')()
  File "/media/mikat3/92d61e88-74ff-49dd-8e8b-5f1ea9ec3eb7/home/mikat2/snn_toolbox/snntoolbox/bin/run.py", line 50, in main
    test_full(config)
  File "/media/mikat3/92d61e88-74ff-49dd-8e8b-5f1ea9ec3eb7/home/mikat2/snn_toolbox/snntoolbox/bin/utils.py", line 116, in test_full
    config.get('paths', 'filename_snn'))
  File "/media/mikat3/92d61e88-74ff-49dd-8e8b-5f1ea9ec3eb7/home/mikat2/snn_toolbox/snntoolbox/simulation/target_simulators/pyNN_target_sim.py", line 174, in save
    self.save_connections(path)
  File "/media/mikat3/92d61e88-74ff-49dd-8e8b-5f1ea9ec3eb7/home/mikat2/snn_toolbox/snntoolbox/simulation/target_simulators/pyNN_target_sim.py", line 356, in save_connections
    filepath = os.path.join(path, projection.label.partition('→')[-1])
AttributeError: 'Projection' object has no attribute 'label'

rbodo commented on July 20, 2024

Yes, in pyNN, each projection has a label, typically something like source_population_name→target_population_name. In line 356, I extract the part of this label behind the arrow and use it as the filename of the stored connections. It seems your new SpiNNaker projection object does not have such a label. That's an easy fix though: either find the equivalent attribute of your object, or invent some label for each connection to store.
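
The label handling described above can be illustrated with plain string operations (the layer names here are just examples):

```python
# pyNN's auto-generated projection label has the form "source→target".
label = "00Conv2D_6x24x24→02Conv2D_16x8x8"

# str.partition returns (before, separator, after); [-1] keeps the target part.
filename = label.partition('→')[-1]
assert filename == "02Conv2D_16x8x8"

# If a label lacks the arrow, partition returns (label, '', ''),
# so [-1] would yield an empty filename.
assert "no_arrow_label".partition('→')[-1] == ""
```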

piewchee commented on July 20, 2024

Hi Bodo,
Wouldn't it be more appropriate if spiNNaker_target_sim followed the same naming convention? Because the population connections will load the weight and delay info according to the name.
If I change the label to a different format, will that mess things up?
Rgds
Del

rbodo commented on July 20, 2024

You are right. I just don't know anything about your SpiNNaker projection object. Is that something you created yourself? The easiest would be to just give it the label attribute and set it with the target_population_name as in the pyNN simulators.

piewchee commented on July 20, 2024

I didn't change much in your scripts. As pyNN_target_sim.py is very similar to pyNN.spinnaker, I just copied and renamed it to spinnaker_target_sim.py and modified utils.py and the restrictions in config.defaults.
I find it strange and am still trying to figure out why your
filepath = os.path.join(path, projection.label.partition('->')[-1]) works,
i.e. it can find the projection label.
I checked your connections function; the projection did not have any label declared (pyNN will auto-generate a label if none is declared).
I mean, why does it not complain that the projection has no label attribute, and is able to extract the 'target population name' using ('->')[-1]?
Your assembly (population) function does have a label named after the layer.
Sorry, I am not a software guy, more of an IC designer, so not very familiar with scripting.

Rgds
Del

rbodo commented on July 20, 2024

I checked your connections function, the projection did not have any label declared (pyNN will auto-generate label if not declared).

Exactly, this is what's happening: I never set the label explicitly, so pyNN autogenerates the label to source_population_name->target_population_name. It seems the pyNN.spinnaker module does not autogenerate a label, hence the error.

I mean why does it not complain that projection has no label attribute and is able to rename it to the 'target population name' using ('->')[-1].

It does not complain because the label is autogenerated. I do not rename the label; I read it, extract the second part (removing the arrow because it is not a standard filename character), and use this part to name the file where I save the connections.

You could create the filename yourself, with something like:

for i, projection in enumerate(self.connections):
    filename = projection.label.partition('→')[-1] if hasattr(projection, 'label') else 'layer_' + str(i)
    filepath = os.path.join(path, filename)
    ...

piewchee commented on July 20, 2024

Hi Bodo,
The cell (neuron) parameters that you set in config.defaults, e.g. v_thresh = 1 mV, v_reset = 0, etc.,
don't change throughout the inference.
May I know how you determine what values to set?

Rgds
Del

rbodo commented on July 20, 2024

v_thresh is set to 1 mV so that the normalized network can fire at most one spike per 1 ms time step. You could set the threshold differently and adapt the normalization accordingly.
v_reset is zero for simplicity and because it corresponds to the way the original ANN runs. If you change the reset value, you are essentially introducing a global bias towards higher or lower firing rates.
Not all of the other parameters may actually be used by the neuron model. Those that are were determined experimentally such that the spike rates fit most closely the activations of the original ANN. We just want a simple integrate-and-fire neuron with instantaneous integration and without any leak or delay.
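
Expressed as config entries, with the values from this thread (the section and key names mirror the pyNN parameter names; the actual defaults file may differ):

```ini
[cell]
v_thresh = 1.0   # mV; at most one spike per 1 ms time step after normalization
v_reset = 0.0    # reset to zero, matching the original ANN's behavior
tau_m = 1000.0   # very long membrane time constant, i.e. effectively no leak
```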

piewchee commented on July 20, 2024

Hi Bodo,
After some time, we finally managed to run the snntoolbox using SpiNNaker as the simulator.
However, I am unable to get the accuracy right.
The neuron parameters are exactly the same as with the nest simulator, and v_thresh = 1 mV.
I ran the LeNet5/Keras model (MNIST) using nest first to get the synaptic weight parameters for all layers.
After that I ran SpiNNaker with the synaptic weights previously saved by nest.

However, I noticed that the number of spikes in each layer is not similar to the nest output;
some of the test samples produce no spike output at all.
E.g., truth_d and guessed_d for 10 samples are as follows:
truth_d = [7, 2, 1, 0, 4, 1, 4, 9, 5, 9]
guessed_d = [-1, -1, 0, 0, -1, 0, 0, 0, 0, -1]

Not sure if you are familiar with SpiNNaker; is there any significant difference between nest and SpiNNaker neurons?

Rgds
Del

rbodo commented on July 20, 2024

Hi Del,

Congrats on getting the network to run on Spinnaker.

If the problem is low output rates, did you try the following?

  • Make sure there is some output at all (this seems to be the case here judging from some of the samples being assigned class 0 instead of -1 which corresponds to no activity)
  • Use longer simulation times and/or finer time resolution to give the neurons time to integrate enough input for producing a spike.
  • Alternatively, lower the threshold. Also, play with other neuron parameters, e.g. make sure the leak is turned off.
  • Use artificial input with extreme parameter settings to enforce some output, for instance a completely white image (all pixels 255), very low threshold, very long simulation time.
  • Inspect the membrane potentials of hidden layer neurons to see that they behave as expected.
  • Try with fewer layers first and add them one by one to pinpoint the problem.

Good luck!

piewchee commented on July 20, 2024

Hi Bodo,

* Use longer simulation times and/or finer time resolution to give the neurons time to integrate enough input for producing a spike.

I have already tried increasing the simulation time to 200 ms or more (or lowering v_thresh). However, SpiNNaker then faces back-pressure congestion because too many spikes occur at the same time, so many packets are dropped.

* Alternatively, lower the threshold. Also, play with other neuron parameters, e.g. make sure the leak is turned off.

I tried this too. I lowered the threshold to 0.85 and looked at the total_spikes_activity.png plot; the number of spikes is higher than with the nest simulator, while if I increase the threshold to 0.9 it is much lower than what nest produces.
Hence the layers are able to generate spikes; it is just that the guessed output is wrong.
In the config_defaults file, leak = False, so the leak is already turned off?

Will try the following methods:

* Use artificial input with extreme parameter settings to enforce some output, for instance a completely white image (all pixels 255), very low threshold, very long simulation time.

* Inspect the membrane potentials of hidden layer neurons to see that they behave as expected.

* Try with fewer layers first and add them one by one to pinpoint the problem.

Rgds
Del

rbodo commented on July 20, 2024

In "config_default" file, "leak=False", so leak is already turn off?

I meant turning off the leak on Spinnaker side. That would mean choosing a very long time constant for the membrane potential decay. The corresponding parameter in the nest simulator is tau_m, which I set to 1000.

piewchee commented on July 20, 2024

Hi Bodo,

SpiNNaker is using the same neuron parameters, hence tau_m = 1000 is used.

I ran a simple synfire chain of 3 neurons connected in a loop, with a SpikeSourceArray as stimulus connected to neuron 0.
The attached figures 3 and 4 are the results from the nest and SpiNNaker outputs. The neuron behaviours look similar, except that NEST generates the spike train one time step after the membrane potential hits v_thresh, whereas SpiNNaker generates it in the same time step that v_mem hits v_thresh.

I then replaced the stimulus with a SpikeSourcePoisson at rate = 50 for the synfire chain, as shown in figures nest50 and spin50 (for the nest and SpiNNaker simulators). I understand that it is a random generator; however, nest consistently generates far fewer spikes than SpiNNaker over a number of simulation runs.

If I am not wrong, the Poisson rate for the input layer is scaled to 100 max?
neuron.rate = rates[neuron_idx] / self.rescale_fac * 1000
Could that be the reason I am unable to get the test accuracy on SpiNNaker that I get with NEST?
[Attached plots: figure_3, figure_4, nest50, spin50]

Rgds
Del

piewchee commented on July 20, 2024

Another observation when I use SpikeSourcePoisson as stimulus:
with the NEST simulator, the synfire chain consistently outputs the same membrane potentials and spikes on every simulation run (i.e. the nest50.png plot is exactly the same no matter how many times I run it, as long as the Poisson rate is the same).
With SpiNNaker, the output plot is different for each simulation run.
In conclusion: the SpiNNaker Poisson source generates more spikes, and each run produces a different number of spikes.

Rgds
Del

rbodo commented on July 20, 2024

For NEST simulator, the synfire chain is consistently output same voltage potentials and spikes for any number of simulations i run (ie the output plot nest50.png
is exactly the same no matter how many times I run as long as the poisson rate is same).
For Spinnaker, the output plot is different for each simulation run.

I guess the reason for this is that we fix the random seed for reproducibility when using NEST as simulator backend.

rbodo commented on July 20, 2024

If I am not wrong, the poisson rate for the input layer is scaled to 100 max?
neuron.rate = rates[neuron_idx] / self.rescale_fac * 1000

The Poisson rate can be set with the SNN toolbox config file:

[input]
input_rate = 1000

If you set the input_rate parameter to 1000, then an input image pixel with maximum value (1 or 255) will fire on average at every time step. Lowering input_rate lowers the Poisson spike rate. So you can tune this parameter to try and get the same behavior between NEST and Spinnaker.
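
A minimal sketch of the scaling this implies, assuming dt = 1 ms and pixel values normalized to [0, 1] (the helper name and exact formula are illustrative, not the toolbox's code):

```python
def input_poisson_rate_hz(pixel, input_rate=1000.0):
    """Map a normalized pixel value in [0, 1] to a Poisson rate in Hz.

    With input_rate = 1000 and a 1 ms time step, a maximum-intensity
    pixel fires on average once per time step; lowering input_rate
    lowers all input spike rates proportionally.
    """
    return pixel * input_rate

assert input_poisson_rate_hz(1.0) == 1000.0  # max pixel: one spike per 1 ms step
assert input_poisson_rate_hz(0.5) == 500.0   # half intensity: every other step
```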

rbodo commented on July 20, 2024

Yes, absolutely, would be interested to learn the reason for this discrepancy.

piewchee commented on July 20, 2024

Hi Bodo,
We are still struggling to solve a few bugs in the SpiNNaker connections.
Just a quick question: I understand that after running the nest simulator, the toolbox saves the connection info for each layer in files named 00Conv, 01Maxpool, etc.
Initially I thought that when running the SNN it would build the layer projections via a FromFileConnector.
It seems that all the weight info is read from the 98.96.h5 file?

In spynnaker_target_sim.py we commented out "save connections" and deleted all the layer projection files (00Conv, 01Max, ...) in the keras directory, but the simulator is still able to obtain the neuron and weight info for each layer to build the SNN.

Rgds
Del

rbodo commented on July 20, 2024

Yes, that's because it does the whole conversion from scratch, using the Keras .h5 model. In some cases (e.g. INI simulator) it is possible to just start with loading the converted SNN, but if I remember correctly that was not feasible for pyNN simulators, so the toolbox runs the whole pipeline from the beginning.

rbodo commented on July 20, 2024

Congratulations on your progress!

Not sure I understand your question / how you generate the poisson rates from NEST. If you stored the original MNIST digits in an npz file, you should be able to simply save a version of this file with the order of the digits permuted. Or just permute the data in python after loading it from disk and before feeding it to your spike generator. Or maybe I'm missing something?

rbodo commented on July 20, 2024

Ah, thanks for clarifying.

Yes, you could do as you suggested. Another option is to use the parameter

[simulation]

sample_idxs_to_test = [0, 5, 2123]

in the config file.

This allows you to specify which sample of the testset you want to run (0, 5, 2123 in the example). I hope this still works for NEST, haven't used it in a long time.

But anyways, the toolbox saves the spiketrains of hidden layers for every sample, if you specify in the config file

[output]
log_vars = {'spiketrains_n_b_l_t', 'input_b_l_t', ...}

The data is stored in the directory

[paths]
log_dir_of_current_run =

or a subfolder of the working directory if the path above is not specified.

If I remember correctly, the spiketrains_n_b_l_t variable does not include the Poisson input spiketrains, but you should be able to easily add that to the existing logging behavior.

piewchee commented on July 20, 2024

Hi Bodo,
I am thinking of trying out other models, e.g. cifar10, on SpiNNaker.
However, when I look into the config file in binarynet, the simulator option is not stated there, i.e. it is using the config.defaults simulator, which is INI.
May I know if I can include simulator = nest, etc. in the config file so that the SNN can be simulated using nest, spynnaker, etc.?

Do I need to install Theano as well? When I look into Theano 1.0, it seems to require older versions of NumPy and SciPy, e.g. NumPy >= 1.9.1, <= 1.12, whereas the NumPy I have is 1.15.4.

Rgds
Del

rbodo commented on July 20, 2024

BinaryNet has binary weights and binary activations. Binary weights would not be a problem because the toolbox ensures the weights are binarized during parsing of the input model, if you specify it in the config file. There is nothing in the nest simulator that ensures binary activations, so I don't know whether it will come out all right. You can try of course. But you should be aware that the BinaryNet is rather large, so it may take a while to run.

An alternative is to use a standard CNN architecture for CIFAR10; keras has a few examples.

https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py

You may want to train it with AveragePooling and without biases, because these are not implemented in Nest.

piewchee commented on July 20, 2024

Hi Bodo,

I have re-trained cifar10_cnn.py, disabling the bias in the CNN layers (use_bias=False) and using average pooling.

I ran the nest simulator; the toolbox killed the process while saving layer 01Conv. Not sure what happened. I set normalize = True and binarize_weights = False.
(I changed the dataset path to "datasets/binaryconnect".)

Anyway, since saving the layer connections has no effect on the simulation, I disabled it.
After that I encountered another problem: the Poisson generator encountered a negative rate.
Am I using the right dataset? The binaryconnect dataset has normalized data, but why are there negative values?
Best Regards
Del

rbodo commented on July 20, 2024

If I remember correctly, the dataset was preprocessed by the authors of BinaryConnect (whitened, mean-subtracted etc), which would explain the negative values.

In the training script, you can save your own version of the cifar10 dataset without mean-subtraction and use it in the simulator.

rbodo commented on July 20, 2024

Hi Del,

since you put so much effort into getting the spinnaker target sim to run, would you be willing to push your script to the toolbox repo? I am sure others (me included) would greatly appreciate it.

rbodo commented on July 20, 2024

In the config file, if I set normalize = True under [tools], then the dataset is required to have x_norm.npz and x_test.npz. If normalize = False, then only x_test.npz is required.

Correct.

If I am not wrong, the normalization is for weight normalization.

Correct.

What settings do I need to change if I run Keras model training to obtain the .h5 file?

I would not use the toolbox (i.e. the config file) to train a Keras model. Instead you can start with the script

snn_toolbox/scripts/ann_architectures/cifar10/cnn_BN.py

or use some of the others in that folder.

rbodo commented on July 20, 2024

Sure. I need to tidy up the script a bit; there are many debug lines that need to be cleaned up. Let me try to settle the cifar10 problems first.

That's great! (By the way, the script does not need to be perfect since it is modular and should not affect other parts of the toolbox, so we can keep working on it without fear of breaking things.)

rbodo commented on July 20, 2024

x_norm.npz is just a subset or a copy of the dataset. If the dataset is as small as cifar10, you can simply copy x_test.npz.

To rescale the dataset, just divide by 255; there is no need to use the ImageDataGenerator except you want more sophisticated preprocessing or your dataset does not fit into memory.

There is an example of how to save the dataset for the toolbox here:

snn_toolbox/scripts/dataset_io/load/cifar10.py

(You can turn off global contrast normalization though.)
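
A sketch of saving a small dataset in that layout with plain NumPy (array shapes are CIFAR-10's; the toolbox's own cifar10.py script mentioned above is the authoritative version):

```python
import os
import tempfile
import numpy as np

out_dir = tempfile.mkdtemp()

# Dummy data standing in for the real CIFAR-10 test set, already rescaled
# to [0, 1] (i.e. divided by 255) and with one-hot labels.
x_test = np.random.rand(10, 32, 32, 3).astype('float32')
y_test = np.eye(10, dtype='float32')[np.random.randint(0, 10, 10)]

np.savez_compressed(os.path.join(out_dir, 'x_test'), x_test)
np.savez_compressed(os.path.join(out_dir, 'y_test'), y_test)
# x_norm can simply be a copy or subset of x_test for a dataset this small:
np.savez_compressed(os.path.join(out_dir, 'x_norm'), x_test[:5])
```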

delpie commented on July 20, 2024

Hi Bodo,

Thanks! I used cifar10.py to save x_test.npz etc.
It gives the same error that I encountered when using my own script to save x_test.npz and y_test.npz.
The error is as follows:

  File "/media/mikat3/92d61e88-74ff-49dd-8e8b-5f1ea9ec3eb7/home/mikat2/snn_toolbox/snntoolbox/simulation/utils.py", line 1069, in reshape_flattened_spiketrains
    spiketrains_flat[k, int(t / self._dt)] = t
IndexError: index 784 is out of bounds for axis 0 with size 784

Rgds
Del

rbodo commented on July 20, 2024

Hi Del,

Please check your input dimensions in the model definition. The cifar10 images are of shape (32, 32, 3). MNIST has np.prod((28, 28, 1)) = 784, so my guess is that you still have the MNIST input dimension in your network.
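
The mismatch can be verified in one line (the 784 in the IndexError is exactly the flattened MNIST input size):

```python
import numpy as np

# Flattened input sizes: MNIST vs. CIFAR-10.
assert int(np.prod((28, 28, 1))) == 784    # matches the "size 784" in the error
assert int(np.prod((32, 32, 3))) == 3072   # what a CIFAR-10 model should expect
```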

rbodo commented on July 20, 2024

What exactly do you mean with "access"? If you want to upload something (issuing a pull request), you have to do this from a fork of the repository (because snntoolbox is part of an organization account, so only members of the organization can make pull requests from a clone of the repo.) See also here:

https://help.github.com/en/articles/creating-a-pull-request-from-a-fork

So I guess you'd fork the toolbox, merge your changes into the fork, and then submit a pull request from the fork.

Or did you mean something else?

rbodo commented on July 20, 2024

I see. Yes, that won't work. Please use a pull request as I described above.

(If it's too much trouble, you can also send me the file and I'll upload it since it's only the one file.)

rbodo commented on July 20, 2024

Ah, sorry for the misunderstanding. Do you have my email? [email protected]

rbodo commented on July 20, 2024

Del asked me to share the following modifications he had to do to run a model converted with the snntoolbox on SpiNNaker.

1. In sPyNNaker/spynnaker/pyNN/models/pynn_population_common.py, inside the get function, add the following statement at line 203:

   if isinstance(parameter_names, unicode):
       parameter_names = str(parameter_names)

2. In sPyNNaker/spynnaker/pyNN/models/neuron/builds/if_cond_exp_base.py, add

   def describe(self):
       return "IF_cond_exp"

3. In sPyNNaker/spynnaker/pyNN/models/spike_source/spike_source_poisson.py, add

   def describe(self):
       return "SpikeSourcePoisson"

4. In SpiNNUtils/spinn_utilities/ranged/abstract_dict.py, inside the __contains__ function, add at line 221:

   if isinstance(key, unicode):
       key = str(key)

5. In sPyNNaker8/spynnaker8/models/projection.py, give the data argument the default value data=None in

   def _save_callback(self, save_file, format, metadata, data=None):
