
Comments (13)

rbodo avatar rbodo commented on July 2, 2024

Conversion of Conv1D layers is not implemented yet, so the parsing function skips over them, resulting in a KeyError when trying to build the parsed model.

If you want to implement 1D convolutions, you can have a look at how the other layer types are implemented in these modules:

snntoolbox.simulation.backends.inisim.temporal_mean_rate_{theano, tensorflow}

from snn_toolbox.

Psyf avatar Psyf commented on July 2, 2024

Thank you for the prompt reply.
I tweaked the Conv1D layers into Conv2D and it seems to be working, with only a small reduction in accuracy.
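For reference, the data-side part of that tweak amounts to appending a singleton width dimension; a minimal numpy sketch (the sample count here is made up, not the one from this model):

```python
import numpy as np

# Hypothetical 1D data: 100 samples, 178 time steps, 1 channel (channels_last).
x_1d = np.zeros((100, 178, 1))

# A Conv1D with kernel_size=k becomes a Conv2D with kernel_size=(k, 1)
# once the data gains a singleton width dimension.
x_2d = x_1d.reshape(x_1d.shape[0], x_1d.shape[1], 1, 1)

print(x_2d.shape)  # (100, 178, 1, 1)
```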

However, there is a different error now.

Initializing INI simulator...

Loading data set from '.npz' files in C:\Users\Psyf\Desktop\git\EEG_Trials\dataset.

Parsing input model...
Skipping layer InputLayer.
Parsing layer Conv2D.
Using activation relu.
Using BatchNorm axis -1.
Absorbing batch-normalization parameters into parameters of previous Conv2D.
Skipping layer BatchNormalization.
Parsing layer Conv2D.
Using activation relu.
Using BatchNorm axis -1.
Absorbing batch-normalization parameters into parameters of previous Conv2D.
Skipping layer BatchNormalization.
Parsing layer Conv2D.
Using activation relu.
Using BatchNorm axis -1.
Absorbing batch-normalization parameters into parameters of previous Conv2D.
Skipping layer BatchNormalization.
Parsing layer Flatten.
Parsing layer Dense.
Using activation linear.
Skipping layer Dropout.
Parsing layer Dense.
Using activation softmax.

Building parsed model...

Compiling parsed model...

Normalizing parameters...
Using 23000 samples for normalization.
INFO: Need ['0.13', '0.04', '0.01', '0.00', '0.00'] GB for layer activations.
May have to reduce size of data set used for normalization.
Calculating activations of layer 00Conv2D_58x1x24 ...
Traceback (most recent call last):
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\conversion\utils.py", line 384, in try_reload_activations
    layer.name + '.npz'))['arr_0']
  File "c:\users\psyf\anaconda3\lib\site-packages\numpy\lib\npyio.py", line 370, in load
    fid = open(file, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Psyf\\Desktop\\git\\EEG_Trials\\log\\gui\\test\\normalization\\activations\\00Conv2D_58x1x24.npz'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\psyf\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\psyf\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Psyf\Anaconda3\Scripts\snntoolbox.exe\__main__.py", line 9, in <module>
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\bin\run.py", line 50, in main
    test_full(config)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\bin\utils.py", line 95, in test_full
    normalize_parameters(parsed_model, config, **normset)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\conversion\utils.py", line 97, in normalize_parameters
    batch_size, activ_dir)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\conversion\utils.py", line 391, in try_reload_activations
    batch_size)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\conversion\utils.py", line 339, in get_activations_layer
    return keras.models.Model(layer_in, layer_out).predict(x, batch_size)
  File "c:\users\psyf\anaconda3\lib\site-packages\keras\engine\training.py", line 1817, in predict
    check_batch_axis=False)
  File "c:\users\psyf\anaconda3\lib\site-packages\keras\engine\training.py", line 113, in _standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking : expected input to have 4 dimensions, but got array with shape (23000, 178, 1)

This is my corresponding code for storing the data. Each sample consists of 178 recordings at regular time intervals, and I have 23000 samples:

import numpy as np
from keras.utils import np_utils

# One-hot encode the labels and save them.
labels = np_utils.to_categorical(np.array(labels))
np.savez_compressed("y_test.npz", labels)

# Add two singleton dimensions so the data fits a Conv2D input: (samples, 178, 1, 1).
cleaned_eeg_data = np.array(cleaned_eeg_data)
cleaned_eeg_data = np.reshape(cleaned_eeg_data, (cleaned_eeg_data.shape[0], cleaned_eeg_data.shape[1], 1, 1))
np.savez_compressed("x_test.npz", cleaned_eeg_data)
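A quick way to catch a shape mismatch like the one in the traceback is to reload the file immediately after writing it; a minimal sketch (using a smaller hypothetical sample count and a scratch filename):

```python
import numpy as np

data = np.zeros((100, 178, 1, 1), dtype=np.float32)
np.savez_compressed("x_test_check.npz", data)

# Reload straight away to confirm what actually landed on disk.
on_disk = np.load("x_test_check.npz")["arr_0"]
print(on_disk.shape)  # (100, 178, 1, 1)
```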


rbodo avatar rbodo commented on July 2, 2024

Hi Saif,

When you changed the test set to fit the requirements of a 2D convolution, did you also change the normalization data x_norm.npz in the same way? It looks like the normalization data set is still missing a dimension. You can simply copy x_test.npz and rename it to x_norm.npz to see if that was the issue.
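If it helps, that copy can be scripted with shutil.copyfile, which preserves the file contents exactly (the np.savez_compressed call below is only a stand-in so the snippet runs standalone; in practice x_test.npz is already on disk):

```python
import shutil
import numpy as np

# Stand-in for the existing test set file.
np.savez_compressed("x_test.npz", np.zeros((10, 178, 1, 1)))

shutil.copyfile("x_test.npz", "x_norm.npz")
print(np.load("x_norm.npz")["arr_0"].shape)  # (10, 178, 1, 1)
```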


Psyf avatar Psyf commented on July 2, 2024

Hi Bodo,

I made a copy of x_test.npz and renamed it to x_norm.npz. However, the error persists.
I'm guessing my reshaping of the array to fit the Conv2D may be a bit weird? What do you think?


rbodo avatar rbodo commented on July 2, 2024

The exact same error? That is strange. Can you print the shape of the norm data, for instance in line 339 of the file conversion/utils.py? Also, which "image_data_format" are you using in keras, "channels_first" or "channels_last"? See this link on where to find it: https://keras.io/backend/#kerasjson-details
Given the way you add the empty dimensions to your data, I think you should be using channels_last.
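For completeness, the setting can also be checked programmatically; this sketch reads ~/.keras/keras.json directly (the documented config location) and falls back to keras' default when the file is absent:

```python
import json
import os

cfg_path = os.path.join(os.path.expanduser("~"), ".keras", "keras.json")
if os.path.isfile(cfg_path):
    with open(cfg_path) as f:
        data_format = json.load(f).get("image_data_format", "channels_last")
else:
    data_format = "channels_last"  # keras default when no config file exists
print(data_format)
```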


Psyf avatar Psyf commented on July 2, 2024
  • The exact same error.
  • I'm using channels_last.
  • The shape of x on line 339 is (23000, 178, 1).


rbodo avatar rbodo commented on July 2, 2024

Hmm, I don't see any place in our pipeline where the normalization data is modified in any way. So my only guess is that the npz file on disk is still wrong for some reason. I'm sure you have checked, but can you confirm that x_norm.npz has all four dimensions? Thanks


Psyf avatar Psyf commented on July 2, 2024

You are right. x_test.npz and x_norm.npz are actually stored as (23000, 178, 1) on disk, even though the data is (23000, 178, 1, 1) in the program.

cleaned_eeg_data = np.array(cleaned_eeg_data)
cleaned_eeg_data = np.reshape(cleaned_eeg_data, (cleaned_eeg_data.shape[0], cleaned_eeg_data.shape[1], 1, 1))
np.save("x_test.npz", cleaned_eeg_data)  # tried with np.save, np.savez and np.savez_compressed
print(np.shape(cleaned_eeg_data))        # prints (23000, 178, 1, 1)

Seems like some weird numpy problem (or optimization), or am I doing something very stupid? :/
Have you faced this issue with numpy before?
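One numpy pitfall worth ruling out here (it may or may not be what happened in this thread): np.save appends a ".npy" extension when the filename does not already end in one, so np.save("x_test.npz", ...) actually writes x_test.npz.npy and leaves any older x_test.npz untouched. A sketch with a scratch filename:

```python
import os
import numpy as np

arr = np.zeros((5, 178, 1, 1))

# np.save appends ".npy", so this creates "stale_demo.npz.npy",
# not "stale_demo.npz".
np.save("stale_demo.npz", arr)
print(os.path.exists("stale_demo.npz.npy"))  # True

# np.savez_compressed writes to the name exactly as given.
np.savez_compressed("stale_demo.npz", arr)
print(np.load("stale_demo.npz")["arr_0"].shape)  # (5, 178, 1, 1)
```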


rbodo avatar rbodo commented on July 2, 2024

No, never seen this before. Very weird. You could experiment with np.newaxis or np.expand_dims; maybe that makes it persistent. Let me know if you find out more (on SO, perhaps)!


Psyf avatar Psyf commented on July 2, 2024

Please disregard, I was being a complete idiot. I wasn't writing the files to dataset\x_test.npz, so they were outdated from the Conv1D days. Sorry.


Psyf avatar Psyf commented on July 2, 2024

That being said, I ran into another error. My guess is that the activations are too low after Batch 2, triggering a divide-by-zero. Error log:

λ snntoolbox snntoolbox_config.txt -t
Using TensorFlow backend.
2018-10-06 00:40:49.541800: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-06 00:40:50.239193: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.66GiB
2018-10-06 00:40:50.255877: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-10-06 00:40:50.954004: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-10-06 00:40:50.964208: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929]      0
2018-10-06 00:40:50.970532: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0:   N
2018-10-06 00:40:50.977038: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1430 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)
Initializing INI simulator...

Loading scale factors from disk instead of recalculating.
Loading data set from '.npz' files in C:\Users\Psyf\Desktop\git\EEG_Trials\dataset.

Parsing input model...
Skipping layer InputLayer.
Parsing layer Conv2D.
Using activation relu.
Using BatchNorm axis -1.
Absorbing batch-normalization parameters into parameters of previous Conv2D.
Skipping layer BatchNormalization.
Parsing layer Conv2D.
Using activation relu.
Using BatchNorm axis -1.
Absorbing batch-normalization parameters into parameters of previous Conv2D.
Skipping layer BatchNormalization.
Parsing layer Conv2D.
Using activation relu.
Using BatchNorm axis -1.
Absorbing batch-normalization parameters into parameters of previous Conv2D.
Skipping layer BatchNormalization.
Parsing layer Flatten.
Parsing layer Dense.
Using activation linear.
Skipping layer Dropout.
Parsing layer Dense.
Using activation softmax.

Building parsed model...

Compiling parsed model...

Normalizing parameters...
Using scale factor 1.00 for softmax layer.
Plotting distributions of weights and activations before and after normalizing...
Loading activations stored during a previous run.
Loading activations stored during a previous run.
Loading activations stored during a previous run.
Loading activations stored during a previous run.
Loading activations stored during a previous run.

Building spiking model...
Building layer: 00Conv2D_58x1x24
Building layer: 01Conv2D_28x1x16
Building layer: 02Conv2D_13x1x8
Building layer: 03Flatten_104
Building layer: 04Dense_20
Building layer: 05Dense_2
Compiling spiking model...

Detected layer with biases: 00Conv2D_58x1x24
Detected layer with biases: 01Conv2D_28x1x16
Detected layer with biases: 02Conv2D_13x1x8
Detected layer with biases: 04Dense_20
Detected layer with biases: 05Dense_2
Number of operations of ANN: 94622
Number of neurons: 1966
Number of synapses: 4138

Saving model to C:\Users\Psyf\Desktop\git\EEG_Trials\1-P-CNN_INI.h5...


Starting new simulation...

Current accuracy of batch:
100.00%_100.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_100.00%_100.00%_100.00%_
Batch 1 of 10 completed (10.0%)
Moving accuracy of SNN (top-1, top-1): 100.00%, 100.00%.
Moving accuracy of ANN (top-1, top-1): 100.00%, 100.00%.

Calculating activations...

Average spike rate: 0.037515547263681594 spikes per simulation time step.
Saving plots of one sample to C:\Users\Psyf\Desktop\git\EEG_Trials\log\gui\test...

Plotting layer 00Conv2D_58x1x24
Plotting layer 01Conv2D_28x1x16
Plotting layer 02Conv2D_13x1x8
Plotting layer 04Dense_20
Plotting layer 05Dense_2
Plotting batch run statistics...
Done.


Starting new simulation...

Current accuracy of batch:
0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_0.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_100.00%_
Batch 2 of 10 completed (20.0%)
Moving accuracy of SNN (top-1, top-1): 100.00%, 100.00%.
Moving accuracy of ANN (top-1, top-1): 100.00%, 100.00%.

Calculating activations...

Average spike rate: 0.0018501243781094527 spikes per simulation time step.
Saving plots of one sample to C:\Users\Psyf\Desktop\git\EEG_Trials\log\gui\test...

Plotting layer 00Conv2D_58x1x24
Plotting layer 01Conv2D_28x1x16
c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\simulation\plotting.py:373: RuntimeWarning: invalid value encountered in true_divide
  activations_norm = activations / np.max(activations)
Traceback (most recent call last):
  File "c:\users\psyf\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\psyf\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Psyf\Anaconda3\Scripts\snntoolbox.exe\__main__.py", line 9, in <module>
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\bin\run.py", line 50, in main
    test_full(config)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\bin\utils.py", line 129, in test_full
    results = run(spiking_model, **testset)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\bin\utils.py", line 202, in wrapper
    results.append(run_single(snn, **testset))
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\bin\utils.py", line 126, in run
    return snn.run(**test_set)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\simulation\utils.py", line 722, in run
    self.data_format)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\simulation\plotting.py", line 88, in output_graphs
    plot_layer_summaries(plot_vars, config, path, data_format)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\simulation\plotting.py", line 189, in plot_layer_summaries
    name, newpath, data_format)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\simulation\plotting.py", line 377, in plot_activations_minus_rates
    str('Activations_minus_Spikerates'), path, (-1, 1), data_format)
  File "c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\simulation\plotting.py", line 305, in plot_layer_activity
    plt.savefig(os.path.join(path, filename), bbox_inches='tight')
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\pyplot.py", line 697, in savefig
    res = fig.savefig(*args, **kwargs)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\figure.py", line 1814, in savefig
    self.canvas.print_figure(fname, **kwargs)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\backends\backend_qt5agg.py", line 172, in print_figure
    super(FigureCanvasQTAggBase, self).print_figure(*args, **kwargs)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\backend_bases.py", line 2208, in print_figure
    **kwargs)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py", line 507, in print_png
    FigureCanvasAgg.draw(self)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py", line 430, in draw
    self.figure.draw(self.renderer)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\artist.py", line 55, in draw_wrapper
    return draw(artist, renderer, *args, **kwargs)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\figure.py", line 1295, in draw
    renderer, self, artists, self.suppressComposite)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\image.py", line 138, in _draw_list_compositing_images
    a.draw(renderer)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\artist.py", line 55, in draw_wrapper
    return draw(artist, renderer, *args, **kwargs)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\axes\_base.py", line 2399, in draw
    mimage._draw_list_compositing_images(renderer, self, artists)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\image.py", line 138, in _draw_list_compositing_images
    a.draw(renderer)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\artist.py", line 55, in draw_wrapper
    return draw(artist, renderer, *args, **kwargs)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\image.py", line 548, in draw
    renderer, renderer.get_image_magnification())
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\image.py", line 774, in make_image
    unsampled=unsampled)
  File "c:\users\psyf\anaconda3\lib\site-packages\matplotlib\image.py", line 370, in _make_image
    a_min = np.ma.min(A).astype(scaled_dtype)
  File "c:\users\psyf\anaconda3\lib\site-packages\numpy\ma\core.py", line 3205, in astype
    if self._fill_value is not None:
AttributeError: 'MaskedConstant' object has no attribute '_fill_value'

I can suppress the warning by adding np.seterr(divide='ignore', invalid='ignore') after the numpy import in c:\users\psyf\anaconda3\lib\site-packages\snntoolbox\simulation\plotting.py, but that does nothing except remove the first two lines of the traceback.

Is my guess correct, and how would you work around it?

Thanking you,
Saif


rbodo avatar rbodo commented on July 2, 2024

OK ... Did you look at the plots? The activation plots should have been created fine right before the error. If the activations are all zero for a layer, then that's indeed the issue. The solution will be to add a test against zero activations in line 373 of plotting.py. I do such a test for the spike rates in the line below; we just need to do the same for the activations. Let me know if that's it.
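A minimal sketch of such a guard (the safe_normalize helper below is hypothetical, not the toolbox's actual code in plotting.py):

```python
import numpy as np

def safe_normalize(activations):
    """Scale activations to [0, 1]; return zeros if the layer never fired."""
    max_val = np.max(activations)
    if max_val <= 0:
        return np.zeros_like(activations)
    return activations / max_val

print(safe_normalize(np.zeros((2, 3))).max())               # 0.0
print(safe_normalize(np.array([1.0, 2.0, 4.0])).tolist())   # [0.25, 0.5, 1.0]
```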


Psyf avatar Psyf commented on July 2, 2024

That's it 👍
The simulation runs without errors, though I do get a few numpy RuntimeWarnings in the later batches.

Plotting batch run statistics...
c:\users\psyf\anaconda3\lib\site-packages\numpy\lib\function_base.py:3159: RuntimeWarning: Degrees of freedom <= 0 for slice
  c = cov(x, y, rowvar)
c:\users\psyf\anaconda3\lib\site-packages\numpy\lib\function_base.py:3093: RuntimeWarning: divide by zero encountered in double_scalars
  c *= 1. / np.float64(fact)
c:\users\psyf\anaconda3\lib\site-packages\numpy\lib\function_base.py:3093: RuntimeWarning: invalid value encountered in multiply
  c *= 1. / np.float64(fact)

The accuracy of the SNN seems to be around 67%. I shall explore SNNs more to understand what the output plots are telling me.

Thanks for all the help @rbodo

