cea-list / n2d2

N2D2 is an open source CAD framework for Deep Neural Network simulation and full DNN-based applications building.

License: Other

CMake 0.52% C++ 16.61% Makefile 0.03% Shell 0.02% C 80.10% Cuda 1.06% Python 1.50% Dockerfile 0.01% Smarty 0.01% Tcl 0.07% Assembly 0.08% Cython 0.01%
deep-learning deep-learning-library deep-neural-networks spike-inference artificial-intelligence machine-learning neural-network

n2d2's People

Contributors

alexchariot, benoittain-cea, cmoineau, davidbriand-cea, e-dupuis, fabriceauz, gasparq, johannesthiele-cea, kucherinna, lbillod, maxence-naud, olivierbichler, olivierbichler-cea, thibaultallenet-cea, thibaut-cea, vtemplier


n2d2's Issues

Can't download kitti_road dataset

When I try to download the KITTI_road dataset with:
./tools/install_stimuli_gui.py
(after selecting the KITTI_road dataset), I get this error:

http://kitti.is.tue.mpg.de/kitti/data_road.zip
KITTI
Traceback (most recent call last):
File "./tools/install_stimuli_gui.py", line 140, in load_database
+ urllib.quote(fileName), target, progress)
File "/usr/lib64/python2.7/urllib.py", line 94, in urlretrieve
return _urlopener.retrieve(url, filename, reporthook, data)
File "/usr/lib64/python2.7/urllib.py", line 240, in retrieve
fp = self.open(url, data)
File "/usr/lib64/python2.7/urllib.py", line 208, in open
return getattr(self, name)(url)
File "/usr/lib64/python2.7/urllib.py", line 345, in open_http
h.endheaders(data)
File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
self._send_output(message_body)
File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
self.send(msg)
File "/usr/lib64/python2.7/httplib.py", line 826, in send
self.connect()
File "/usr/lib64/python2.7/httplib.py", line 807, in connect
self.timeout, self.source_address)
File "/usr/lib64/python2.7/socket.py", line 553, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
IOError: [Errno socket error] [Errno -2] Name or service not known
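For what it's worth, "Name or service not known" means DNS resolution failed: the host kitti.is.tue.mpg.de could not be resolved, so the script never reached the download itself. A minimal Python 3 sketch of the failing call with that case handled (the helper name fetch_stimuli is hypothetical, not part of the script):

```python
from urllib.error import URLError
from urllib.parse import quote
from urllib.request import urlretrieve

def fetch_stimuli(base_url, file_name, target):
    """Download base_url + percent-encoded file_name to target.

    Returns True on success, False when the host cannot be reached
    (the "Name or service not known" case in the traceback above).
    """
    try:
        urlretrieve(base_url + quote(file_name), target)
        return True
    except (URLError, OSError):
        return False
```

If the host really is gone, the only fix is to point the script at a working mirror of data_road.zip.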

Will N2D2 have a Python API in the future?

Python is a language that has been gaining a lot of interest recently. Most neural network frameworks (e.g. Caffe and TensorFlow) support Python, which makes them a lot easier to use than C++.

Are you planning on adding a Python API for N2D2 in the future?

Thanks for your answer

Building without CUDA?

Hi
I'm trying to build N2D2 in CPU-only mode but can't deactivate the automatic CUDA build.
Is there a way to build without CUDA?
Thanks.

EDIT: OK, never mind, there's a hidden USE_CUDA parameter.
You can build without CUDA by using

cmake .. -DUSE_CUDA=0

@olivierbichler-cea
It would be nice if the documentation mentioned that this is possible, and maybe even un-hid this parameter; I think many users would be glad to compile N2D2 for CPU-only use.

SNN inference

Hello,

When I try to run inference in the spiking domain with the LeNet example in "models/" (first learning the model in the frame domain with "LeNet.ini", then testing it in the spike domain with "LeNet_spike.ini"), I get the following error:

Final recognition rate: 99.13% (error rate: 0.87%)
Sensitivity: 99.12% / Specificity: 99.90% / Precision: 99.12%
Accuracy: 99.83% / F1-score: 99.12% / Informedness: 99.02%

Importing weights from directory 'weights_range_normalized'.
Time elapsed: 5.12 s
Error: Could not open synaptic file: weights_range_normalized/conv1_weights.syntxt

When I checked the folder where the simulation runs, I saw that there is no directory called "weights_range_normalized". In previous versions of N2D2, a folder with that name existed. I think some recent changes stopped this directory from being generated, which causes this error.

Thank you for helping,
Nassim.

How to use a custom training dataset in N2D2 ?

Hello !

I have read in the N2D2 User Guide that it is possible to use a custom or handmade database for NN training. But I can't find further information in the documentation on how to integrate those custom databases, or how to use them for training.
Do you have further information?
Thank you in advance,

Best regards,

Edgar LEMAIRE
(Grenoble INP - Phelma & UGA, Internship @ LEAT (CNRS) w/ Spintec (CEA))
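For reference, the User Guide describes database drivers configured through the INI file. A minimal sketch of a [database] section using a directory-based driver — everything below is a guess at the shape, not verified against the current manual, so check the parameter names there:

```ini
; Hypothetical sketch -- verify names against the N2D2 User Guide.
[database]
Type=DIR_Database
DataPath=${N2D2_DATA}/my_dataset  ; one sub-directory per class label
Learn=0.8                         ; fraction of stimuli used for learning
Validation=0.1                    ; fraction for validation (the rest is test)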

Error: Could not open INI file

Hi,
Can anyone please help me resolve the following error?

prachi@prachi-V530-15ICB:~/N2D2$ ./build/bin/n2d2 .models/mnist24_16c4s2_24c5s2_150_10.ini -learn 600000 -log 10000
Option -log: number of steps between logs [10000]
Option -learn: number of backprop learning steps [600000]
Loading network configuration file .models/mnist24_16c4s2_24c5s2_150_10.ini
Notice: Unused section in INI file
Time elapsed: 0.000194836 s
Error: Could not open INI file: .models/mnist24_16c4s2_24c5s2_150_10.ini

The build process is not clear

In the manual, the instructions for compilation use CMake, but there is also a Makefile in the main directory. Do we get the same results from both methods?
Moreover, in both cases there is no install target. How can we finish the installation after compilation?

Build on Windows with MinGW

Hi,

I tried to build on Windows using MinGW. I would like to avoid Visual Studio.

CMake configure and generate is ok, but an error occurs when building:

$ mingw32-make
[  0%] Building CXX object CMakeFiles/N2D2-OS.dir/src/Activation/LogisticActivation_Frame.cpp.obj
N2D2\src\Activation\LogisticActivation_Frame.cpp:1:0: warning: -fPIC ignored for target (all code is position independent)
 /*
 ^
In file included from N2D2/include/Environment.hpp:31:0,
                 from N2D2/include/Activation/Activation.hpp:24,
                 from N2D2/include/Activation/LogisticActivation.hpp:24,
                 from N2D2/include/Activation/LogisticActivation_Frame.hpp:24,
                 from N2D2\src\Activation\LogisticActivation_Frame.cpp:21:
N2D2/include/Network.hpp:43:22: fatal error: execinfo.h: No such file or directory
compilation terminated.
CMakeFiles\N2D2-OS.dir\build.make:62: recipe for target 'CMakeFiles/N2D2-OS.dir/src/Activation/LogisticActivation_Frame.cpp.obj' failed
mingw32-make[2]: *** [CMakeFiles/N2D2-OS.dir/src/Activation/LogisticActivation_Frame.cpp.obj] Error 1
CMakeFiles\Makefile2:66: recipe for target 'CMakeFiles/N2D2-OS.dir/all' failed
mingw32-make[1]: *** [CMakeFiles/N2D2-OS.dir/all] Error 2
makefile:128: recipe for target 'all' failed
mingw32-make: *** [all] Error 2

Would you have any suggestions to solve it? Or am I stuck with Visual Studio?

How to use the Block Design generated by C_HLS export?

Hello

After generating the HLS export and running vivado_hls on it, I'm trying to use the generated IP in Vivado, but the block design has a huge number of inputs (in_data_X_Y_q, in_data_X_Y_we, in_data_X_Y_ce, in_data_X_Y_address, in_data_X_Y_d, etc.) and I'm not really sure how to connect them. What is the recommended way to do it?

Thank you

Cannot run aer_viewer

Hello,
I tried to run aer_viewer to convert a video file (.mp4) to a spike encoding file (.dat), but I got the same error for different videos. Do you know where it comes from? Thanks

./aer_viewer /home/vankhoa/IMRA_le/2_Dataset/Video/CarRacing.mp4
[loadVideo] 105960 events @ frame #6
[loadVideo] 213185 events @ frame #12
[... similar [loadVideo] progress lines elided ...]
[loadVideo] 47982576 events @ frame #906
[loadVideo] *** 47982576 events generated for 907 frames (36.28 s)
Floating point exception [floating point invalid operation]
backtrace() returned 22 addresses
./aer_viewer[0x600945]
/lib/x86_64-linux-gnu/libc.so.6(+0x354b0)[0x7fdbd323f4b0]
/usr/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5(_ZN14QXcbConnection16touchDeviceForIdEi+0x328)[0x7fdbbe58af08]
/usr/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5(_ZN14QXcbConnection15xi2SetupDevicesEv+0x118d)[0x7fdbbe58c44d]
/usr/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5(_ZN14QXcbConnection17initializeXInput2Ev+0x126)[0x7fdbbe58ca86]
/usr/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5(_ZN14QXcbConnectionC1EP19QXcbNativeInterfacebjPKc+0x301)[0x7fdbbe566891]
/usr/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5(_ZN15QXcbIntegrationC1ERK11QStringListRiPPc+0x2ad)[0x7fdbbe569bad]
/usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/libqxcb.so(+0x13ad)[0x7fdbbe63a3ad]
/usr/lib/x86_64-linux-gnu/libQt5Gui.so.5(ZN27QPlatformIntegrationFactory6createERK7QStringRK11QStringListRiPPcS2+0xa2)[0x7fdbd23d2d92]
/usr/lib/x86_64-linux-gnu/libQt5Gui.so.5(_ZN22QGuiApplicationPrivate25createPlatformIntegrationEv+0x4f4)[0x7fdbd23defc4]
/usr/lib/x86_64-linux-gnu/libQt5Gui.so.5(_ZN22QGuiApplicationPrivate21createEventDispatcherEv+0x2d)[0x7fdbd23dfecd]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZN16QCoreApplication4initEv+0x4f6)[0x7fdbd20a77e6]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZN16QCoreApplicationC1ER23QCoreApplicationPrivate+0x26)[0x7fdbd20a7856]
/usr/lib/x86_64-linux-gnu/libQt5Gui.so.5(_ZN15QGuiApplicationC1ER22QGuiApplicationPrivate+0x9)[0x7fdbd23e1cc9]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN12QApplicationC2ERiPPci+0x3d)[0x7fdbd2998bcd]
/usr/local/lib/libopencv_highgui.so.3.2(+0x15c60)[0x7fdbd523ac60]
/usr/local/lib/libopencv_highgui.so.3.2(+0x18462)[0x7fdbd523d462]
/usr/local/lib/libopencv_highgui.so.3.2(cvNamedWindow+0x120)[0x7fdbd524a500]
./aer_viewer[0x919510]
./aer_viewer[0x439051]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fdbd322a830]
./aer_viewer[0x439799]

Test case "CsvDataFile" FAILED

Hi

I am building N2D2 without CUDA on Ubuntu 14.04.
The problem is shown below:

[ 97%] Generating ../bin/tests/class_CsvDataFile.passed
Test case "CsvDataFile" FAILED with 0 error(s), 2 failure(s), 10 success(es):
Test "read" FAILED (5/6):
Failure: "mat.at(i, 1) != 0.5 + i * 0.1 with values 0.6 != 0.6" line 46 in file /home/hamid/N2D2/N2D2-master/tests/DataFile/class_CsvDataFile.cpp
Test "write" FAILED (5/6):
Failure: "mat.at(i, 1) != 0.5 + i * 0.1 with values 0.6 != 0.6" line 70 in file /home/hamid/N2D2/N2D2-master/tests/DataFile/class_CsvDataFile.cpp

Test summary: FAIL
Number of test cases ran: 1
Tests that succeeded: 10
Tests with errors: 0
Tests that failed: 2
Tests skipped: 0
make[2]: *** [bin/tests/class_CsvDataFile.passed] Error 1
make[1]: *** [tests/CMakeFiles/run_class_CsvDataFile.dir/all] Error 2
make: *** [all] Error 2
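A note on the "0.6 != 0.6" failures: one plausible cause (an assumption, not verified against the test code) is a 32-bit/64-bit float mismatch. If the matrix stores 32-bit floats while the expected value 0.5 + i * 0.1 is computed in double precision, both sides print as 0.6 yet an exact comparison fails:

```python
import struct

def as_float32(x):
    """Round a 64-bit Python float to 32-bit precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

expected = 0.5 + 1 * 0.1   # double-precision 0.6
stored = as_float32(0.6)   # value read back from 32-bit storage: 0.60000002...

print(stored == expected)              # False: exact comparison fails
print(abs(stored - expected) < 1e-6)   # True: tolerance comparison passes
```

Comparing with a small tolerance instead of exact equality would make such a test robust to this effect.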

how to use N2D2 to run on a new AI soc?

I have used N2D2 on an Nvidia Tesla P4 GPU, and now I want to run it on a new AI SoC in order to compare their performance. The SoC is also a PCIe card.
How can I use N2D2 to run on it?

Missing Modules for C_HLS Export

Hi. I'm trying to use the tool to export the trained model to C_HLS code as shown in the manual. However, the tool prompts me that additional modules are needed for the export. I wonder if the module for the C_HLS export is missing? It seems the only export available right now is CPP_cuDNN. Thanks.

header guard inconsistency

Hi,

I believe there is a small problem with a header guard mismatch in include/Generator/LSTMCellGenerator.hpp. Lines 22-23 are:

#ifndef N2D2_LSTMCELLGENERATOR_H
#define N2D2_LSTMVCELLGENERATOR_H

The same name should be used on both lines.

Batch Normalization

Hello, I am having a little trouble understanding how to use this kind of layer, and I haven't found any examples.

What I want to do is to create the following block: Conv (without activation) -> BatchNorm -> ReLU .

As I understand it, the output size after normalization should remain unchanged, so I'm not sure about the NbOutputs option. It would help to see an example of an .ini file using this kind of combination.

Thank you!
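Your reading is consistent with how batch normalization usually works: the output shape is unchanged, so NbOutputs of the BatchNorm layer would match the convolution's. A hypothetical INI sketch of the Conv -> BatchNorm -> ReLU block (parameter names follow the manual's general conventions but are not verified):

```ini
; Hypothetical sketch -- check parameter names against the N2D2 manual.
[conv1]
Input=sp
Type=Conv
KernelWidth=3
KernelHeight=3
NbOutputs=32
ActivationFunction=Linear      ; no activation on the convolution itself

[bn1]
Input=conv1
Type=BatchNorm
NbOutputs=32                   ; same as conv1: normalization keeps the shape
ActivationFunction=Rectifier   ; ReLU applied after normalization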

Cannot run Spike example

Hello,
I tried to run the example to convert CNN to SNN (LeNet), by these two commands:
../build/bin/n2d2 "../models/LeNet.ini" -learn 6000000 -log 100000
../build/bin/n2d2 "../models/LeNet_Spike.ini" -test

And I ran into this error:
Option -test: perform testing
Loading network configuration file ../models/LeNet_Spike.ini
Layer: conv1 [Conv(Transcode_CUDA)]
Notice: Could not open configuration file: conv1.cfg
Notice: Unused section fc2.Target in INI file
Notice: Unused section fc2.config in INI file
Time elapsed: 0.882318 s
Error: Cell_Spike::addInput(): mapping length must be equal to the number of outputs

Do you know where it comes from?
Thanks

Error in Build process

During the build process I'm getting the following error:

/N2D2-master/src/DeepNet.cpp: In member function ‘void N2D2::DeepNet::cTicks(N2D2::Time_T, N2D2::Time_T, N2D2::Time_T, bool)’:
/home/lonewalker/NN/N2D2-master/src/DeepNet.cpp:2068:13: error: variable ‘itBegin’ set but not used [-Werror=unused-but-set-variable]
itBegin = mLayers.begin(),
^~~~~~~
/home/lonewalker/NN/N2D2-master/src/DeepNet.cpp: In member function ‘void N2D2::DeepNet::initializeCMonitors(unsigned int)’:
/home/lonewalker/NN/N2D2-master/src/DeepNet.cpp:2157:5: error: variable ‘itBegin’ set but not used [-Werror=unused-but-set-variable]
itBegin = mLayers.begin(),
^~~~~~~
cc1plus: all warnings being treated as errors
CMakeFiles/N2D2-OS.dir/build.make:3302: recipe for target 'CMakeFiles/N2D2-OS.dir/src/DeepNet.cpp.o' failed
make[2]: *** [CMakeFiles/N2D2-OS.dir/src/DeepNet.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/N2D2-OS.dir/all' failed
make[1]: *** [CMakeFiles/N2D2-OS.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

Import CSV

The process for importing one's own database is not very clear.
From my understanding, each input has to be saved in a separate file.
Is that right?

Is it possible to import a database, where the inputs are saved in one file,
and the desired labels are saved in a separate file?
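
If the importer does turn out to require one file per sample, a small conversion script can bridge the two-file layout. This is a hypothetical sketch: the delimiter, paths, and directory-per-label layout are assumptions for illustration, not N2D2's documented CSV format.

```python
import csv
import os

def split_dataset(inputs_path, labels_path, out_dir):
    """Split a combined inputs file plus a separate labels file into
    one CSV per sample, grouped into one directory per label
    (hypothetical layout for a directory-based database importer)."""
    with open(inputs_path) as f_in, open(labels_path) as f_lab:
        inputs = list(csv.reader(f_in, delimiter=";"))
        labels = [line.strip() for line in f_lab]
    assert len(inputs) == len(labels), "inputs and labels must align line by line"
    for i, (row, label) in enumerate(zip(inputs, labels)):
        label_dir = os.path.join(out_dir, label)
        os.makedirs(label_dir, exist_ok=True)
        with open(os.path.join(label_dir, "sample_%06d.csv" % i), "w", newline="") as f:
            csv.writer(f, delimiter=";").writerow(row)
```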

weights range exceed [-1,1]

Hello,
When I train a network with Frame_CUDA and then generate the weight_range_normalize folder for testing with Spike, the weights sometimes exceed the range [-1,1] and I cannot use them for spike inference. Do you know anything about this?
Thanks
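
As a side note, a per-layer rescaling can always bring weights back into [-1, 1]. This is a generic sketch, not N2D2's actual normalization; the caveat is that the neuron firing threshold must be rescaled by the same factor to keep the dynamics equivalent.

```python
def normalize_weights(weights):
    """Rescale a layer's weights into [-1, 1] by the maximum absolute value.

    Returns the rescaled weights and the scale factor; for spike inference,
    the firing threshold would also have to be divided by the same factor
    (assumption: a purely linear/ReLU-like layer, where this is exact).
    """
    scale = max(abs(w) for w in weights) or 1.0  # avoid dividing by zero
    return [w / scale for w in weights], scale
```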

Example INI file for CIFAR-10 Dataset

Hello,

I am currently trying to build a network for CIFAR-10 classification. I have tried several different topologies, but I only reach 66.7% on the test set (91% on the learning set) after 48 hours of training. Those topologies were inspired by TensorFlow descriptions I found on the net, but I am quite sure I am missing something, as accuracy on the test set should be much higher.

Consequently, I was wondering if you had an example INI file for CIFAR-10 (I didn't find any in the models directory) which provides satisfying results.

Thanks in advance,

Regards,

Edgar Lemaire
(PhD student @ Thales Research Technology and LEAT lab, CNRS & Université Cote d'Azur)

building DeconvCell_Frame_CUDA.cpp.o failed on ubuntu1604

Did I miss something?
Here is the relevant part of the log:

[ 10%] Building CXX object CMakeFiles/N2D2-OS.dir/src/Cell/DeconvCell_Frame_CUDA.cpp.o
In file included from /home/fred/dev/n2d2/N2D2/include/CudaContext.hpp:27:0,
                 from /home/fred/dev/n2d2/N2D2/include/Activation/LogisticActivation_Frame_CUDA.hpp:26,
                 from /home/fred/dev/n2d2/N2D2/include/Cell/Cell_Frame_CUDA.hpp:24,
                 from /home/fred/dev/n2d2/N2D2/include/Cell/DeconvCell_Frame_CUDA.hpp:25,
                 from /home/fred/dev/n2d2/N2D2/src/Cell/DeconvCell_Frame_CUDA.cpp:23:
/home/fred/dev/n2d2/N2D2/src/Cell/DeconvCell_Frame_CUDA.cpp: In member function ‘virtual void N2D2::DeconvCell_Frame_CUDA::initialize()’:
/home/fred/dev/n2d2/N2D2/src/Cell/DeconvCell_Frame_CUDA.cpp:78:64: error: too few arguments to function ‘cudnnStatus_t cudnnSetConvolution2dDescriptor(cudnnConvolutionDescriptor_t, int, int, int, int, int, int, cudnnConvolutionMode_t, cudnnDataType_t)’
                                         CUDNN_CROSS_CORRELATION));

gcc 5.4.0, linux 4.4.0-66, cuda 8.0, libcudnn 6.0.

Cheers.

Downloading MNIST dataset

Hello, I did as you said in #52. Now when I try to run install_stimuli_gui.py, I get an MNIST file of 66 MB, whereas it should be around 110 MB (as per the message in the GUI). Maybe this is what causes #51. Can anyone please help me with this?
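
One way to check whether the download and decompression completed is to validate the IDX header against the file size (the magic number 2051 and the 16-byte big-endian header are part of the MNIST IDX format; the path is whatever you configured):

```python
import struct

def check_idx_images(path):
    """Sanity-check an MNIST IDX image file such as train-images-idx3-ubyte.

    Returns (size_ok, count, rows, cols); for the full MNIST training set
    the expected size is 16 + 60000 * 28 * 28 = 47,040,016 bytes.
    """
    with open(path, "rb") as f:
        magic, count, rows, cols = struct.unpack(">IIII", f.read(16))
        f.seek(0, 2)  # jump to end of file to measure its size
        size = f.tell()
    assert magic == 2051, "not an IDX image file (bad magic number)"
    return size == 16 + count * rows * cols, count, rows, cols
```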

C_HLS export issue

When I run 'make ./bin/n2d2_test', I get the following:
'cc' is not recognized as an internal or external command,
operable program or batch file.
make: `bin/n2d2_test' is up to date.

How can I fix this?

Error "run the learning"

Hello,
could you help me solve this error from the "Run the learning" step?

~/N2D2-master/build/bin$ ./n2d2 "mnist24_16c4s2_24c5s2_150_10.ini" -learn 60000 -log 10000
Option -log: number of steps between logs [10000]
Option -learn: number of backprop learning steps [60000]
Loading network configuration file mnist24_16c4s2_24c5s2_150_10.ini
Warning: to use Frame_CUDA models, N2D2 must be compiled with CUDA enabled.
*** Using Frame model instead. ***
Notice: Unused section sp in INI file
Notice: Unused section sp.Transformation in INI file
Notice: Unused section conv1 in INI file
Notice: Unused section conv2 in INI file
Notice: Unused section fc1 in INI file
Notice: Unused section fc1.drop in INI file
Notice: Unused section fc2 in INI file
Notice: Unused section softmax in INI file
Notice: Unused section softmax.Target in INI file
Notice: Unused section common.config in INI file
Time elapsed: 0.000491815 s
Error: Could not open images file: /local/qwdr7777/n2d2_data/mnist/train-images-idx3-ubyte

Spike generator reference

Hello, thanks for the support. I just want to know where to find the reference paper for the Poissonian spike generator that you used in N2D2.
As I understand it, the intensity is used to calculate the delay, and this variable delay decides whether the mean period is closer to the min mean period or to the max mean period. Please correct me if I'm wrong. Thank you
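
The scheme described above can be sketched as follows: the intensity linearly interpolates the mean inter-spike period between the max mean period (intensity 0) and the min mean period (intensity 1), and the individual delays are drawn from an exponential distribution with that mean. This is a generic Poissonian generator written from that description, not necessarily N2D2's exact implementation, and all names are made up.

```python
import random

def spike_times(intensity, t_end, min_mean_period, max_mean_period, rng=random):
    """Poissonian spike times in [0, t_end) for a pixel intensity in [0, 1].

    intensity 1.0 -> mean period = min_mean_period (fast firing)
    intensity 0.0 -> mean period = max_mean_period (slow firing)
    """
    mean_period = max_mean_period - intensity * (max_mean_period - min_mean_period)
    t, times = 0.0, []
    while True:
        # Exponentially distributed inter-spike interval => Poisson process.
        t += rng.expovariate(1.0 / mean_period)
        if t >= t_end:
            return times
        times.append(t)
```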

Existing applications

Hi,

I want to see how N2D2 works in general. You have provided a few application executables. Could you please tell me how I can export them with Vivado HLS? I am very new to this. If possible, please walk me through it from start to end.

Inference on SNN model for event dataset (NMNIST) with weights trained from ANN model on MNIST

Hello, I'm trying an experiment: training an ANN (one FC layer) on MNIST, and then importing the weights into an SNN with the same structure to do inference on N-MNIST. But there are too many options in the INI file that I don't understand, such as whether I should choose the base model of the SNN as Transcode or as Spike_Analog. Moreover, N-MNIST has two channels, so the model is not exactly the same.

I post the two INI files here; could you help me check them?

The training INI file, which trains the ANN with backpropagation on MNIST:

; Command:
; ./n2d2.sh "$N2D2_MODELS/LeNet.ini" -learn 6000000 -log 100000

DefaultModel=Frame_CUDA

; Database
[database]
Type=MNIST_IDX_Database
Validation=0.2 ; Use 20% of the dataset for validation

; Environment
[sp]
SizeX=32
SizeY=32
BatchSize=128

[sp.Transformation_1]
Type=ChannelExtractionTransformation
CSChannel=Gray

[sp.Transformation_2]
Type=RescaleTransformation
Width=[sp]SizeX
Height=[sp]SizeY

; Output layer (fully connected)
[fc1]
Input=sp
Type=Fc
NbOutputs=500
ActivationFunction=Rectifier
WeightsFiller=HeFiller
ConfigSection=common.config

[fc2]
Input=fc1
Type=Fc
NbOutputs=10
ActivationFunction=Linear
ConfigSection=common.config

[softmax]
Input=fc2
Type=Softmax
NbOutputs=[fc2]NbOutputs
WithLoss=1

[softmax.Target]

; Common config for static model
[common.config]
NoBias=0
WeightsSolver.LearningRate=0.05
WeightsSolver.Momentum=0.9
WeightsSolver.Decay=0.0005
Solvers.LearningRatePolicy=StepDecay
Solvers.LearningRateStepSize=[sp]_EpochSize
Solvers.LearningRateDecay=0.993
Solvers.Clamping=0.0:1.0

The test INI file, which imports the ANN weights into the SNN:

DefaultModel=Transcode_CUDA

; Database
[database]
Type=N_MNIST_Database

; Environment
[env]
SizeX=32
SizeY=32
BatchSize=1
NbChannels=2
ConfigSection=env.config

[env.config]
ReadAerData=1

[env.Transformation_1]
Type=RescaleTransformation
Width=[env]SizeX
Height=[env]SizeY

; Output layer (fully connected)
[fc1]
Input=env
Type=Fc
NbOutputs=500
ConfigSection=common.config
ConfigSection(Spike_Analog)=common_Spike_Analog.config
ConfigSection(Spike_RRAM)=common_Spike_RRAM.config

[fc2]
Input=fc1
Type=Fc
NbOutputs=10
ConfigSection=common.config,fc2.config
ConfigSection(Spike_Analog)=common_Spike_Analog.config
ConfigSection(Spike_RRAM)=common_Spike_RRAM.config

[fc2.Target]

[fc2.config]
; Spike-based computing
TerminateDelta=4
BipolarThreshold=1

; Common config for static model
[common.config]
NoBias=0
WeightsSolver.LearningRate=0.005
Threshold=1.0
Refractory=10,000,000

; Common config for Spike_Analog model
[common_Spike_Analog.config]
EnableStdp=0
Threshold=50.0
StdpLtp=50,000,000
InhibitRefractory=1,000
Leak=100,000,000
BipolarIntegration=0
RefractoryIntegration=0
WeightsMinMean="1.0e-4; 10%"
WeightsMaxMean="1.0; 10%"
WeightsRelInit="0.67; 20%"
WeightIncrement="0.0017; 1%"
WeightIncrementDamping=-3
WeightDecrement="0.0007; 1%"
WeightDecrementDamping=0

; Common config for Spike_RRAM model
[common_Spike_RRAM.config]
EnableStdp=0
Threshold=1.52e-2
LtdProba=0.0007
LtpProba=0.0020
StdpLtp=50,000,000
InhibitRefractory=1,000
Leak=100,000,000
BipolarWeights=0
BipolarIntegration=0
RefractoryIntegration=0
WeightsMinMean="1.19e-5; 1.67e-5"
WeightsMaxMean="3.04e-4; 3.86e-5"
WeightsRelInit="2.0e-4; 20%"
WeightsMaxVarSlope=-1.7575
WeightsMaxVarOrigin=-17.1691
WeightsMinVarSlope=-0.0791
WeightsMinVarOrigin=-1.7081
WeightsSetProba="0.843; 0.197"
WeightsResetProba="0.992; 0.00748"
SynapticRedundancy=64

Thanks

Can not learn with CUDA

Hello, when I run cmake I have to remove everything in the exec folder except n2d2.cpp, because otherwise it leads to this error:
"add_executable cannot create target "n2d2" because another target with the same name already exists."

Then I ran make as usual. But when I test the model I run into the CUDA error below, and if I use the Frame model only, the network does not learn. Do you know what I did wrong? Thanks

sudo ./build/bin/n2d2 models/mnist24_16c4s2_24c5s2_150_10.ini -learn 40000000 -log 100000
Option -log: number of steps between logs [100000]
Option -learn: number of backprop learning steps [40000000]
Loading network configuration file models/mnist24_16c4s2_24c5s2_150_10.ini
Layer: conv1 [Conv(Frame_CUDA)]
Notice: Could not open configuration file: conv1.cfg

Shared synapses: 256

Virtual synapses: 30976

Inputs dims: 24 24 1

Outputs dims: 11 11 16

Warning: No monitor could be added to Cell: conv1
Layer: conv2 [Conv(Frame_CUDA)]
Notice: Could not open configuration file: conv2.cfg

Shared synapses: 2250

Virtual synapses: 36000

Inputs dims: 11 11 16

Outputs dims: 4 4 24

Warning: No monitor could be added to Cell: conv2
Layer: fc1 [Fc(Frame_CUDA)]
Notice: Could not open configuration file: fc1.cfg

Synapses: 57600

Inputs dims: 4 4 24

Outputs dims: 1 1 150

Warning: No monitor could be added to Cell: fc1
Layer: fc1.drop [Dropout(Frame_CUDA)]
Notice: Could not open configuration file: fc1.drop.cfg

Inputs dims: 1 1 150

Outputs dims: 1 1 150

Warning: No monitor could be added to Cell: fc1.drop
Layer: fc2 [Fc(Frame_CUDA)]
Notice: Could not open configuration file: fc2.cfg

Synapses: 1500

Inputs dims: 1 1 150

Outputs dims: 1 1 10

Warning: No monitor could be added to Cell: fc2
Layer: softmax [Softmax(Frame_CUDA)]
Notice: Could not open configuration file: softmax.cfg

Inputs dims: 1 1 10

Outputs dims: 1 1 10

Target: softmax (target value: 1 / default value: 0 / top-n value: 1)
Warning: No monitor could be added to Cell: softmax
Total number of neurons: 2640
Total number of nodes: 2640
Total number of synapses: 61606
Total number of virtual synapses: 126076
Total number of connections: 126076
Notice: Unused section softmax.Target in INI file
CUDNN failure: CUDNN_STATUS_NOT_INITIALIZED (1) in /home/kevin/IMRA_le/3_Program/SNN/N2D2/include/CudaContext.hpp:58
Time elapsed: 1.79893 s
Error: CUDNN failure: CUDNN_STATUS_NOT_INITIALIZED (1) in /home/kevin/IMRA_le/3_Program/SNN/N2D2/include/CudaContext.hpp:58

Is OpenCV **2.0.0** really required?

I just cloned the master branch and tried to install N2D2 following the manual. My first try failed due to (I think) this line:

FIND_PACKAGE(OpenCV 2.0.0 REQUIRED)

Indeed, I am using Fedora 26, which ships OpenCV 3.2.0. I changed the aforementioned line to:

FIND_PACKAGE(OpenCV)  # NB: commenting the whole line was not working (CMake newbie...).

and the install process went fine. The integrated tests did not fail, and I was able to test the learning on MNIST as described in the manual.

So if there was no peculiar reason to pin the version of OpenCV to 2.0.0, you might want to get rid of this pinning.

Learning issue

Hello, how can I fix the following error?

prachi@prachi-V530-15ICB:~/N2D2$ ./build/bin/n2d2 ./models/mnist24_16c4s2_24c5s2_150_10.ini -learn 600000 -log 10000
Option -log: number of steps between logs [10000]
Option -learn: number of backprop learning steps [600000]
Loading network configuration file ./models/mnist24_16c4s2_24c5s2_150_10.ini
Warning: to use Frame_CUDA models, N2D2 must be compiled with CUDA enabled.
*** Using Frame model instead. ***
Notice: Unused section sp in INI file
Notice: Unused section sp.Transformation in INI file
Notice: Unused section conv1 in INI file
Notice: Unused section conv2 in INI file
Notice: Unused section fc1 in INI file
Notice: Unused section fc1.drop in INI file
Notice: Unused section fc2 in INI file
Notice: Unused section softmax in INI file
Notice: Unused section softmax.Target in INI file
Notice: Unused section common.config in INI file
Time elapsed: 0.000383066 s
Error: Could not open images file: /local/prachi/n2d2_data/mnist/train-images-idx3-ubyte

Test the CPP_cuDNN exported network error

Hello:

I want to test the exported network, but I run into a problem. How can I solve it?

qwdr7777@ubuntu:~/N2D2-master/build/bin/export_CPP_cuDNN_float0$ sudo make ./n2d2_cudnn_test
[sudo] password for qwdr7777:
g++ -I./include/ -I./include/dnn/ -I./include/utils/ -std=c++0x n2d2_cudnn_test.cpp -o n2d2_cudnn_test
In file included from ./include/cpp_utils.hpp:22:0,
from ./include/n2d2_cudnn_inference.hpp:25,
from n2d2_cudnn_test.cpp:32:
./include/typedefs.h:88:21: fatal error: ap_cint.h: No such file or directory
#include <ap_cint.h>
^
compilation terminated.
make: *** [n2d2_cudnn_test] Error 1

C export in integers doesn't work

C export in integers has stopped working (tested with -nbbits 8, -nbbits 16 and -nbbits 32):

$ n2d2 $N2D2_ROOT/models/LeNet.ini -export C
[...]
Exporting Test dataset to "export_C_int8/stimuli"....................
Generating C export to "export_C_int8":
-> Generating cell conv1
Time elapsed: 1.79093 s
Error: Can't export a non-integral double as integral parameter.

Step to reproduce:

n2d2 N2D2/models/LeNet.ini -learn 6000 -log 3000
n2d2 N2D2/models/LeNet.ini -export C
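
For context, exporting with -nbbits N requires every parameter to map to an exact integer after scaling; the error message suggests a scaled weight is still a non-integral double at export time (this is a presumption from the message, not a confirmed diagnosis). A generic quantization sketch, not the exporter's actual code, assuming weights already clamped to [-1, 1]:

```python
def quantize(weights, nbbits=8):
    """Map floating-point weights in [-1, 1] to signed nbbits-wide integers.

    With nbbits=8 the target range is [-127, 127]; each exported parameter
    must be an exact integer at this point.
    """
    q_max = 2 ** (nbbits - 1) - 1
    return [int(round(w * q_max)) for w in weights]
```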

Unable to get graphical view (error with gnuplot commands)

Running the command:
./n2d2.sh "../../../tuto/tutorial.ini" -learn 1 -log 1

I get a lot of errors with gnuplot; here is an example:

gnuplot> if (!exists("multiplot")) set term png size 800,600 enhanced large
                                                                      ^
         line 0: unrecognized terminal option

Linux distribution: Red Hat Enterprise Linux Server release 6.7 (Santiago)
Gnuplot version: gnuplot 4.6 patchlevel 5

"make error" on Compilation

Hello,
I have a problem when I run "make" in step 3 of the compilation. How can I solve this?

esl@ubuntu:~/N2D2-master/build$ cmake .. && make
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found Gnuplot: /usr/bin/gnuplot (found version "5.0.3")
-- No PugiXML found
-- MongoDB not found.
-- Found CUDA: /usr/local/cuda-9.0 (found version "9.0")
-- Found CuDNN: /usr/include
-- CuDNN library status:
-- version: 7.0.5
-- include path: /usr/include
-- libraries: /usr/lib/x86_64-linux-gnu/libcudnn.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/esl/N2D2-master/build
[ 0%] Building NVCC (Device) object CMakeFiles/n2d2_lib_cuda.dir/src/containers/n2d2_lib_cuda_generated_CudaTensor.cu.o
[ 0%] Building NVCC (Device) object CMakeFiles/n2d2_lib_cuda.dir/src/n2d2_lib_cuda_generated_CEnvironment_CUDA_kernels.cu.o
[ 0%] Building NVCC (Device) object CMakeFiles/n2d2_lib_cuda.dir/src/n2d2_lib_cuda_generated_CMonitor_CUDA_kernels.cu.o
[ 0%] Building NVCC (Device) object CMakeFiles/n2d2_lib_cuda.dir/src/Solver/n2d2_lib_cuda_generated_SGDSolver_CUDA_Kernels.cu.o
/usr/local/cuda-9.0/include/thrust/detail/tuple.inl(276): warning: calling a constexpr host function("half_float::half::half") from a host device function("cons") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.
detected during:
instantiation of "thrust::detail::cons<HT, TT>::cons() [with HT=half_float::half, TT=thrust::detail::cons<signed long, thrust::null_type>]"
/usr/local/cuda-9.0/include/thrust/tuple.h(213): here
instantiation of "thrust::tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::tuple() [with T0=half_float::half, T1=signed long, T2=thrust::null_type, T3=thrust::null_type, T4=thrust::null_type, T5=thrust::null_type, T6=thrust::null_type, T7=thrust::null_type, T8=thrust::null_type, T9=thrust::null_type]"
(276): here
instantiation of "thrust::detail::cons<HT, TT>::cons() [with HT=thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, TT=thrust::detail::cons<thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::null_type>]"
/usr/local/cuda-9.0/include/thrust/tuple.h(213): here
instantiation of "thrust::tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::tuple() [with T0=thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, T1=thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, T2=thrust::null_type, T3=thrust::null_type, T4=thrust::null_type, T5=thrust::null_type, T6=thrust::null_type, T7=thrust::null_type, T8=thrust::null_type, T9=thrust::null_type]"
/usr/local/cuda-9.0/include/thrust/system/cuda/detail/reduce.h(393): here
instantiation of "T thrust::cuda_cub::__reduce::ReduceAgent<InputIt, OutputIt, T, Size, ReductionOp>::impl::consume_range_impl(Size, Size, CAN_VECTORIZE) [with InputIt=thrust::cuda_cub::transform_input_iterator_t<thrust::tuple<thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::zip_iterator<thrust::tuple<thrust::device_ptr<half_float::half>, thrust::cuda_cub::counting_iterator_t, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>>, thrust::cuda_cub::__extrema::arg_minmax_f<half_float::half, signed long, thrust::less<half_float::half>>::duplicate_tuple>, OutputIt=thrust::tuple<thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type> *, T=thrust::tuple<thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::tuple<half_float::half, signed long, 
thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, Size=signed long, ReductionOp=thrust::cuda_cub::__extrema::arg_minmax_f<half_float::half, signed long, thrust::less<half_float::half>>, CAN_VECTORIZE=thrust::cuda_cub::__reduce::is_true]"
/usr/local/cuda-9.0/include/thrust/system/cuda/detail/reduce.h(454): here
[ 6 instantiation contexts not shown ]
instantiation of "T thrust::cuda_cub::__extrema::extrema(thrust::cuda_cub::execution_policy &, InputIt, Size, BinaryOp, T *) [with Derived=thrust::cuda_cub::tag, InputIt=thrust::cuda_cub::transform_input_iterator_t<thrust::tuple<thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::zip_iterator<thrust::tuple<thrust::device_ptr<half_float::half>, thrust::cuda_cub::counting_iterator_t, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>>, thrust::cuda_cub::__extrema::arg_minmax_f<half_float::half, signed long, thrust::less<half_float::half>>::duplicate_tuple>, Size=signed long, BinaryOp=thrust::cuda_cub::__extrema::arg_minmax_f<half_float::half, signed long, thrust::less<half_float::half>>, T=thrust::tuple<thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::tuple<half_float::half, signed long, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>]"
/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(548): here
instantiation of "thrust::pair<ItemsIt, ItemsIt> thrust::cuda_cub::minmax_element(thrust::cuda_cub::execution_policy &, ItemsIt, ItemsIt, BinaryPred) [with Derived=thrust::cuda_cub::tag, ItemsIt=thrust::device_ptr<half_float::half>, BinaryPred=thrust::less<half_float::half>]"
/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(572): here
instantiation of "thrust::pair<ItemsIt, ItemsIt> thrust::cuda_cub::minmax_element(thrust::cuda_cub::execution_policy &, ItemsIt, ItemsIt) [with Derived=thrust::cuda_cub::tag, ItemsIt=thrust::device_ptr<half_float::half>]"
/usr/local/cuda-9.0/include/thrust/detail/extrema.inl(75): here
instantiation of "thrust::pair<ForwardIterator, ForwardIterator> thrust::minmax_element(const thrust::detail::execution_policy_base &, ForwardIterator, ForwardIterator) [with DerivedPolicy=thrust::cuda_cub::tag, ForwardIterator=thrust::device_ptr<half_float::half>]"
/usr/local/cuda-9.0/include/thrust/detail/extrema.inl(153): here
instantiation of "thrust::pair<ForwardIterator, ForwardIterator> thrust::minmax_element(ForwardIterator, ForwardIterator) [with ForwardIterator=thrust::device_ptr<half_float::half>]"
/home/esl/N2D2-master/src/Solver/SGDSolver_CUDA_Kernels.cu(394): here

/usr/local/cuda-9.0/include/thrust/functional.h(611): warning: calling a host function("half_float::detail::operator << ::half_float::half, ::half_float::half> ") from a host device function("thrust::less< ::half_float::half> ::operator () const") is not allowed

/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(61): error: calling a host function("half_float::detail::operator << ::half_float::half, ::half_float::half> ") from a device function("thrust::cuda_cub::__extrema::arg_min_f< ::half_float::half, long, ::thrust::less< ::half_float::half> > ::operator ()") is not allowed

/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(61): error: identifier "half_float::detail::operator << ::half_float::half, ::half_float::half> " is undefined in device code

/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(63): error: calling a host function("half_float::detail::operator << ::half_float::half, ::half_float::half> ") from a device function("thrust::cuda_cub::__extrema::arg_min_f< ::half_float::half, long, ::thrust::less< ::half_float::half> > ::operator ()") is not allowed

/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(63): error: identifier "half_float::detail::operator << ::half_float::half, ::half_float::half> " is undefined in device code

/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(92): error: calling a host function("half_float::detail::operator << ::half_float::half, ::half_float::half> ") from a device function("thrust::cuda_cub::__extrema::arg_max_f< ::half_float::half, long, ::thrust::less< ::half_float::half> > ::operator ()") is not allowed

/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(92): error: identifier "half_float::detail::operator << ::half_float::half, ::half_float::half> " is undefined in device code

/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(94): error: calling a host function("half_float::detail::operator << ::half_float::half, ::half_float::half> ") from a device function("thrust::cuda_cub::__extrema::arg_max_f< ::half_float::half, long, ::thrust::less< ::half_float::half> > ::operator ()") is not allowed

/usr/local/cuda-9.0/include/thrust/system/cuda/detail/extrema.h(94): error: identifier "half_float::detail::operator << ::half_float::half, ::half_float::half> " is undefined in device code

8 errors detected in the compilation of "/tmp/tmpxft_00000d89_00000000-6_SGDSolver_CUDA_Kernels.cpp1.ii".
CMake Error at n2d2_lib_cuda_generated_SGDSolver_CUDA_Kernels.cu.o.cmake:262 (message):
Error generating file
/home/esl/N2D2-master/build/CMakeFiles/n2d2_lib_cuda.dir/src/Solver/./n2d2_lib_cuda_generated_SGDSolver_CUDA_Kernels.cu.o

CMakeFiles/n2d2_lib_cuda.dir/build.make:77: recipe for target 'CMakeFiles/n2d2_lib_cuda.dir/src/Solver/n2d2_lib_cuda_generated_SGDSolver_CUDA_Kernels.cu.o' failed
make[2]: *** [CMakeFiles/n2d2_lib_cuda.dir/src/Solver/n2d2_lib_cuda_generated_SGDSolver_CUDA_Kernels.cu.o] Error 1
CMakeFiles/Makefile2:1920: recipe for target 'CMakeFiles/n2d2_lib_cuda.dir/all' failed
make[1]: *** [CMakeFiles/n2d2_lib_cuda.dir/all] Error 2
Makefile:94: recipe for target 'all' failed
make: *** [all] Error 2

I use Ubuntu 16.04 and installed cuda-core-9-0, cuda-cudart-dev-9-0, cuda-cublas-dev-9-0, cuda-curand-dev-9-0 and libcudnn7-dev.
The CUDA library path is
export PATH="/usr/local/cuda-9.0/bin${PATH:+:${PATH}}"
export LD_LIBRARY_PATH="/usr/local/cuda-9.0/lib64$LD_LIBRARY_PATH"

[Windows] pgnuplot.exe not recognized

On Windows, when I run the learning on mnist:
./n2d2 "mnist24_16c4s2_24c5s2_150_10.ini" -learn 600000 -log 10000
I get many non-fatal errors in console output:
'pgnuplot.exe' is not recognized as an internal or external command

Indeed the folder gnuplot/bin contains wgnuplot.exe and gnuplot.exe but no pgnuplot.exe.

Can you fix this in the file Gnuplot.cpp?

#ifdef WIN32
        mCmdPipe = _popen("pgnuplot.exe", "w");
#else
        mCmdPipe = popen("gnuplot", "w");
#endif
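
Until this is patched in Gnuplot.cpp, the idea of the fix (try the executable names that actually ship with the Windows gnuplot package before giving up) can be sketched in a few lines. This is a Python sketch of the lookup logic only; the candidate names come from the folder listing above.

```python
import shutil

def find_gnuplot(candidates=("pgnuplot.exe", "gnuplot.exe", "wgnuplot.exe", "gnuplot")):
    """Return the first gnuplot executable found on the PATH, or None."""
    for name in candidates:
        path = shutil.which(name)
        if path is not None:
            return path
    return None
```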

How to use a dataset not predefined in N2D2?

Hello,
For our project, I want to import my own dataset into N2D2.
It could be either an image dataset or a spike dataset from an event camera (files in .raw, .aedat, .dat, etc.).
Do you have a tutorial for this?
Thanks

Downloading MNIST dataset

Hello,
Can someone please help me in solving the following issue?

prachi@prachi-V530-15ICB:~/N2D2/tools$ python install_stimuli_gui.py
Installation path of the stimuli [default is /local/prachi/n2d2_data]:
http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
mnist
Traceback (most recent call last):
File "install_stimuli_gui.py", line 127, in load_database
os.makedirs(target)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/local'
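
The script is trying to create /local, which needs root. A quick workaround is to enter a path you can write to at the prompt (e.g. somewhere under your home directory). The fallback logic can be sketched like this; the N2D2_DATA environment variable is the one the manual mentions for relocating the data path, and if that assumption is wrong for your version, the interactive prompt works too.

```python
import os

def writable_data_dir(default="/local/{user}/n2d2_data"):
    """Pick a stimuli directory the current user can actually create."""
    path = os.environ.get("N2D2_DATA",
                          default.format(user=os.environ.get("USER", "user")))
    try:
        os.makedirs(path, exist_ok=True)
        return path
    except OSError:
        # Permission denied on /local: fall back to a directory under $HOME.
        fallback = os.path.join(os.path.expanduser("~"), "n2d2_data")
        os.makedirs(fallback, exist_ok=True)
        return fallback
```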

Couldn't open connection to gnuplot

Hello everyone, I'm currently facing an issue that I don't really understand.

Issue

I'm trying to train a ResNet with N2D2 and I have the following error :
Couldn't open connection to gnuplot (is it in the PATH?)

Debug

Then I tried to understand why this error is raised:

libstdc++.so.6!__cxxabiv1::__cxa_throw(void * obj, std::type_info * tinfo, void (*)(void *) dest) (\usr\src\debug\gcc-4.8.5-20150702\libstdc++-v3\libsupc++\eh_throw.cc:62)
N2D2::Gnuplot::Gnuplot(N2D2::Gnuplot * const this, const std::string & fileName) (\home\thales\N2D2\src\utils\Gnuplot.cpp:44)
N2D2::PoolCell::writeMap(const N2D2::PoolCell * const this, const std::string & fileName) (\home\thales\N2D2\src\Cell\PoolCell.cpp:95)
N2D2::PoolCellGenerator::generate(N2D2::Network & network, const N2D2::DeepNet & deepNet, N2D2::StimuliProvider & sp, const std::vector<std::shared_ptr<N2D2::Cell>, std::allocator<std::shared_ptr<N2D2::Cell> > > & parents, N2D2::IniParser & iniConfig, const std::string & section) (\home\thales\N2D2\src\Generator\PoolCellGenerator.cpp:211)
std::_Function_handler<std::shared_ptr<N2D2::Cell> (N2D2::Network&, N2D2::DeepNet const&, N2D2::StimuliProvider&, std::vector<std::shared_ptr<N2D2::Cell>, std::allocator<std::shared_ptr<N2D2::Cell> > > const&, N2D2::IniParser&, std::string const&), std::shared_ptr<N2D2::PoolCell> (*)(N2D2::Network&, N2D2::DeepNet const&, N2D2::StimuliProvider&, std::vector<std::shared_ptr<N2D2::Cell>, std::allocator<std::shared_ptr<N2D2::Cell> > > const&, N2D2::IniParser&, std::string const&)>::_M_invoke(std::_Any_data const&, N2D2::Network&, N2D2::DeepNet const&, N2D2::StimuliProvider&, std::vector<std::shared_ptr<N2D2::Cell>, std::allocator<std::shared_ptr<N2D2::Cell> > > const&, N2D2::IniParser&, std::string const&)(const std::_Any_data & __functor,  __args#0,  __args#1,  __args#2,  __args#3,  __args#4,  __args#5) (\usr\include\c++\4.8.2\functional:2057)
std::function<std::shared_ptr<N2D2::Cell> (N2D2::Network&, N2D2::DeepNet const&, N2D2::StimuliProvider&, std::vector<std::shared_ptr<N2D2::Cell>, std::allocator<std::shared_ptr<N2D2::Cell> > > const&, N2D2::IniParser&, std::string const&)>::operator()(N2D2::Network&, N2D2::DeepNet const&, N2D2::StimuliProvider&, std::vector<std::shared_ptr<N2D2::Cell>, std::allocator<std::shared_ptr<N2D2::Cell> > > const&, N2D2::IniParser&, std::string const&) const(const std::function<std::shared_ptr<N2D2::Cell>(N2D2::Network&, const N2D2::DeepNet&, N2D2::StimuliProvider&, const std::vector<std::shared_ptr<N2D2::Cell>, std::allocator<std::shared_ptr<N2D2::Cell> > >&, N2D2::IniParser&, const std::basic_string<char, std::char_traits<char>, std::allocator<char> >&)> * const this,  __args#0,  __args#1,  __args#2,  __args#3,  __args#4,  __args#5) (\usr\include\c++\4.8.2\functional:2471)
N2D2::CellGenerator::generate(N2D2::Network & network, const N2D2::DeepNet & deepNet, N2D2::StimuliProvider & sp, const std::vector<std::shared_ptr<N2D2::Cell>, std::allocator<std::shared_ptr<N2D2::Cell> > > & parents, N2D2::IniParser & iniConfig, const std::string & section) (\home\thales\N2D2\src\Generator\CellGenerator.cpp:45)
N2D2::DeepNetGenerator::generate(N2D2::Network & network, const std::string & fileName) (\home\thales\N2D2\src\Generator\DeepNetGenerator.cpp:249)
main(int argc, char ** argv) (\home\thales\N2D2\exec\n2d2.cpp:1739)

It seems that the construction of an N2D2::GnuPlot object triggers the error, because the popen function returns a NULL value.

Therefore, I printed strerror(errno) at the failure point and got the following message: Cannot allocate memory

After some research, it seems that the fork call made inside popen is the only one that can produce this kind of error, which makes me think it may be a configuration issue.
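For reference, a defensive wrapper around popen() makes this failure mode explicit instead of letting a NULL stream propagate. This is a sketch, not N2D2 code: checkedPopen is a hypothetical helper name, and the only assumption is POSIX popen() semantics (on failure of the underlying fork()/pipe(), popen() returns NULL with errno set, ENOMEM in the case described above).

```cpp
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <stdexcept>
#include <string>

// Hypothetical helper (not part of N2D2): run a command through popen()
// and fail with a readable message instead of using a NULL stream.
std::string checkedPopen(const std::string& cmd) {
    errno = 0;
    FILE* pipe = ::popen(cmd.c_str(), "r");
    if (pipe == NULL) {
        // errno was set by the failing fork()/pipe() inside popen()
        throw std::runtime_error("popen(\"" + cmd + "\") failed: "
                                 + std::strerror(errno));
    }
    std::string output;
    char buf[256];
    while (std::fgets(buf, sizeof(buf), pipe) != NULL)
        output += buf;
    ::pclose(pipe);
    return output;
}
```

If the kernel's overcommit accounting is the culprit (check with `sysctl vm.overcommit_memory`), adding swap to the VM or setting `vm.overcommit_memory=1` typically lets fork() succeed for large processes.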

Configuration

I'm currently working in a CentOS 7 VM with the following configuration:

  • RAM: 8192 MB
  • Processors: 4 × Intel(R) Core(TM) i7-8650 CPU @ 1.9 GHz
  • Hard drive: 100 GB

Conclusions

This error looks like a configuration problem, because training the mnist24_16c4s2_24c5s2_150_10 model works perfectly fine.

With the ResNet models (18 and 50), errors are thrown in the Utils::exec function, where popen also returns NULL, which reinforces my belief that this is a configuration problem.

Here is an archive with the ini file I used : ResNet-mini.zip

Did you face this issue before ?

"Run the learning" error

Hello,
could you help me resolve this error from the "Run the learning" step?

~/Desktop/N2D2-master/bin/exec$ ./n2d2 "mnist24_16c4s2_24c5s2_150_10.ini" -learn 600000 -log 10000
Option -log: number of steps between logs [10000]
Option -learn: number of backprop learning steps [600000]
Loading network configuration file mnist24_16c4s2_24c5s2_150_10.ini
Notice: Unused section in INI file
Time elapsed: 0.000160913 s
Error: Could not open INI file: mnist24_16c4s2_24c5s2_150_10.in

Hang when launching the "Run the learning" command

Hello,
when I run this command:
./n2d2 "mnist24_16c4s2_24c5s2_150_10.ini" -learn 600000 -log 10000
a CUDA error message is displayed, as follows:
Option -log: number of steps between logs [10000]
Option -learn: number of backprop learning steps [600000]
Cuda failure: CUDA driver version is insufficient for CUDA runtime version (35) in /home/ahlem/Téléchargements/N2D2-master/include/CudaContext.hpp:34
Error: Cuda failure: CUDA driver version is insufficient for CUDA runtime version (35) in /home/ahlem/Téléchargements/N2D2-master/include/CudaContext.hpp:34

Extract the output of a DNN as a binary file, when fed with a single frame

Hello,

Given a working DNN described in an .ini file, with training already done, I would like to extract the output (the Target) of the network for a particular frame.

The -test-idx option allows testing the neural network on specific frames, which creates a frames/ folder containing images with values in {-1, 0, 1}. What I would like to get instead is the actual output that says, e.g., a 57% chance that it's a car, an 18% chance that it's a horse, etc.: binary content with one float value per possible label.

Is there a possibility to do that within the current framework?

I also tried the -export functions, but unfortunately no Makefile is generated to compile the exported files; it seems that either something is missing or I am missing something. I tried the CPP export, since I would like to run it on my CPU.

Thanks for your answers
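If you end up dumping the raw output values yourself (for instance by modifying the Target logging code), a minimal reader for the one-float-per-label layout described above could look like the sketch below. Both the file name and the plain packed-float layout are assumptions for illustration, not a format N2D2 is documented to emit.

```cpp
#include <cstddef>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical reader: interpret a raw binary dump as one float per
// output label (assumed layout, not a documented N2D2 format).
std::vector<float> readScores(const std::string& path, std::size_t nbLabels) {
    std::ifstream file(path, std::ios::binary);
    if (!file)
        throw std::runtime_error("cannot open " + path);

    std::vector<float> scores(nbLabels);
    const std::streamsize nbBytes
        = static_cast<std::streamsize>(nbLabels * sizeof(float));
    file.read(reinterpret_cast<char*>(scores.data()), nbBytes);
    if (file.gcount() != nbBytes)
        throw std::runtime_error("truncated score file: " + path);
    return scores;
}
```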

N2D2 versions conflict

There is an anomaly in N2D2: I obtain different results with two versions of N2D2.

Current version (21/03/2018): accuracy = 77.92 %
Previous version (16/03/2017): accuracy = 95.37 %

This problem only occurs with spiking networks.

I use the command "n2d2 mnist28_300_10_Spike.ini -test" with both versions.
