Absolutely. I submitted a test to swarm2 yesterday to check whether it's working or not. I should be making a pull request tonight, most likely.
from bindsnet.
Awesome, let me know whether you need help with this.
Hey @hqkhan,
Any movement on this? Let me know if you need any help.
Hey. I'm still stuck on the bug I mentioned previously, where the spiking activity suddenly drops to nothing. I tried adding some counter-measures, but for some reason it still ends up crashing.
Also, there's this weird thing where the n-gram accuracy is currently always 100%. I checked predictions vs. truth, and they really are exactly the same, so I'm confused.
I haven't been able to work on this for a couple of days, though.
Okay, I'll try my best to look into it.
I think the n-gram accuracy is 100% because you're testing on the n-gram training data; i.e., you're updating the n-grams with the same data you're classifying. I could be wrong.
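Here's a toy illustration of that leakage (made-up data, not the actual BindsNET code): if you classify with the same score table you just updated, each distinct firing pattern has only ever voted for its own label, so accuracy on that data is trivially 100%.

```python
import numpy as np
from collections import defaultdict

# Two toy examples: (firing order, true label). We "train" the n-gram
# scores on them, then "test" on the exact same examples.
examples = [([0, 1], 0), ([2, 3], 1)]

scores = defaultdict(lambda: np.zeros(2))  # tuple of neuron IDs -> label counts
for order, label in examples:
    scores[tuple(order)][label] += 1

# Each pattern's count vector was built only from its own label,
# so prediction on the training data cannot miss.
preds = [int(np.argmax(scores[tuple(order)])) for order, _ in examples]
accuracy = sum(p == y for p, (_, y) in zip(preds, examples)) / len(examples)
print(accuracy)  # 1.0
```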
I rearranged the `ngram_scores` update to come after the `ngram_pred = ...` call, which fixed the always-100% accuracy, as expected. However, I'm running into a problem with the n-gram scores calculation: I'm looking at `update_ngram_scores`, and it seems like you're keeping around a dictionary mapping 2-tuples of time points to a vector of label counts.
I don't think this is what we want, right? We want to map tuples of neuron IDs to vectors of counts, I believe. I'm trying to rewrite the function to do this.
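Roughly what I have in mind (just a sketch; the name, signature, and window sizes are my own guesses, not the actual `update_ngram_scores`):

```python
import numpy as np

def update_ngram_scores_sketch(firing_order, label, n, n_labels, ngram_scores):
    """Count tuples of NEURON IDs (not time points) toward the example's label.

    firing_order: list of neuron indices in the order they spiked.
    ngram_scores: dict mapping tuples of neuron IDs -> label-count vectors.
    """
    # Slide windows of every size from 2 up to n across the firing order,
    # as the thread above describes.
    for size in range(2, n + 1):
        for start in range(len(firing_order) - size + 1):
            key = tuple(firing_order[start:start + size])
            if key not in ngram_scores:
                ngram_scores[key] = np.zeros(n_labels)
            ngram_scores[key][label] += 1
    return ngram_scores
```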
These two lines in `ngram()` cause the crash you mentioned earlier. Just remove them, and I think it won't affect the logic.
By the way, this method doesn't exactly implement n-gram; it implements a 2-gram with variable spacing between the two considered neuron indices. For example, if you pass `n=3`, you'll record all pairs of spikes separated by a single intermediate spike; if you pass `n=4`, you'll record all pairs of spikes separated by two intermediate spikes, and so on.
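Concretely, here's a toy version of what I believe the current loop reduces to (hypothetical sketch, not the real code):

```python
def buggy_pairs(firing_order, n):
    """What the method effectively records: for each length-n window over the
    firing order, only the FIRST and LAST neuron indices are kept, i.e. a
    2-gram with n - 2 intermediate spikes skipped."""
    return [
        (firing_order[i], firing_order[i + n - 1])
        for i in range(len(firing_order) - n + 1)
    ]

print(buggy_pairs([3, 1, 4, 1, 5], n=3))
# each pair skips one intermediate spike: [(3, 4), (1, 1), (4, 5)]
```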
Does this make sense?
I see. I was hoping to speed up the computation by removing timesteps with no firing at all. I'll try removing it when I'm back.
Yes! I know exactly what you mean, because it was one of the things I mentioned to Hananel. The main complaint I had was with the "ngram" method itself. If `n=5`, then we create windows of sizes n=2, 3, 4, 5 and slide each one across our "firing order" for that timestep of an example. But in the "ngram" method, we ONLY use the largest window size to evaluate. I looked at the lmm-snn repo to check what was implemented there, and although we record all the tuples of sizes up to the maximum n, we evaluate ONLY with the largest n. I think there should be another outer loop going over the different window sizes, the same as in `update_ngram_scores`.
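Something like this, with the extra outer loop over window sizes (a hedged sketch with made-up names, not the actual BindsNET code):

```python
import numpy as np

def ngram_predict_sketch(firing_order, ngram_scores, n, n_labels):
    """Accumulate scores over ALL window sizes 2..n, not just the largest,
    mirroring how the score-update step records tuples of every size.

    ngram_scores: dict mapping tuples of neuron IDs -> label-count vectors.
    """
    score = np.zeros(n_labels)
    for size in range(2, n + 1):          # the missing outer loop
        for start in range(len(firing_order) - size + 1):
            key = tuple(firing_order[start:start + size])
            if key in ngram_scores:
                score += ngram_scores[key]
    return int(np.argmax(score))
```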
Hm, I wasn't involved with the initial development of ngram, so I didn't know that.
No need to fix it; I'm doing it myself (and have already opened a pull request to the BindsNET repo).
We'll have to fix the ngram implementation in the future!
Yes, I agree. Have you tried running it? Does it work?
Yup, it does! I just fixed up the `diehl_and_cook_2015.py` script to use the 2-gram evaluation scheme (2-gram is all that works for now!).
By the way, I accidentally pushed to your branch. Sorry about that...
Awesome! Thanks for that. Just to check: it crashes at 5k training examples, not instantly. I'm assuming you ran it for a while to check?
No worries about that. I'll pull. Thanks for letting me know.
Well, it can crash at any point. It depends on essentially all hyperparameters. I didn't run until 5k to check, because I was pretty certain what was causing the error.
Gotcha. True that it can crash at any point, but with the default set of hyperparameters it was crashing at 5k. No worries though, I'm sure you got it.
Last time I checked, you don't need to run until 5k to reproduce it. Instead, you can train on 500 and then run the test; it should crash around the ~1050th image.
I believe the issue is fixed now.
About 2-gram evaluation: Darpan found that n=2 gave the best score. We need to check whether that's still the case today.
We need to first implement general n-gram. Darpan actually only implemented 2-gram, according to @hqkhan's comments above.
> By the way, this method doesn't exactly implement n-gram; it implements a 2-gram with variable spacing between the two considered neuron indices. For example, if you pass n=3, you'll record all pairs of spikes separated by a single intermediate spike; if you pass n=4, you'll record all pairs of spikes separated by two intermediate spikes, and so on.
> Does this make sense?

No.
Is it the same implementation from the old LM-SNN?
Yes. What I'm trying to say is, the old n-gram implementation is incorrect.
If the above doesn't make sense, try reading it again. @hqkhan saw the same problem.