bindsnet / bindsnet

Simulation of spiking neural networks (SNNs) using PyTorch.

License: GNU Affero General Public License v3.0

Python 99.58% Dockerfile 0.42%
dynamic gpu-computing machine-learning neurons pytorch reinforcement-learning simulation snn spiking-neural-networks stdp synapse

bindsnet's Introduction

A Python package used for simulating spiking neural networks (SNNs) on CPUs or GPUs using PyTorch Tensor functionality.

BindsNET is a spiking neural network simulation library geared towards the development of biologically inspired algorithms for machine learning.

This package is used as part of ongoing research on applying SNNs to machine learning (ML) and reinforcement learning (RL) problems in the Biologically Inspired Neural & Dynamical Systems (BINDS) lab.

Check out the BindsNET examples for a collection of experiments, functions for the analysis of results, plots of experiment outcomes, and more. Documentation for the package can be found here.


Requirements

  • Python >=3.8.10,<3.11

Setting things up

Using Pip

To install the most recent stable release from the GitHub repository, issue

pip install git+https://github.com/BindsNET/bindsnet.git

Or, to build the bindsnet package from source, clone the GitHub repository, change directory to the top level of this project, and issue

pip install .

Or, to install in editable mode (which allows modifying the package without re-installing):

pip install -e .

To install the packages necessary to interface with the OpenAI Gym RL environments library, follow Gym's instructions for installing the packages needed to run its RL environment simulators (on Linux / macOS).

Using Docker

Link to Docker repository.

We also provide a Dockerfile with BindsNET and all of its dependencies pre-installed. Issue

docker build .

at the top-level directory of this project to create a Docker image.

To change the name of the newly built image, issue

docker tag <IMAGE_ID> <NEW_IMAGE_ID>

To run a container and get a bash terminal inside it, issue

docker run -it <NEW_IMAGE_ID> bash

Getting started

To run a near-replication of the SNN from this paper, issue

cd examples/mnist
python eth_mnist.py

There are a number of optional command-line arguments which can be passed in, including --plot (displays useful monitoring figures), --n_neurons [int] (number of excitatory, inhibitory neurons simulated), --mode ['train' | 'test'] (sets network operation to the training or testing phase), and more. Run the script with the --help or -h flag for more information.
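
For example, a training run with plots enabled and a smaller excitatory/inhibitory population might look like this (the flag values here are arbitrary, chosen just to illustrate the syntax):

python eth_mnist.py --plot --n_neurons 100 --mode train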

A number of other examples are available in the examples directory that are meant to showcase BindsNET's functionality. Take a look, and let us know what you think!

Running the tests

Issue the following to run the tests:

python -m pytest test/

Some tests will fail if OpenAI Gym is not installed on your machine.

Background

The simulation of biologically plausible spiking neuron dynamics can be challenging. It is typically done by solving ordinary differential equations (ODEs) which describe said dynamics. PyTorch does not explicitly support the solution of differential equations (as opposed to brian2, for example), but we can convert the ODEs defining the dynamics into difference equations and solve them at regular, short intervals (a dt on the order of 1 millisecond) as an approximation. Of course, under the hood, packages like brian2 are doing the same thing. Doing this in PyTorch is exciting for a few reasons:

  1. We can use the powerful and flexible torch.Tensor object, analogous to the numpy.ndarray, which can be transferred to and from GPU devices.

  2. We can avoid "reinventing the wheel" by repurposing functions from the torch.nn.functional PyTorch submodule in our SNN architectures; e.g., convolution or pooling functions (see the sketch below).
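
To illustrate the second point, here is a minimal sketch of reusing standard PyTorch functional ops on a spike tensor (the shapes and random weights are made up for the example):

import torch
import torch.nn.functional as F

# Binary spikes for one timestep: batch of 1, 1 channel, 28x28 "image".
spikes = (torch.rand(1, 1, 28, 28) < 0.1).float()

# Reuse standard PyTorch ops as SNN building blocks.
pooled = F.max_pool2d(spikes, kernel_size=2)              # spatial pooling of spikes
conved = F.conv2d(spikes, weight=torch.rand(8, 1, 3, 3))  # convolutional synapses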

The concept that the ordering and relative timing of neuron spikes encode information is a central theme in neuroscience. Markram et al. (1997) proposed that synapses between neurons should strengthen or degrade based on this relative timing; prior to that, Donald Hebb proposed the theory of Hebbian learning, often summarized as "Neurons that fire together, wire together." Markram et al.'s extension of the Hebbian theory is known as spike-timing-dependent plasticity (STDP).
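
Since STDP is central to what follows, here is a rough trace-based sketch of a pair-wise STDP update written with tensors (illustrative only, not BindsNET's exact rule; the learning rates and trace decay are arbitrary):

import torch

def stdp_step(w, pre_s, post_s, pre_trace, post_trace,
              nu=(1e-4, 1e-2), decay=0.95):
    # Exponentially decaying traces of recent pre-/post-synaptic spikes.
    pre_trace = decay * pre_trace + pre_s.float()
    post_trace = decay * post_trace + post_s.float()
    # Pre spike following recent post activity: depress those synapses.
    w = w - nu[0] * torch.outer(pre_s.float(), post_trace)
    # Post spike following recent pre activity: potentiate those synapses.
    w = w + nu[1] * torch.outer(pre_trace, post_s.float())
    return w, pre_trace, post_trace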

We are interested in applying SNNs to ML and RL problems. We use STDP to modify weights of synapses connecting pairs or populations of neurons in SNNs. In the context of ML, we want to learn a setting of synapse weights which will generate data-dependent spiking activity in SNNs. This activity will allow us to subsequently perform some ML task of interest; e.g., discriminating or clustering input data. In the context of RL, we may think of the spiking neural network as an RL agent, whose spiking activity may be converted into actions in an environment's action space.

We have provided some simple starter scripts for doing unsupervised learning (learning a fully-connected or convolutional representation via STDP), supervised learning (clamping output neurons to desired spiking behavior depending on data labels), and reinforcement learning (converting observations from the Atari game Space Invaders to input to an SNN, and converting network activity back to actions in the game).

Benchmarking

We simulated a network with a population of n Poisson input neurons, with firing rates (in Hertz) drawn randomly from U(0, 100), connected all-to-all to an equally-sized population of leaky integrate-and-fire (LIF) neurons, with connection weights sampled from N(0, 1). We varied n systematically from 250 to 10,000 in steps of 250, and ran each simulation with every library for 1,000 ms at a time resolution of dt = 1.0. We tested BindsNET (with CPU and GPU computation), BRIAN2, PyNEST (the Python interface to the NEST SLI interface that runs the C++ NEST core simulator), ANNarchy (with CPU and GPU computation), and BRIAN2genn (the BRIAN2 front-end to the GeNN simulator).
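
The BindsNET half of this setup can be sketched roughly as follows (the actual benchmark script is not reproduced here, and argument names and expected tensor shapes, e.g. inputs vs. inpts in run(), vary across BindsNET versions):

import torch
from bindsnet.encoding import poisson
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

n = 250  # swept from 250 to 10,000 in steps of 250
network = Network(dt=1.0)
network.add_layer(Input(n=n), name="X")
network.add_layer(LIFNodes(n=n), name="Y")
network.add_connection(
    Connection(source=network.layers["X"], target=network.layers["Y"],
               w=torch.randn(n, n)),  # weights sampled from N(0, 1)
    source="X", target="Y",
)

rates = 100 * torch.rand(n)               # firing rates drawn from U(0, 100) Hz
spikes = poisson(datum=rates, time=1000)  # 1,000 ms of Poisson input
network.run(inputs={"X": spikes}, time=1000)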

Several packages, including BRIAN and PyNEST, allow the setting of certain global preferences; e.g., the number of CPU threads, the number of OpenMP processes, etc. We chose these settings for our benchmark study in an attempt to maximize each library's speed, but note that BindsNET requires no setting of such options. Our approach, inheriting the computational model of PyTorch, appears to make the best use of the available hardware, and therefore makes it simple for practitioners to get the best performance from their system with the least effort.

[Figure: BindsNET benchmark results]

All simulations were run on Ubuntu 16.04 LTS with an Intel(R) Xeon(R) CPU E5-2687W v3 @ 3.10GHz, 128 GB RAM @ 2133MHz, and two GeForce GTX TITAN X (GM200) GPUs. Python 3.6 was used in all cases. Clock time was recorded for each simulation run.

Citation

If you use BindsNET in your research, please cite the following article:

@ARTICLE{10.3389/fninf.2018.00089,
	AUTHOR={Hazan, Hananel and Saunders, Daniel J. and Khan, Hassaan and Patel, Devdhar and Sanghavi, Darpan T. and Siegelmann, Hava T. and Kozma, Robert},   
	TITLE={BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python},      
	JOURNAL={Frontiers in Neuroinformatics},      
	VOLUME={12},      
	PAGES={89},     
	YEAR={2018}, 
	URL={https://www.frontiersin.org/article/10.3389/fninf.2018.00089},       
	DOI={10.3389/fninf.2018.00089},      
	ISSN={1662-5196},
}

Contributors

License

GNU Affero General Public License v3.0

bindsnet's People

Contributors

amolk, axyzdong, bmosk54, cearlumass, chaterate, coopersigrist, danielgafni, dee0512, dependabot[bot], djsaunde, elanning, farukhs52, glynnchaldecott, hafezgh, hananel-hazan, huizerd, k-chaney, kamue1a, kingjr, mahbodnr, maldil, petermarathas, ruteee, sharath, simoninparis, thonner, tomking, valeriob88, weihaotan, williamyao66


bindsnet's Issues

Bug when wmin or wmax is None

this

# Ensure weights are correctly clamped
assert (
    (self.connections[c].w <= self.connections[c].wmax).all()
    and (self.connections[c].w >= self.connections[c].wmin).all()
)

Please remove this. There is no need for this check, and it adds computation at every single timestep.

Bindsnet as Conda package

It would be nice to have BindsNET as a Conda package in some channel. Currently I have a PyTorch installation via Conda, but pip installation of BindsNET gives:

Could not find a version that satisfies the requirement torch>=1.0.0 (from bindsnet) (from versions: 0.1.2, 0.1.2.post1)
No matching distribution found for torch>=1.0.0 (from bindsnet)

So pip is not seeing packages installed by Conda.

I tried:
conda install --channel "pypi" bindsnet
but it said that the package could not be found.

PyTorch implementation of (homogeneous / inhomogeneous) Poisson encoding

I think that bindsnet.encoding.poisson could easily be converted to use torch.distributions.Poisson. It would be a good way for us to reduce reliance on numpy, and perhaps improve the readability of the code.

Consider the poisson function:

import numpy as np
import torch

def poisson(datum: torch.Tensor, time: int, **kwargs) -> torch.Tensor:
    # language=rst
    """
    Generates Poisson-distributed spike trains based on input intensity. Inputs must be non-negative.

    :param datum: Tensor of shape ``[n_1, ..., n_k]``.
    :param time: Length of Bernoulli spike train per input variable.
    :return: Tensor of shape ``[time, n_1, ..., n_k]`` of Poisson-distributed spikes.
    """
    datum = np.copy(datum)
    shape, size = datum.shape, datum.size
    datum = datum.ravel()

    # Invert inputs (firing rate is the inverse of the inter-arrival time).
    datum[datum != 0] = 1 / datum[datum != 0] * 1000

    # Make spike data from Poisson sampling.
    s_times = np.random.poisson(datum, [time, size])
    s_times = np.cumsum(s_times, axis=0)
    s_times[s_times >= time] = 0

    # Create spike trains from spike times.
    s = np.zeros([time, size])
    for i in range(time):
        s[s_times[i], np.arange(size)] = 1

    s[0, :] = 0
    s = s.reshape([time, *shape])

    return torch.Tensor(s).byte()

There are a few things that are missing:

  1. No concept of a time step. If we assume the data are coded as rates, with Hz units (which perhaps we should, and which is implicitly assumed in the current poisson implementation), then the global time step should have an effect on interspike intervals.
  2. No concept of time-varying input. @dsanghavi worked on a sort of overlapping Poisson encoding function, but we really need something that implements inhomogeneous Poisson processes. This would allow us to naturally encode time-varying input. This might be implemented in a separate function (say, inhomogeneous_poisson), or by means of a boolean argument to poisson which signals that, say, the first dimension of the input is the time dimension. I'm leaning towards the latter.

I'd also like to include this in a PoissonInput(Nodes) object, which would accept a rates parameter specifying the per-neuron firing rate (parametrizing their Poisson spike trains). This would optionally be time-varying. On each step of a PoissonInput instance, it would generate its output using the poisson function and its rates attribute.
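
A minimal sketch of what the torch-native homogeneous case could look like, assuming rates in Hz and a global dt in milliseconds (the dt handling is part of the proposal above, not the current implementation):

import torch

def poisson_torch(datum: torch.Tensor, time: int, dt: float = 1.0) -> torch.Tensor:
    # Expected spikes per simulation step: rate (Hz) * dt (ms) / 1000.
    rate = datum.flatten().float() * dt / 1000.0
    # Sample per-step spike counts, then binarize (at most one spike per step).
    # Depending on the torch version, zero rates may need clamping to a small epsilon.
    counts = torch.distributions.Poisson(rate).sample((time,))
    return (counts > 0).byte().view(time, *datum.shape)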

Q: Structured input in Poisson?

Thanks for this fantastic package.

It seems that above a certain value, the poisson encoder stops generating spikes. Is this expected because the firing frequency is above the sampling rate? I'm puzzled by the fact that the top neurons keep firing, although it should be random.

from bindsnet.encoding import poisson
import matplotlib.pyplot as plt
import torch

plt.matshow(poisson(torch.tensor(range(100)).float()*100., 200))
plt.show()

[Image: raster plot of the Poisson-encoded spikes]

Problem while running python setup.py install

Hi there,
On Windows 10, using Python 3.6.6 :: Anaconda custom (64-bit), I get the complaints below while installing bindsnet. Any ideas?

D:\EEG_Signals\bindsnet-master>python setup.py install
D:\Anaconda3\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
warnings.warn(msg)
running install
running bdist_egg
running egg_info
creating bindsnet.egg-info
writing bindsnet.egg-info\PKG-INFO
writing dependency_links to bindsnet.egg-info\dependency_links.txt
writing requirements to bindsnet.egg-info\requires.txt
writing top-level names to bindsnet.egg-info\top_level.txt
writing manifest file 'bindsnet.egg-info\SOURCES.txt'
reading manifest file 'bindsnet.egg-info\SOURCES.txt'
writing manifest file 'bindsnet.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
creating build
creating build\lib
creating build\lib\bindsnet
copying bindsnet\utils.py -> build\lib\bindsnet
copying bindsnet_init_.py -> build\lib\bindsnet
creating build\lib\bindsnet\analysis
copying bindsnet\analysis\plotting.py -> build\lib\bindsnet\analysis
copying bindsnet\analysis\visualization.py -> build\lib\bindsnet\analysis
copying bindsnet\analysis_init_.py -> build\lib\bindsnet\analysis
creating build\lib\bindsnet\conversion
copying bindsnet\conversion_init_.py -> build\lib\bindsnet\conversion
creating build\lib\bindsnet\datasets
copying bindsnet\datasets\preprocess.py -> build\lib\bindsnet\datasets
copying bindsnet\datasets_init_.py -> build\lib\bindsnet\datasets
creating build\lib\bindsnet\encoding
copying bindsnet\encoding_init_.py -> build\lib\bindsnet\encoding
creating build\lib\bindsnet\environment
copying bindsnet\environment_init_.py -> build\lib\bindsnet\environment
creating build\lib\bindsnet\evaluation
copying bindsnet\evaluation_init_.py -> build\lib\bindsnet\evaluation
creating build\lib\bindsnet\learning
copying bindsnet\learning_init_.py -> build\lib\bindsnet\learning
creating build\lib\bindsnet\models
copying bindsnet\models_init_.py -> build\lib\bindsnet\models
creating build\lib\bindsnet\network
copying bindsnet\network\monitors.py -> build\lib\bindsnet\network
copying bindsnet\network\nodes.py -> build\lib\bindsnet\network
copying bindsnet\network\topology.py -> build\lib\bindsnet\network
copying bindsnet\network_init_.py -> build\lib\bindsnet\network
creating build\lib\bindsnet\pipeline
copying bindsnet\pipeline\action.py -> build\lib\bindsnet\pipeline
copying bindsnet\pipeline_init_.py -> build\lib\bindsnet\pipeline
creating build\lib\bindsnet\preprocessing
copying bindsnet\preprocessing_init_.py -> build\lib\bindsnet\preprocessing
creating build\bdist.win-amd64
creating build\bdist.win-amd64\egg
creating build\bdist.win-amd64\egg\bindsnet
creating build\bdist.win-amd64\egg\bindsnet\analysis
copying build\lib\bindsnet\analysis\plotting.py -> build\bdist.win-amd64\egg\bindsnet\analysis
copying build\lib\bindsnet\analysis\visualization.py -> build\bdist.win-amd64\egg\bindsnet\analysis
copying build\lib\bindsnet\analysis_init_.py -> build\bdist.win-amd64\egg\bindsnet\analysis
creating build\bdist.win-amd64\egg\bindsnet\conversion
copying build\lib\bindsnet\conversion_init_.py -> build\bdist.win-amd64\egg\bindsnet\conversion
creating build\bdist.win-amd64\egg\bindsnet\datasets
copying build\lib\bindsnet\datasets\preprocess.py -> build\bdist.win-amd64\egg\bindsnet\datasets
copying build\lib\bindsnet\datasets_init_.py -> build\bdist.win-amd64\egg\bindsnet\datasets
creating build\bdist.win-amd64\egg\bindsnet\encoding
copying build\lib\bindsnet\encoding_init_.py -> build\bdist.win-amd64\egg\bindsnet\encoding
creating build\bdist.win-amd64\egg\bindsnet\environment
copying build\lib\bindsnet\environment_init_.py -> build\bdist.win-amd64\egg\bindsnet\environment
creating build\bdist.win-amd64\egg\bindsnet\evaluation
copying build\lib\bindsnet\evaluation_init_.py -> build\bdist.win-amd64\egg\bindsnet\evaluation
creating build\bdist.win-amd64\egg\bindsnet\learning
copying build\lib\bindsnet\learning_init_.py -> build\bdist.win-amd64\egg\bindsnet\learning
creating build\bdist.win-amd64\egg\bindsnet\models
copying build\lib\bindsnet\models_init_.py -> build\bdist.win-amd64\egg\bindsnet\models
creating build\bdist.win-amd64\egg\bindsnet\network
copying build\lib\bindsnet\network\monitors.py -> build\bdist.win-amd64\egg\bindsnet\network
copying build\lib\bindsnet\network\nodes.py -> build\bdist.win-amd64\egg\bindsnet\network
copying build\lib\bindsnet\network\topology.py -> build\bdist.win-amd64\egg\bindsnet\network
copying build\lib\bindsnet\network_init_.py -> build\bdist.win-amd64\egg\bindsnet\network
creating build\bdist.win-amd64\egg\bindsnet\pipeline
copying build\lib\bindsnet\pipeline\action.py -> build\bdist.win-amd64\egg\bindsnet\pipeline
copying build\lib\bindsnet\pipeline_init_.py -> build\bdist.win-amd64\egg\bindsnet\pipeline
creating build\bdist.win-amd64\egg\bindsnet\preprocessing
copying build\lib\bindsnet\preprocessing_init_.py -> build\bdist.win-amd64\egg\bindsnet\preprocessing
copying build\lib\bindsnet\utils.py -> build\bdist.win-amd64\egg\bindsnet
copying build\lib\bindsnet_init_.py -> build\bdist.win-amd64\egg\bindsnet
byte-compiling build\bdist.win-amd64\egg\bindsnet\analysis\plotting.py to plotting.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\analysis\visualization.py to visualization.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\analysis_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\conversion_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\datasets\preprocess.py to preprocess.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\datasets_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\encoding_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\environment_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\evaluation_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\learning_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\models_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\network\monitors.py to monitors.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\network\nodes.py to nodes.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\network\topology.py to topology.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\network_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\pipeline\action.py to action.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\pipeline_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\preprocessing_init_.py to init.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet\utils.py to utils.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\bindsnet_init_.py to init.cpython-36.pyc
creating build\bdist.win-amd64\egg\EGG-INFO
copying bindsnet.egg-info\PKG-INFO -> build\bdist.win-amd64\egg\EGG-INFO
copying bindsnet.egg-info\SOURCES.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying bindsnet.egg-info\dependency_links.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying bindsnet.egg-info\not-zip-safe -> build\bdist.win-amd64\egg\EGG-INFO
copying bindsnet.egg-info\requires.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying bindsnet.egg-info\top_level.txt -> build\bdist.win-amd64\egg\EGG-INFO
creating dist
creating 'dist\bindsnet-0.2.2-py3.6.egg' and adding 'build\bdist.win-amd64\egg' to it
removing 'build\bdist.win-amd64\egg' (and everything under it)
Processing bindsnet-0.2.2-py3.6.egg
creating d:\anaconda3\lib\site-packages\bindsnet-0.2.2-py3.6.egg
Extracting bindsnet-0.2.2-py3.6.egg to d:\anaconda3\lib\site-packages
Adding bindsnet 0.2.2 to easy-install.pth file

Installed d:\anaconda3\lib\site-packages\bindsnet-0.2.2-py3.6.egg
Processing dependencies for bindsnet==0.2.2
Searching for pyproj<=1.9.5.2
Downloading https://github.com/jswhit/pyproj/archive/429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9.zip#egg=pyproj-1.9.5.2
Best match: pyproj 1.9.5.2
Processing 429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9.zip
Writing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\setup.cfg
Running pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\setup.py -q bdist_egg --dist-dir C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\egg-dist-tmp-491v1zqx
using bundled proj4..
nad2bin.c
nad2bin.c(130): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
nad2bin.c(131): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
nad2bin.c(138): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
nad2bin.c(139): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
nad2bin.c(353): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
nad2bin.c(354): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
pj_malloc.c
Generating code
Finished generating code
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\alaska < datumgrid\alaska.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\conus < datumgrid\conus.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\FL < datumgrid\FL.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\hawaii < datumgrid\hawaii.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\MD < datumgrid\MD.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\null < datumgrid\null.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\prvi < datumgrid\prvi.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\stgeorge < datumgrid\stgeorge.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\stlrnc < datumgrid\stlrnc.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\stpaul < datumgrid\stpaul.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\TN < datumgrid\TN.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\WI < datumgrid\WI.lla
Output Binary File Format: ctable2
executing C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\nad2bin lib\pyproj\data\WO < datumgrid\WO.lla
Output Binary File Format: ctable2
Compiling _proj.pyx because it changed.
[1/1] Cythonizing _proj.pyx
D:\Anaconda3\lib\site-packages\Cython\Compiler\Main.py:367: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9_proj.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
warning: no files found matching 'src*diff'
Traceback (most recent call last):
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules
yield saved
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 45, in execfile
exec(code, globals, locals)
File "C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\setup.py", line 161, in
File "D:\Anaconda3\lib\site-packages\setuptools\__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "D:\Anaconda3\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "D:\Anaconda3\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "D:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "D:\Anaconda3\lib\site-packages\setuptools\command\bdist_egg.py", line 172, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "D:\Anaconda3\lib\site-packages\setuptools\command\bdist_egg.py", line 158, in call_command
self.run_command(cmdname)
File "D:\Anaconda3\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "D:\Anaconda3\lib\site-packages\setuptools\command\install_lib.py", line 11, in run
self.build()
File "D:\Anaconda3\lib\distutils\command\install_lib.py", line 107, in build
self.run_command('build_ext')
File "D:\Anaconda3\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "D:\Anaconda3\lib\site-packages\setuptools\command\build_ext.py", line 78, in run
_build_ext.run(self)
File "D:\Anaconda3\lib\distutils\command\build_ext.py", line 308, in run
force=self.force)
File "D:\Anaconda3\lib\distutils\ccompiler.py", line 1031, in new_compiler
return klass(None, dry_run, force)
File "D:\Anaconda3\lib\distutils\cygwinccompiler.py", line 285, in __init__
CygwinCCompiler.__init__(self, verbose, dry_run, force)
File "D:\Anaconda3\lib\distutils\cygwinccompiler.py", line 129, in __init__
if self.ld_version >= "2.10.90":
TypeError: '>=' not supported between instances of 'NoneType' and 'str'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "setup.py", line 27, in
'https://github.com/jswhit/pyproj/archive/429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9.zip#egg=pyproj-1.9.5.2'
File "D:\Anaconda3\lib\site-packages\setuptools\__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "D:\Anaconda3\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "D:\Anaconda3\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "D:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "D:\Anaconda3\lib\site-packages\setuptools\command\install.py", line 67, in run
self.do_egg_install()
File "D:\Anaconda3\lib\site-packages\setuptools\command\install.py", line 117, in do_egg_install
cmd.run()
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 412, in run
self.easy_install(spec, not self.no_deps)
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 654, in easy_install
return self.install_item(None, spec, tmpdir, deps, True)
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 701, in install_item
self.process_distribution(spec, dist, deps)
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 746, in process_distribution
[requirement], self.local_index, self.easy_install
File "D:\Anaconda3\lib\site-packages\pkg_resources\__init__.py", line 774, in resolve
replace_conflicting=replace_conflicting
File "D:\Anaconda3\lib\site-packages\pkg_resources\__init__.py", line 1057, in best_match
return self.obtain(req, installer)
File "D:\Anaconda3\lib\site-packages\pkg_resources\__init__.py", line 1069, in obtain
return installer(requirement)
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 673, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 699, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 884, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 1152, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "D:\Anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 1138, in run_setup
run_setup(setup_script, args)
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 253, in run_setup
raise
File "D:\Anaconda3\lib\contextlib.py", line 99, in exit
self.gen.throw(type, value, traceback)
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "D:\Anaconda3\lib\contextlib.py", line 99, in exit
self.gen.throw(type, value, traceback)
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 166, in save_modules
saved_exc.resume()
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "D:\Anaconda3\lib\site-packages\setuptools\_vendor\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules
yield saved
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "D:\Anaconda3\lib\site-packages\setuptools\sandbox.py", line 45, in execfile
exec(code, globals, locals)
File "C:\Users\Jeff\AppData\Local\Temp\easy_install-5_yfou7x\pyproj-429a4fe6fa404ba1bc1c0a88bee68c1a30a9b6f9\setup.py", line 161, in
File "D:\Anaconda3\lib\site-packages\setuptools\__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "D:\Anaconda3\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "D:\Anaconda3\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "D:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "D:\Anaconda3\lib\site-packages\setuptools\command\bdist_egg.py", line 172, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "D:\Anaconda3\lib\site-packages\setuptools\command\bdist_egg.py", line 158, in call_command
self.run_command(cmdname)
File "D:\Anaconda3\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "D:\Anaconda3\lib\site-packages\setuptools\command\install_lib.py", line 11, in run
self.build()
File "D:\Anaconda3\lib\distutils\command\install_lib.py", line 107, in build
self.run_command('build_ext')
File "D:\Anaconda3\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "D:\Anaconda3\lib\site-packages\setuptools\command\build_ext.py", line 78, in run
_build_ext.run(self)
File "D:\Anaconda3\lib\distutils\command\build_ext.py", line 308, in run
force=self.force)
File "D:\Anaconda3\lib\distutils\ccompiler.py", line 1031, in new_compiler
return klass(None, dry_run, force)
File "D:\Anaconda3\lib\distutils\cygwinccompiler.py", line 285, in __init__
CygwinCCompiler.__init__(self, verbose, dry_run, force)
File "D:\Anaconda3\lib\distutils\cygwinccompiler.py", line 129, in __init__
if self.ld_version >= "2.10.90":
TypeError: '>=' not supported between instances of 'NoneType' and 'str'

Inconsistency about refractory period with Brian Simulator

In my opinion, the implementation of the refractory period in the Brian simulator makes more sense.

Assume refrac_period = 1

Bindsnet
t=0: mem+=input --> neuron spikes --> refrac = 1
t=1: refrac = 0 --> mem+=input --> neuron spikes 

Ref:
https://github.com/Hananel-Hazan/bindsnet/blob/6d4a7a7980080556c79c34d2a603bada12dc78d0/bindsnet/network/nodes.py#L324-L334

Brian simulation
t=0: mem+=input --> neuron spikes --> t_spike = 0
t=1: t_current - t_spike < refrac_period  --> do nothing
t=2: t_current - t_spike > refrac_period --> mem+=input --> neuron spikes --> ...

Ref:
https://github.com/brian-team/brian2/blob/2b8e459798bd84be1c01e707d74993f2f260b5ce/brian2/groups/neurongroup.py#L121

Inefficient implementation of connection normalization in network.run

        for c in self.connections:
            self.connections[c].normalize()
            self.connections[c].normalize_by_max()
            self.connections[c].normalize_by_max_from_shadow_weights()

The above code performs three checks per connection (each normalization method reduces to an if statement in the common case) every time network.run is called. We can implement this more efficiently, as sketched below.
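
One possible shape for that, purely to illustrate the caching idea (the _norm_fn attribute is hypothetical, not existing BindsNET code):

# Hypothetical sketch: resolve which normalization routine (if any) applies
# once per connection, instead of branching on every call to network.run.
for c in self.connections:
    conn = self.connections[c]
    if not hasattr(conn, "_norm_fn"):  # resolve and cache on first use
        if getattr(conn, "norm", None) is not None:
            conn._norm_fn = conn.normalize
        else:
            conn._norm_fn = lambda: None  # nothing to normalize
    conn._norm_fn()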

Weight update frequency of reward-modulated learning rule

for t in range(timesteps):
    for l in self.layers:
        # Update each layer of nodes.
        if isinstance(self.layers[l], AbstractInput):
            self.layers[l].forward(x=inpts[l][t])
        else:
            self.layers[l].forward(x=inpts[l])

        # Clamp neurons to spike.
        clamp = clamps.get(l, None)
        if clamp is not None:
            self.layers[l].s[clamp] = 1

    # Run synapse updates.
    for c in self.connections:
        self.connections[c].update(
            mask=masks.get(c, None), learning=self.learning, **kwargs
        )

Above is part of the run() function in the Network class. I found that when using the MSTDPET learning rule, the network updates the weights at every network timestep with the same reward value. I'm curious whether this is intended, because I expected the network to update at every agent timestep (every time the agent interacts with the environment).

After some thinking, I see that updating weights at every network timestep is required for non-reward-modulated learning rules like STDP. So I think there should be some condition that enables or disables the weight update.

I'm thinking about adding a new variable that checks whether the update has already been done once. This should only be activated for connections that follow a reward-modulated learning rule.
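
A minimal sketch of the gating described above (the one_shot and updated attributes are hypothetical names, not existing BindsNET API):

# Hypothetical: apply reward-modulated weight updates once per agent step,
# while leaving per-timestep updates intact for rules like plain STDP.
for c in self.connections:
    rule = self.connections[c].update_rule
    one_shot = getattr(rule, "one_shot", False)       # hypothetical flag
    if one_shot and getattr(rule, "updated", False):  # already applied this agent step
        continue
    self.connections[c].update(
        mask=masks.get(c, None), learning=self.learning, **kwargs
    )
    if one_shot:
        rule.updated = True  # reset externally at each agent-environment interaction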

issue about docker

Thanks for your awesome codeshare!

I am new to Docker and I do not know how to use the Dockerfile.

When I run the command

docker image build .

I check docker images and find an image whose container ID and image ID are none; maybe it is the image I need. But I cannot docker run the image.

So what should I do to use it? Thanks a lot!

Slowdown when computing with GPUs

I tested the performance of Peter Diehl's code in two different modes (with and without GPU) and found that there is a huge difference in runtime. Running the code on an 8-core i7 computer is twice as fast as running it on a server with 4 decent GPUs. Has anyone observed a similar runtime difference?

Easier way to control learning in networks

Right now you have to iterate through each connection's update rule and set it to NoOp, which is tedious. If you want to turn on learning, you have to restore the learning rule or reload it from disk.
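
A sketch of the kind of toggle being asked for (a hypothetical helper; no such function exists in the library at the time of writing):

from bindsnet.learning import NoOp

def set_learning(network, enabled: bool) -> None:
    # Hypothetical helper: stash each connection's update rule and swap in
    # NoOp, restoring the original rule when learning is re-enabled.
    for c in network.connections:
        conn = network.connections[c]
        if not enabled:
            conn._saved_rule = conn.update_rule
            conn.update_rule = NoOp(connection=conn)
        elif hasattr(conn, "_saved_rule"):
            conn.update_rule = conn._saved_rule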

Problem running the examples

Hello everybody,

first off thank you for your efforts, I really like your approach.

I cloned the up-to-date version of your repo and tried your examples on a fresh AWS Deep Learning AMI.
My PyTorch version is 0.4.0.

When I run the eth_mnist.py example I get the following error:

Traceback (most recent call last):
  File "minimal_mnist.py", line 30, in <module>
    pipeline.step()
  File "/home/ubuntu/src/bindsnet/bindsnet/pipeline/__init__.py", line 150, in step
    clamp=clamp)
  File "/home/ubuntu/src/bindsnet/bindsnet/network/__init__.py", line 265, in run
    self.connections[c].normalize()
  File "/home/ubuntu/src/bindsnet/bindsnet/network/topology.py", line 166, in normalize
    self.w *= self.norm / self.w.sum(0).unsqueeze(-1)
RuntimeError: The expanded size of the tensor (784) must match the existing size (400) at non-singleton dimension 0

Thanks for your help!

Best regards,
Florian

Consistency with nu and nu_post, nu_pre

There are some inconsistencies with nu_post and nu_pre. For instance, the Network is implemented using a tuple (nu_pre, nu_post), whereas some of the models take nu_pre and nu_post as separate parameters. (Also, since they take kwargs, it silently does nothing if one is used instead of the other.)

I think we should try to keep it consistent.

Unless this is intentional?

Implementing conduction delays?

Implementing a conduction delay for a connection, or a set of conduction delays to directly model an impulse response (for a connection)?

cannot import name 'CurrentLIFNodes'

I just installed the latest version of bindsnet from GitHub.
But when I want to run the example LIFNodes vs. CurrentLIFNodes, I get the error "cannot import name 'CurrentLIFNodes'". I then went to the nodes file, and I also cannot find the object CurrentLIFNodes.
Has the object "CurrentLIFNodes" already been deleted?

Import paths don't function the same

#100 changed the functionality of imports; I'm not sure if that was intended. I think this breaks some of the examples.

Before:

from bindsnet import *
network = DiehlAndCook2015(100*100)

After:

from bindsnet import *
network = models.DiehlAndCook2015(100*100)

or

from bindsnet.models import *
network = DiehlAndCook2015(100*100)

General plot function

I'm having trouble visualizing how this function is going to be used. As @djsaunde has mentioned before, it'd be better to have it use plt.plot(), but this brings up a couple of problems if we need to plot voltages and the like, since those use a colormap. So, for example, if we are trying to plot the voltages of neurons throughout the simulation, how do we want this visualized? It was simple to do using plt.matshow(), but since we are trying to avoid that for this function, it makes things a bit tricky.

Also, are we interested in something that plots traces for us? I don't believe anything showcases that, and I'd say it might be useful, especially as we look to make time more relevant in our tests.

Possible use of torch.multiprocessing

Consider the simulation loop in the Network.run() function:

# Simulate network activity for `time` timesteps.
        for t in range(timesteps):
->          for l in self.layers:
                # Update each layer of nodes.
                if isinstance(self.layers[l], AbstractInput):
                    self.layers[l].step(inpts[l][t], self.dt)
                else:
                    self.layers[l].step(inpts[l], self.dt)

                # Clamp neurons to spike.
                clamp = clamps.get(l, None)
                if clamp is not None:
                    self.layers[l].s[clamp] = 1

            # Run synapse updates.
->          for c in self.connections:
                self.connections[c].update(
                    reward=reward, mask=masks.get(c, None), learning=self.learning
                )

            # Get input to all layers.
            inpts.update(self.get_inputs())

            # Record state variables of interest.
            for m in self.monitors:
                self.monitors[m].record()
        
        # Re-normalize connections.
->      for c in self.connections:
            self.connections[c].normalize()

Where I've marked a ->, there might be an opportunity to use torch.multiprocessing. Since we do updates at time t based on network state at time t-1, all Nodes / Connections updates can be performed with a separate process (thread?) at once. Letting k = no. of layers, m = no. of connections, given enough CPU / GPU resources, the loops marked with -> would have time complexity O(1) instead of O(k), O(m) in the number of layers and connections, respectively.

I think it'd be good to keep two (?) multiprocessing.Pool objects around, one for Nodes objects and another for Connection objects. Instead of statements of the form:

for l in self.layers:
    self.layers[l].step(...)

We might rewrite this as something like:

self.nodes_pool.map(Nodes.step, self.layers)

Here, nodes_pool is defined as an attribute in the Network constructor. This last bit probably won't work straightaway; we'd need to figure out the right syntax (if it exists).

This same idea can also be applied in the Network's reset() and get_inputs() functions.

Multiple inference on a single image in reservoir.py

Hello, I would like to use the network in reservoir.py to compute, for a single image, the output probabilities associated with each class.
First, I train the network, then I save the weights and the state of the network with:

torch.save(model.state_dict(), 'weights_res.pkl')
network.save('Network.pt')

Hence, I comment out the code for the training phase and load the pre-trained network in this way:

network = load('Network.pt', learning=False)
model.load_state_dict(torch.load('weights_res.pkl'))

The purpose of my analysis is to run inference on a single image of the MNIST dataset multiple times. I don't need to evaluate the accuracy over all of the test images. So, when I perform the test phase, I modify the code in the following way:

loader = zip(poisson_loader(images, time=250), iter(labels))

n_iters = 500
test_pairs = []
for i, (datum, label) in enumerate(loader):
    if i == IMAGE_NUMBER:  # instead of `if i % 100 == 0`
        print('Test progress: (%d / %d)' % (i, n_iters))

        network.run(inpts={'I': datum}, time=250)

        test_pairs.append([spikes['O'].get('s').sum(-1), label])
...
...
for s, label in test_pairs:
    outputs = model(s)
    _, predicted = torch.max(outputs.data.unsqueeze(0), 1)
    output_prob = nn.functional.softmax(torch.log(outputs - torch.min(outputs) + torch.max(outputs)))

I use outputs to compute the output probabilities for image IMAGE_NUMBER. My problem is that I obtain strange results: the output probabilities are often wrong and different every time I run the test. For example, considering image number 0, the first of the test set, I obtain the following results:

predicted tensor([5])
outputs tensor([-0.7257, 0.4272, -0.1043, -0.0577, 0.2966, 0.9772, 0.2493, -0.1010, -0.7273, -0.2445], grad_fn=)
output probs log tensor([0.0575, 0.1251, 0.0939, 0.0967, 0.1175, 0.1574, 0.1147, 0.0941, 0.0574, 0.0857]

The label corresponding to image number 0 is "7", so I don't understand why the probability for class "7" is 0.0941, while the network recognizes the image (which is clearly a seven) as a five.
Did I perhaps make a mistake during the test phase? Or is this not the correct way to compute the output probabilities of an image? I hope I was clear in explaining my aim and my problem.
Thank you in advance,
Giorgio

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

@djsaunde
When running the ETH.py without any parameters I'm getting the error:

Traceback (most recent call last):
  File "eth_no_inh_layer.py", line 169, in <module>
    spikes = network.run(inpts=inpts, time=time)
  File "/mnt/d/Downloads/git repos/bindsnet/bindsnet/network/__init__.py", line 145, in run
    self.monitors[monitor].record()
  File "/mnt/d/Downloads/git repos/bindsnet/bindsnet/network/__init__.py", line 200, in record
    self.recording[var] = torch.cat([self.recording[var], data], 1)
RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

Use f-strings for string formatting

f-strings are new in Python 3.6 (which our library requires). They appear to be a better option for string formatting than the %-formatted strings we currently use, for readability and performance reasons, and they boast some additional neat features (see the blog post linked above). Here's an example:

x = 2
y = 3
print(f'{x} plus {y} equals {x + y}')
# prints "2 plus 3 equals 5"

How to save the trained network weights?

I trained the network, and after 60000 steps conv_mnist.py just exited. There was no weights file that I could find. How can I test the network with our own inputs? Or is it supposed to be trained via a notebook file, where we keep it 'live' and give it custom test images?

Logistic regression evaluation method

I'd like to implement a logistic regression function in the bindsnet.evaluation module. This will be a principled way to evaluate a network's learned spiking behavior, at least if we are doing classification. I'll likely use sklearn.linear_model.LogisticRegression to do this.
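
A minimal sketch of how that could look, assuming per-neuron spike counts are the features (the function names and shapes are assumptions, not the final module API):

import torch
from sklearn.linear_model import LogisticRegression

def fit_logreg(spike_counts: torch.Tensor, labels: torch.Tensor) -> LogisticRegression:
    # spike_counts: [n_samples, n_neurons] summed spikes; labels: [n_samples].
    model = LogisticRegression(max_iter=1000)
    model.fit(spike_counts.cpu().numpy(), labels.cpu().numpy())
    return model

def predict_logreg(model: LogisticRegression, spike_counts: torch.Tensor) -> torch.Tensor:
    return torch.from_numpy(model.predict(spike_counts.cpu().numpy()))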

Moving Network objects to different device?

Is there an efficient/easy way to move a network from one device to another?

For instance, if I instantiate a network on GPU 1 and want to move it to GPU 2, how would I go about doing this? Obviously I can iterate through all Tensor objects in the Network and call .to(device), but is there a better alternative?
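
For reference, the brute-force iteration mentioned above might look like the following sketch (it assumes layer, connection, and monitor state lives either in registered nn.Module buffers or in plain tensor attributes):

import torch

def network_to(network, device):
    # Hypothetical helper: move every tensor attribute (and registered
    # parameter/buffer, for nn.Module-based objects) to `device`.
    for group in (network.layers, network.connections, network.monitors):
        for key in group:
            obj = group[key]
            if isinstance(obj, torch.nn.Module):
                obj.to(device)  # moves parameters and buffers in place
            for name, value in vars(obj).items():
                if torch.is_tensor(value):
                    setattr(obj, name, value.to(device))
    return network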

Supporting non-1.0 dt

I tried to change the value of dt from 1.0 to 0.1, but it didn't fit in the pipeline with other components. The error I encountered was a mismatch in the number of timesteps between the encoded input and the network, since the encoded input's number of timesteps is time while the network's number of timesteps is time/dt. I was thinking about resolving this issue, but I found that it requires redesigning almost every encoding function. Any ideas about this? Or am I missing something?

Thank you.

default use of `plot_spikes()` breaks existing code

Since the recent change to plot_spikes(), I've been getting errors of the form:

Traceback (most recent call last):
  File "random_network_baseline.py", line 91, in <module>
    output='R')
  File "/home/dan/code/bindsnet/bindsnet/pipeline/__init__.py", line 89, in __init__
    self.plot_data()
  File "/home/dan/code/bindsnet/bindsnet/pipeline/__init__.py", line 198, in plot_data
    self.s_ims, self.s_axes = plot_spikes(self.spike_record)
  File "/home/dan/code/bindsnet/bindsnet/analysis/plotting.py", line 91, in plot_spikes
    for layer in network.layers:
AttributeError: 'dict' object has no attribute 'layers'

Could we revert to the default behavior of accepting a dictionary of spikes to plot? I like the option of plotting from a Network instance, but I liked the previous default option.

This is totally up for debate, by the way.

minimal_mnist.py results in AttributeError: 'numpy.ndarray' object has no attribute 'cpu'

On a fresh setup, I get an error when running the minimal_mnist example.

➜ pip install -e .
...

➜ python --version
Python 3.6.5

➜ python minimal_mnist.py
Loading training images from serialized object file.

Loading training labels from serialized object file.

Traceback (most recent call last):
  File "minimal_mnist.py", line 17, in <module>
    encoding=poisson, time=350, plot_interval=1)
  File "/Users/amolk/work/bindsnet/bindsnet/pipeline/__init__.py", line 113, in __init__
    self.plot_data()
  File "/Users/amolk/work/bindsnet/bindsnet/pipeline/__init__.py", line 249, in plot_data
    self.voltage_record, plot_type=self.plot_type, threshold=self.threshold_value
  File "/Users/amolk/work/bindsnet/bindsnet/analysis/plotting.py", line 413, in plot_voltages
    v[1].cpu().numpy()[n_neurons[v[0]][0]:n_neurons[v[0]][1], time[0]:time[1]].cpu().numpy().T
AttributeError: 'numpy.ndarray' object has no attribute 'cpu'

How to use the rank_order encoding

Hi,
I'd like to ask how to use the rank-order encoding.
I tried to use it in some network models, e.g., eth_mnist, with the following lines of code:

from bindsnet.encoding import rank_order_loader
data_loader = rank_order_loader(data=images, time=time, dt=dt)

However, it gave me the following error:

Traceback (most recent call last):
File "eth_mnist.py", line 132, in
sample = next(data_loader)
File "/.../bindsnet/lib/python3.6/site-packages/bindsnet/encoding/__init__.py", line 207, in rank_order_loader
yield rank_order(datum=data[i], time=time, dt=dt)  # Encode datum as rank order-encoded spike trains.
File "/.../bindsnet/lib/python3.6/site-packages/bindsnet/encoding/__init__.py", line 173, in rank_order
assert datum >= 0, 'Inputs must be non-negative'
RuntimeError: bool value of Tensor with more than one value is ambiguous

What should I do to fix this?
Thanks in advance

Best regards
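
The traceback points at a truthiness bug in the library's assertion rather than at the calling code: comparing a multi-element tensor against 0 yields a tensor, which has no single bool value. A sketch of the fix in rank_order (based only on the line shown in the traceback):

# bindsnet/encoding/__init__.py, rank_order(); the original line was
#     assert datum >= 0, 'Inputs must be non-negative'
# which fails for multi-element tensors. Reducing with .all() fixes it:
assert (datum >= 0).all(), 'Inputs must be non-negative'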

Implementation of different inhibition mechanisms

Hi,
I am trying to test the ideas from the "Unsupervised Learning with Self-Organizing Spiking Neural Networks" work (arXiv:1807.09374); however, it is still not working.
I would like to ask if any example of the inhibition mechanisms presented in this paper is available in BindsNET?
Thank you.

Error when using the GPU for the MNIST example

When I run the example code with the GPU:
python eth_mnist.py --gpu
I get the following error:

Loading training images from serialized object file.

Loading training labels from serialized object file.


Begin training.

Progress: 0 / 60000 (0.0000 seconds)
Traceback (most recent call last):
  File "eth_mnist.py", line 128, in <module>
    sample = next(data_loader)
  File "/home/elliot/anaconda3/envs/pytorch_1.0/lib/python3.6/site-packages/bindsnet/encoding/__init__.py", line 108, in poisson_loader
    yield poisson(data[i], time)  # Encode datum as Poisson spike trains.
  File "/home/elliot/anaconda3/envs/pytorch_1.0/lib/python3.6/site-packages/bindsnet/encoding/__init__.py", line 75, in poisson
    datum = np.copy(datum)
  File "/home/elliot/anaconda3/envs/pytorch_1.0/lib/python3.6/site-packages/numpy/lib/function_base.py", line 792, in copy
    return array(a, order=order, copy=True)
  File "/home/elliot/anaconda3/envs/pytorch_1.0/lib/python3.6/site-packages/torch/tensor.py", line 450, in __array__
    return self.numpy()
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Plotting directly from network state

I would like to be able to call the functions plot_voltages or plot_spikes (and the like) and pass a Network object (in addition to being able to pass dictionaries of voltages or spikes, as we currently do). The function should, by default, plot all layers' voltages / spikes / etc., depending on its header, but the user should be able to select a subset of layers to consider. The state variables should be read from Monitor objects.

How about making an Agent class?

After reading and writing BindsNET code, I felt the current structure of the project is a little bit unintuitive. I'm therefore suggesting introducing a new class called Agent, which contains a network, an encoding, an action_function, and maybe more. I think this would help us a lot in organizing the code around a concrete conceptual model.

For example, related to what we discussed in #151, the action_function and network should share the same dt. Until now, this has had to be synchronized manually. If we introduce the Agent class, this issue is resolved naturally by sharing the Agent's dt value.

Actually, I'm already working on it, and I expect it can be done without any serious additional coding. But it requires some changes in other modules. For example, most of the code in the Pipeline's step() function would move into the Agent's step() function. And the implementation of the action function would also change a little.

I want to hear your opinion on this idea. Do you think introducing Agent class fits well with the purpose of BindsNET?

Implementation of MSTDP and MSTDPET

I'm currently working on reward-modulated STDP, and noticed that there are some discrepancies between the Florian 2007 paper and your MSTDP(ET) implementation. This was already mentioned in #140 and fixed (it seems) in #141; however, these changes were reverted in #165. Was this intentional? I was planning on fixing everything with some PRs, but if there are reasons why you wouldn't want the original paper implementation in BindsNET and thus reverted the fixes, I would like to know 😄

Weight clamping ineffective

Connection gives you the possibility of clamping your weights to a range wmin-wmax: https://github.com/Hananel-Hazan/bindsnet/blob/f640fb602c6b550b4c9858ee533848fc3c02f135/bindsnet/network/topology.py#L136

However, this clamping doesn't seem to be very effective, as it happens in the super call to LearningRule:
https://github.com/Hananel-Hazan/bindsnet/blob/f640fb602c6b550b4c9858ee533848fc3c02f135/bindsnet/learning/__init__.py#L55

and weights are used in Network.get_inputs, which calls Connection.compute:
https://github.com/Hananel-Hazan/bindsnet/blob/f640fb602c6b550b4c9858ee533848fc3c02f135/bindsnet/network/topology.py#L173

This in itself wouldn't be a problem, if it weren't for the fact that weights can be increased/decreased beyond wmin/wmax between the clamping and the actual use, see for example here:
https://github.com/Hananel-Hazan/bindsnet/blob/f640fb602c6b550b4c9858ee533848fc3c02f135/bindsnet/learning/__init__.py#L535

Is this intended behaviour, am I missing something, or does this indeed seem wrong? I can do something about this, but I was wondering about your preferred way of solving this.

RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'mat2'

Hello, I ran minimal_reservoir.py and got this error:
Loading training images from serialized object file.

Loading training labels from serialized object file.

Train progress: (50 / 500)
Train progress: (100 / 500)
Train progress: (150 / 500)
Train progress: (200 / 500)
Train progress: (250 / 500)
Train progress: (300 / 500)
Train progress: (350 / 500)
Train progress: (400 / 500)
Train progress: (450 / 500)
Train progress: (500 / 500)

Traceback (most recent call last):
File "/home/kevin/IMRA_le/3_Program/SNN/bindsnet/examples/mnist/minimal_reservoir.py", line 52, in
output = model(s)
File "/home/kevin/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/kevin/IMRA_le/3_Program/SNN/bindsnet/examples/mnist/minimal_reservoir.py", line 19, in forward
return self.linear(x)
File "/home/kevin/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/kevin/py36/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward
return F.linear(input, self.weight, self.bias)
File "/home/kevin/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 1354, in linear
output = input.matmul(weight.t())
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'mat2'

Testing with conv_mnist.py

Hello,
I simply tested conv_mnist.py to see what would be shown, but eventually I got this message:

Loading training labels from serialized object file.
Begin training.
Progress: 0 / 60000 (0.0000 seconds)
Traceback (most recent call last):
File "conv_mnist.py", line 151, in
network.run(inpts=inpts, time=time) #, clamp=clamp)
File "/homes/lhoang/anaconda3/lib/python3.7/site-packages/bindsnet/network/__init__.py", line 250, in run
inpts.update(self.get_inputs())
File "/homes/lhoang/anaconda3/lib/python3.7/site-packages/bindsnet/network/__init__.py", line 189, in get_inputs
inpts[c[1]] += self.connections[c].compute(source.s)
File "/homes/lhoang/anaconda3/lib/python3.7/site-packages/bindsnet/network/topology.py", line 153, in compute
post = s.float().view(-1) @ self.w + self.b
RuntimeError: Expected tensor to have size 400 at dimension 1, but got size 4 for argument #2 'batch2' (while checking arguments for bmm)

As I am new to PyTorch, is there a way to fix this problem?
Thank you.
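If it helps to localize the problem: the failing line multiplies the flattened spike vector of the source layer by the connection's weight matrix, so the flattened source must have exactly as many elements as w has rows (400 expected here, but only 4 arrived). A toy reproduction of that mismatch, with illustrative shapes not taken from the example:

import torch

s = torch.rand(4) > 0.5        # source spikes with only 4 elements
w = torch.rand(400, 100)       # weight matrix expecting 400 presynaptic neurons
post = s.float().view(-1) @ w  # raises a size-mismatch RuntimeError like the one above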

supervised_mnist.py error

I have built the library from source.
When running python supervised_mnist.py --gpu I get the following error.

Loading training images from serialized object file.

Loading training labels from serialized object file.

Begin training.

Progress: 0 / 50000 (0.0000 seconds)
Progress: 10 / 50000 (1.6048 seconds)
Progress: 20 / 50000 (1.7189 seconds)
Progress: 30 / 50000 (1.5952 seconds)
Progress: 40 / 50000 (1.5939 seconds)
Progress: 50 / 50000 (1.5914 seconds)
Progress: 60 / 50000 (1.5920 seconds)
Progress: 70 / 50000 (1.5892 seconds)
Progress: 80 / 50000 (1.5927 seconds)
Progress: 90 / 50000 (1.5949 seconds)
Progress: 100 / 50000 (1.5965 seconds)
Progress: 110 / 50000 (1.5923 seconds)
Progress: 120 / 50000 (1.5900 seconds)
Progress: 130 / 50000 (1.5927 seconds)
Progress: 140 / 50000 (1.5915 seconds)
Progress: 150 / 50000 (1.5953 seconds)
Progress: 160 / 50000 (1.5927 seconds)
Progress: 170 / 50000 (1.5882 seconds)
Progress: 180 / 50000 (1.5892 seconds)
Progress: 190 / 50000 (1.5924 seconds)
Progress: 200 / 50000 (1.5946 seconds)
Progress: 210 / 50000 (1.5924 seconds)
Progress: 220 / 50000 (1.5938 seconds)
Progress: 230 / 50000 (1.5915 seconds)
Progress: 240 / 50000 (1.5905 seconds)
Progress: 250 / 50000 (1.5894 seconds)
Traceback (most recent call last):
  File "supervised_mnist.py", line 123, in <module>
    % (accuracy['all'][-1], np.mean(accuracy['all']), np.max(accuracy['all'])))
  File "/numpy/core/fromnumeric.py", line 3118, in mean
    out=out, **kwargs)
  File "/numpy/core/_methods.py", line 85, in _mean
    ret = ret.dtype.type(ret / rcount)
AttributeError: 'torch.dtype' object has no attribute 'type'
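For anyone else hitting this: np.mean fails because the entries of accuracy['all'] are torch tensors rather than Python floats, so numpy's _mean ends up calling methods on a torch.dtype. A minimal workaround sketch, assuming accuracy['all'] is a list of scalar tensors (I haven't verified this against the script):

import numpy as np

all_acc = [float(a) for a in accuracy['all']]  # scalar tensors -> Python floats (works for CUDA tensors too)
print(all_acc[-1], np.mean(all_acc), np.max(all_acc))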

Issue about 'example/mnist/eth_mnist.py' using --gpu

Hi all, I'm new to the project and just cloned the latest master branch on my own machine.

I installed the bindsnet package with
pip install .
and tried to run the example code eth_mnist.py in GPU mode:
python eth_mnist.py --gpu
but an error occurs at runtime:

Loading training labels from serialized object file.
Begin training.
Progress: 0 / 60000 (0.0000 seconds)
Progress: 10 / 60000 (5.3936 seconds)
Progress: 20 / 60000 (5.3005 seconds)
Progress: 30 / 60000 (5.2392 seconds)
Progress: 40 / 60000 (5.2304 seconds)
Progress: 50 / 60000 (5.2242 seconds)
Progress: 60 / 60000 (5.2055 seconds)
Progress: 70 / 60000 (5.1790 seconds)
Progress: 80 / 60000 (5.1758 seconds)
Progress: 90 / 60000 (5.1704 seconds)
Progress: 100 / 60000 (5.2317 seconds)
Progress: 110 / 60000 (5.1757 seconds)
Progress: 120 / 60000 (5.5117 seconds)
Progress: 130 / 60000 (5.5386 seconds)
Progress: 140 / 60000 (5.1558 seconds)
Progress: 150 / 60000 (5.1877 seconds)
Progress: 160 / 60000 (5.1674 seconds)
Progress: 170 / 60000 (5.1508 seconds)
Progress: 180 / 60000 (5.1378 seconds)
Progress: 190 / 60000 (5.1910 seconds)
Progress: 200 / 60000 (5.1521 seconds)
Progress: 210 / 60000 (5.1740 seconds)
Progress: 220 / 60000 (5.1459 seconds)
Progress: 230 / 60000 (5.1683 seconds)
Progress: 240 / 60000 (5.1265 seconds)
Progress: 250 / 60000 (5.1413 seconds)

Traceback (most recent call last):
  File "eth_mnist.py", line 128, in <module>
    % (accuracy['all'][-1], np.mean(accuracy['all']), np.max(accuracy['all'])))
  File "/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py", line 2920, in mean
    out=out, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/numpy/core/_methods.py", line 85, in _mean
    ret = ret.dtype.type(ret / rcount)
AttributeError: 'torch.dtype' object has no attribute 'type'

How do I fix this? Is it my environment, or the code version?

Could you give me some hints for running the MNIST example code on the GPU?

Iteration stopped

Hi,
I have encountered an issue running the file eth_mnist.py.
I used a loop with three iterations to train the network three times. It worked with a small training set; however, training stopped right after the first iteration when all 60000 training samples were selected. The screenshot below shows the message from the shell.
[screenshot of the shell output, 2019-02-12]
The issue might come from this line:
sample = next(data_loader)
Regarding the setup, I increased the number of excitatory neurons to 400.
Has anyone else run into the same issue?
Thank you.
BR
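A plausible cause, for anyone who hits the same thing: if data_loader is a plain Python iterator or generator, it is exhausted after one full pass over the 60000 samples, so the first next(data_loader) of the second pass has nothing left to yield. Rebuilding the iterator at the start of each pass usually fixes this; the dataset name below is illustrative:

for epoch in range(3):
    data_loader = iter(dataset)     # fresh iterator for every pass
    for _ in range(60000):
        sample = next(data_loader)  # no longer exhausted on later passes
        # ... run the network on `sample` ...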

Integrating n-gram

@hqkhan, when you have the time, can you put together a pull request with the ngram functionality in it?

Implementation of MSTDPET is wrong?

First of all, thank you for making this library open source. I really appreciate the intention and approach of this project, and I hope I can contribute something.

Maybe this problem would be more properly handled via a pull request or similar, but I'm not really used to GitHub's contributing process. Please tell me if I'm doing something wrong.

Here's the thing: I looked inside the MSTDPET learning rule and found that its implementation differs from Florian's original learning rule. The current implementation is as follows.
self.p_plus = -(self.tc_plus * self.p_plus) + a_plus * source_x
self.p_minus = -(self.tc_minus * self.p_minus) + a_minus * target_x

However, in Florian's paper the traces are described by the differential equations

dP+/dt = -P+ / tau_+ + A+ * s_pre(t)
dP-/dt = -P- / tau_- + A- * s_post(t)

or by their integrated form, in which each trace decays exponentially with time constant tau_± and jumps by A± when the corresponding neuron spikes.

Maybe whoever implemented it confused dP/dt with P itself.

Thank you for reading!
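For reference, here is a sketch of the discretized update I would expect from Florian's equations, reusing the names from the current implementation (I'm treating tc_plus/tc_minus as the time constants tau_± and dt as the simulation timestep; this is my reading of the paper, not a tested patch):

import math

def update_traces(p_plus, p_minus, source_x, target_x,
                  a_plus, a_minus, tc_plus, tc_minus, dt):
    # Exponential integration of dP/dt = -P / tau + A * s(t) over one step dt:
    # each trace decays by exp(-dt / tau) and jumps by A when a spike arrives.
    p_plus = p_plus * math.exp(-dt / tc_plus) + a_plus * source_x
    p_minus = p_minus * math.exp(-dt / tc_minus) + a_minus * target_x
    return p_plus, p_minus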

[Bug Maybe?][model: IncreasingInhibitionNetwork][Weight Initialization]

Hi, I'm looking into the code for the IncreasingInhibitionNetwork model and am a bit confused about lines 228-229:

inhib = -self.start_inhib * np.sqrt(euclidean([x1, y1], [x2, y2]))
w[i, j] = min(self.max_inhib, inhib)

I think this initializes the connection weights w depending on the distance between each pair of neurons.

for i in range(self.n_neurons):
            for j in range(self.n_neurons):
                if i != j:
                    x1, y1 = i // self.n_sqrt, i % self.n_sqrt
                    x2, y2 = j // self.n_sqrt, j % self.n_sqrt

                    inhib = -self.start_inhib * np.sqrt(euclidean([x1, y1], [x2, y2]))
                    w[i, j] = min(self.max_inhib, inhib)

        recurrent_output_conn = Connection(
            source=self.layers['Y'], target=self.layers['Y'], w=w, wmin=-self.max_inhib, wmax=0
        )
        self.add_connection(recurrent_output_conn, source='Y', target='Y')

The defaults are start_inhib=1.0 and max_inhib=100, and we expect the weights to be negative. A '-' sign is applied to inhib but not to self.max_inhib, so inhib is always negative while self.max_inhib stays positive; as a result, min(self.max_inhib, inhib) always equals inhib and the cap never takes effect.

To get the intended cap, you might want
w[i, j] = max(-self.max_inhib, inhib)

or maybe change both lines to

inhib = self.start_inhib * np.sqrt(euclidean([x1, y1], [x2, y2]))
w[i, j] = -min(self.max_inhib, inhib)

I don't know whether my understanding is correct; could you have a look at it?
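To make the second proposal concrete, here is a sketch of the initialization loop with that fix applied (it mirrors the model code quoted above; I haven't run it against the repository):

import numpy as np
from scipy.spatial.distance import euclidean

for i in range(self.n_neurons):
    for j in range(self.n_neurons):
        if i != j:
            x1, y1 = i // self.n_sqrt, i % self.n_sqrt
            x2, y2 = j // self.n_sqrt, j % self.n_sqrt
            # Inhibition grows with grid distance, is capped at max_inhib,
            # and is negated so every recurrent weight stays inhibitory.
            inhib = self.start_inhib * np.sqrt(euclidean([x1, y1], [x2, y2]))
            w[i, j] = -min(self.max_inhib, inhib)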
