spikeycns / spikey

Malleable spiking neural network framework and training platform.

License: MIT License

Language: Python 100%

Topics: neural-network, spiking-neural-networks, spikey, reinforcement-learning, stdp, rlstdp, florian, izhikevich, rmstdp, hebbian-learning

spikey's Introduction

Spikey

Spikey is a malleable, ndarray-based spiking neural network framework that uses Ray as a training platform. It contains many pre-made components, experiments and meta-analysis tools (e.g. a genetic algorithm). It is regularly tested with Python 3.7-3.9 on Linux and Windows. Please post bugs or suggestions in the issues tab :)


Spiking Neural Networks

What is a Spiking Neural Network?

Spiking neural networks are biologically plausible neural models able to understand and respond to their environments intelligently. They are clusters of spiking neurons interconnected with directed synapses. Unlike other neural models, SNNs are naturally capable of reasoning about temporal information, making them apt for tasks like reinforcement learning and language comprehension.

Spiking neurons are simple machines that carry an internal charge, which decays slowly over time and increases sharply when current flows in through a synapse. When its internal potential surpasses some firing threshold, a neuron will spike, releasing the energy it had stored, then remain quiescent for the duration of its refractory period. This simple behavior allows groups of spiking neurons to capture and reason about both the spatial and temporal dynamics of their environment.
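In code, these dynamics might look like the following minimal leaky integrate-and-fire sketch; the class, attribute and parameter names are illustrative, not Spikey's actual neuron API.

```python
import numpy as np

class LIFNeurons:
    """Minimal leaky integrate-and-fire sketch -- not Spikey's neuron part."""

    def __init__(self, n_neurons, decay=0.9, threshold=1.0, refractory_period=3):
        self.decay, self.threshold = decay, threshold
        self.refractory_period = refractory_period
        self.potentials = np.zeros(n_neurons)  # internal charge per neuron
        self.refractory = np.zeros(n_neurons)  # steps until a neuron may fire again

    def tick(self, incoming_current):
        # Charge decays every step and jumps when current flows in;
        # neurons in their refractory period receive no input.
        self.potentials = self.potentials * self.decay + np.where(
            self.refractory > 0, 0.0, incoming_current
        )
        spikes = self.potentials >= self.threshold
        self.potentials[spikes] = 0.0  # spiking releases the stored energy
        self.refractory = np.where(spikes, self.refractory_period, self.refractory - 1)
        return spikes

neurons = LIFNeurons(n_neurons=100)
spikes = neurons.tick(np.random.rand(100))  # boolean fire vector for this step
```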

Information enters and flows through the network encoded as spike patterns, in terms of firing rates, firing orders and population codes. A dedicated subset of a network's neurons serves solely to translate sensory information about the outside world into spike trains that the rest of the group is able to reason with; these are the network inputs. Another separate subset of neurons is designated as outputs; these behave normally, but their spikes are interpreted by a readout function that dictates the consensus of the group. As a whole, a network consists of sensory inputs, body neurons for processing and actor neurons, which together create an agent that works to exploit its environment.
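As a rough illustration of rate coding and a population readout, consider the sketch below; these helpers are assumptions for demonstration, not Spikey's actual input or readout modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(value, n_inputs, n_steps):
    """Each input neuron fires with probability proportional to `value`."""
    return rng.random((n_steps, n_inputs)) < np.clip(value, 0.0, 1.0)

def population_readout(output_spikes):
    """Pick the action whose half of the output population fired most."""
    left, right = np.split(output_spikes, 2, axis=1)
    return int(right.sum() > left.sum())

spike_trains = rate_encode(0.7, n_inputs=10, n_steps=50)  # network inputs
action = population_readout(spike_trains)  # stand-in for real output spikes
```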

Learning in artificial neural networks is largely facilitated by an algorithm that tunes synapse weights. The weight of a synapse between two neurons modulates how much current flows from the pre- to the post-synaptic neuron, i.e. neuron_b.potential += neuron_a.fire_magnitude * synapse.weight. The learning algorithm used to tune the network must be able to handle the temporal relationship between pre- and post-neuron fires, thus variants of the Hebbian rule are commonly used. The Hebbian rule acts on a per-synapse basis, only considering the firing times of the specific synapse's single pre- and post-synaptic neurons. If the input neuron tends to fire before the output neuron, the algorithm will increase the synaptic weight between the two; if the opposite pattern holds, the weight will decrease. In aggregate, the network learns to detect patterns in the stimulus it is trained on at all scales.
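A vectorized pair-based variant of this rule (spike-timing-dependent plasticity) might be sketched as below; the array shapes and parameter names are assumptions for illustration, not Spikey's synapse implementation.

```python
import numpy as np

def stdp_update(weights, spike_log, learning_rate=0.01, window=4):
    """Pair-based STDP sketch.

    weights: ndarray[n_neurons, n_neurons], spike_log: ndarray[n_steps, n_neurons].
    """
    n_steps, _ = spike_log.shape
    for dt in range(1, window + 1):
        pre = spike_log[: n_steps - dt].astype(float)  # fires dt steps earlier
        post = spike_log[dt:].astype(float)            # fires dt steps later
        # Pre before post potentiates w[i, j]; post before pre depresses it.
        weights += learning_rate * (pre.T @ post - post.T @ pre) / dt
    return np.clip(weights, 0.0, 1.0)

weights = np.random.rand(50, 50)
spike_log = np.random.rand(100, 50) < 0.2  # boolean fire history
weights = stdp_update(weights, spike_log)
```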

A complex learning process emerges from these many simple interactions, with spatial and temporal pattern detection occurring at multiple scales. Spiking neural networks can naturally comprehend events playing out over time, making them ideal candidates for markov decision processes (reinforcement learning) and sequence-based learning (language comprehension) alike. Much of the groundwork for reinforcement learning tasks with SNNs has already been published; see Florian (2007) below, and find it alongside other RL paper replications in examples/.

Further Reading

Package Overview

----------  -----------  ---------  -----
| Neuron |  | Synapse |  | Input |  | ...
----------  -----------  ---------  -----
       \         |         /
         \       |       /
--------   -------------
| Game |   |  Network  |
--------   -------------
   |            /
   |           /
-----------------
| Training Loop |
-----------------
        |
----------------------
| Aggregate Analysis |
----------------------
    ^       |
    L_______|

Spikey is a spiking neural network framework and training platform. It provides the components necessary to assemble and configure spiking neural networks, as well as the tools to train them. There are enough pre-built, parameterized modules to execute many useful experiments out of the box, though it will likely be necessary to write some code in order to pursue a novel goal.

It is important that this platform remains malleable in order to support users pushing a broad frontier of research, and fast and scalable enough to allow for large networks and meta-analysis. Spikey is written purely in Python with maximal use of NumPy under the paradigm of array programming. The malleability of this platform primarily comes from consistently used, flexible design patterns and well defined input/output schemes. Its speed is largely achieved with NumPy, along with benchmarking and modular code that make the process of optimization straightforward.
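To illustrate why array programming matters here, compare a per-synapse Python loop with its single vectorized NumPy equivalent (an illustrative comparison, not code from Spikey):

```python
import numpy as np

weights = np.random.rand(1000, 1000)  # synapse weight matrix [pre, post]
spikes = np.random.rand(1000) < 0.1   # boolean fire vector for this step

# Loop version: one Python-level iteration per firing pre-neuron.
slow = np.zeros(1000)
for i in np.flatnonzero(spikes):
    slow += weights[i]

# Array-programming version: a single vectorized matrix product.
fast = spikes.astype(float) @ weights

assert np.allclose(slow, fast)
```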

Below is a high-level overview of the pieces of the framework and the tools provided by the training platform. See a usage example here.

Network

The Network object is the core of the spiking neural network framework. This module serves as an interface between the environment and the components of the network. It is configured with a list of parts (a type of synapse, neuron, ...) and a parameter dictionary shared among the given parts.

Network parts define how the network will respond to the environment and learn based on its feedback. These are the inputs, neurons, synapses, rewarders, readouts and weight matrices. Each part facilitates the work of the whole group of its kind, i.e. the network only interacts with one neuron part, which serves as an interface for any number of neurons. This is where array programming comes into play: a large amount of work can be done quickly, with a small amount of code, using NumPy, which also scales far better than pure Python.
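The parts-plus-shared-parameters pattern might look roughly like this; the class and key names below are stand-ins showing the shape of the configuration, not Spikey's real modules (see the linked usage example and implementation for the actual interface).

```python
# Hypothetical sketch of configuring a network from parts plus one shared
# parameter dictionary -- names are stand-ins, not Spikey's real API.
class Neuron:
    def __init__(self, **params):
        self.threshold = params["firing_threshold"]

class Synapse:
    def __init__(self, **params):
        self.learning_rate = params["learning_rate"]

class Network:
    def __init__(self, parts, params):
        # Every part class is constructed with the same shared dictionary.
        self.parts = {name: cls(**params) for name, cls in parts.items()}

network = Network(
    parts={"neurons": Neuron, "synapses": Synapse},
    params={"firing_threshold": 1.0, "learning_rate": 0.01},
)
```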

Find a usage example here. In order to override the functionality of the network, see extending functionality. Network implementation here.

Game

A game is the structure of an environment that defines how agents can interact with said environment. In this simulator, games serve as an effective, modular way to give input to the network and interpret its feedback.
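A rough sketch of this pattern, using a generic gym-style interface (the method names mirror common RL environments and are not necessarily Spikey's exact Game API):

```python
class TinyGame:
    """Toy environment sketch -- not one of Spikey's real games."""

    def reset(self):
        self.state = 0.0
        return self.state

    def step(self, action):
        # Agents interact by choosing actions; the game returns feedback.
        self.state += 1.0 if action else -1.0
        reward = float(abs(self.state) < 5)
        done = abs(self.state) >= 10
        return self.state, reward, done, {}

game = TinyGame()
state = game.reset()
state, reward, done, info = game.step(action=1)
```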

Multiple games have already been made, located in spikey/games. Find a usage example here. In order to create new games, see extending functionality. Game implementations here.

Training Loop and Logging

Spikey uses Ray Train (PyTorch version) for simple, distributed training; see our tutorial.

We use Ray's logging tools; see example usage in our tutorial.
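A minimal Ray Train skeleton, assuming a recent Ray 2.x API; the loop body is a placeholder, not Spikey's actual training code.

```python
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    for epoch in range(config["epochs"]):
        ...  # build the network, run episodes, report metrics

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"epochs": 10},
    scaling_config=ScalingConfig(num_workers=2),
)
result = trainer.fit()  # runs the loop on each distributed worker
```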

Hyperparameter Tuning

It is possible to execute hyperparameter tuning with Spikey using Ray Tune. See the Ray Tune docs here, and our hyperparameter tuning example here.
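A minimal sketch with the Ray 2.x Tuner API; the objective body is a placeholder standing in for a full Spikey training run.

```python
from ray import tune

def objective(config):
    # Stand-in for running a Spikey training loop and measuring fitness.
    score = -(config["learning_rate"] - 0.01) ** 2
    return {"score": score}

tuner = tune.Tuner(
    objective,
    param_space={"learning_rate": tune.loguniform(1e-4, 1e-1)},
)
results = tuner.fit()
best = results.get_best_result(metric="score", mode="max")
```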

Installation

This repository is not yet on PyPI, so it must be cloned and installed locally. It only needs to be installed when the repo is newly cloned or moved, not after code edits!

git clone https://github.com/SpikeyCNS/spikey
cd spikey
pip install -e .

Run Tests

python -m unittest discover unit_tests/

Getting Started

Many more examples, including everything from simple network experiments to hyperparameter tuning, can be found in examples/.

Contributing

See our contributing guide.

spikey's People

Contributors

coledie

spikey's Issues

Consistent naming in docs and examples.

Currently there is no standard variable / class naming scheme across the examples and usage docstrings. This will cause confusion for people trying to copy and understand pieces of code.

**Only include essential fixes.**

  • Callback and TrainingLoop both often use the name experiment; make only TrainingLoop use it, and use it everywhere. Callback should always be named callback.
  • Rename network._template_parts to just parts or something dang

Push reader functionality to callback.

Currently callback and reader handle much of the same functionality but must be used very differently. Ideally it should not matter whether you have access to a callback or a Reader; any viz or analysis should work exactly the same.

  • New log file format; opt for something more compressed (e.g. obj) with the ability to selectively read in sections, e.g. just results, and only pull specific pieces from info when needed. Log files need to be easily and safely shareable.
  • Make a way to process results from multiple callbacks the same way reader does log files.
  • Make reader template ExperimentCallback so both have exactly the same interface.
  • Allow callback to store info on file so all info dicts don't need to be in memory.
  • For a future log file that is just a pickled callback, this could easily be done via the pickle customizations with __getstate__ and __setstate__ (see the sketch below).
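A minimal sketch of that idea, assuming illustrative attribute names on the callback:

```python
import pickle

class ExperimentCallback:
    """Sketch of a picklable callback; attribute names are illustrative."""

    def __init__(self):
        self.results = {}
        self.network = None  # live object we don't want inside a log file

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("network", None)  # strip unpicklable / heavy members
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.network = None

callback = ExperimentCallback()
log_bytes = pickle.dumps(callback)  # the "log file" is just the callback
restored = pickle.loads(log_bytes)
```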

Refactor population, series and metarl games.

As they are now, the population, series and metarl games could use refactoring for ease of use and generalization.

  • Add seed, close and other gym env functions to MetaRL
  • Make easier to configure series
  • Simplify populations usage of meta games.
  • Trim many parameters in Population
  • Remove all EvolveNetwork params except those specifically for the game, take rest in as pre-configured TrainingLoop or similar.
  • Overhaul EvolveNetwork tracking_getter, aggregate_fitness, win_fitness system.

Florian2007 rate XOR experiments far too slow.

Currently, the rate XOR experiments in florian2007.ipynb are far too slow at about 240s / run. Throughput needs to roughly double.

  • Get time to <= 120s.

Improve speed of STDP implementation.

  • Remove decay multiplier and put all into the where
  • Move clip to weights and speed up // weights get clipped too many times
  • Force spike_log and inhibitory types to be respected
  • Reorganize STDP math for the fastest matrix-matrix multiplication
  • Switch from idx lists to masks

Parameter Normalization

  • Normalize spike_log so that spike_log[-1] always is ??? and is of length ??? in network, synapse, ...
  • Normalize inhibitory location parameter usages(bool with 1=? and 0=?) in neurons, synapses, weights, ...
  • Streamline the bool and float copy of spike log in network.tick - ensure allows variable spike magnitudes for readouts and rewards

NECESSARY_KEYS dictionary lacks ability for constraints, default values and warnings.

Currently NECESSARY_KEYS (and the like) are implemented as dictionaries {key: description}, which lack much of the data needed. A new data structure should be created to suit this need (a rough sketch follows the list below). Requires #6.

  • Develop a dictionary variant or value type that is readable and provides helpful functionality.
  • Still support plain dictionaries as necessary keys so it's easy for customs.
  • Allow default key values.
  • Make the NECESSARY_KEYS key setter print all missing values when any are missing.
  • Potentially make it so values entered for eg processing_time is linked between network, neurons, ... in order to allow modifiers to globally affect the value. Otherwise provide method for modifier to impact all.
  • Redo Populations.n_agents and neuron.polarities definition in init to use new necessary_keys functionality.
  • Command line tool to view necessary keys for a Module would be nice.
  • Move the network print all parts to work for all modules (eg TrainingLoop).
  • Redo the print functions to include defaults, constraints, ... for each parameter.
  • Refactor parameter printing to Module base class, at least put all generalizable code there and ensure works for game and trainingloop
  • Network.list_keys should first be renamed to list_keys or print_keys(both?) and it should give dict that can be just copied and filled out.
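One possible shape for that data structure, a hypothetical Key value carrying a description, expected type and optional default:

```python
from dataclasses import dataclass

_MISSING = object()

@dataclass
class Key:
    """Hypothetical NECESSARY_KEYS entry -- a sketch, not Spikey's design."""

    name: str
    description: str
    type: type = object
    default: object = _MISSING

    def resolve(self, params):
        if self.name in params:
            return params[self.name]
        if self.default is not _MISSING:
            return self.default
        raise KeyError(f"Missing '{self.name}': {self.description}")

NECESSARY_KEYS = [
    Key("n_neurons", "Number of neurons in the network.", int),
    Key("firing_threshold", "Potential at which neurons fire.", float, 1.0),
]
values = {key.name: key.resolve({"n_neurons": 100}) for key in NECESSARY_KEYS}
```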

Improve testing checklist.

  • Delete pytest requirements (add gym), delete test.conf, update test running docs, remove all NBVAL usage
  • Base all tests off a custom base class w/ runall functionality - one for modules, one general
  • Figure out how to make individual implementations support custom testing, eg one base test, each template and selectively super() then append
  • Module testing template should test each and every module subclass on each base module fn
  • All different input schemes from metapath dir skip.
  • Ensure all modules pickleable (add to base class and default runall)
  • Network - Test different initializations and override orders
  • Inputs
  • Modifier
  • Neuron
  • Readout
  • Reward
  • Synapse
  • Weight
  • Game(test gym) - Test different initializations and override orders
  • RL
  • MetaRL
  • TrainingLoop - Test different callback passing in methods
  • Callback
  • Logger, MultiLogger
  • Reader, MetaReader - breaks on bool arrays, empty and single item lists
  • Sanitize
  • Serialize
  • Backends
  • Series(Mock backend)
  • Population(Mock backend)
  • GenotypeCache
  • CI pipeline w/ 3.7, 3.8 linux - run on merge.
  • Make tests a PR requirement.

Redo docstrings checklist.

Format

"""
Short description

Parameters
------------
param: type
    Description of variable
...

Returns
---------
type
    Description of return

Usage
-------
```python
...
```
"""

If a value is an ndarray, document it as ndarray[dim1, dim2] (dtype if applicable).

  • Not much of the existing documentation is this specific, but it should be moving forward.

The top-of-file docstring should be a copy of the most relevant object's docstring.

Doc Checklist (All in one PR)

  • Setup.py description outdated.
  • Setup.py version "DEVELOPMENT" overwritten in main, remove but test install after.
  • core/
  • experiments/
  • games/RL/
  • games/MetaRL/
  • logging/
  • viz/
  • meta/
  • snn/input
  • snn/modifier
  • snn/neuron
  • snn/readout
  • snn/reward
  • snn/synapse
  • snn/weight - Note matrix must be ma.
  • snn/network - Make clear there are multiple network options.
  • Link to relevant sections of readme in callback, trainingloop, logger, reader, ...
  • np.ndarray type hints -> specific ndarray dtypes

Type Hint Checklist (All in one PR)

  • core/
  • experiments/
  • games/RL/
  • games/MetaRL/
  • logging/
  • viz/
  • meta/
  • snn/input
  • snn/modifier
  • snn/neuron
  • snn/readout
  • snn/reward
  • snn/synapse
  • snn/weight - Note matrix must be ma.
  • snn/network - Make clear there are multiple network options.

Normalize module implementations across repo.

Nearly every module in the repo uses NECESSARY_KEYS and other similar functionality, and currently each module also has its own implementation of the helper functions. Ideally each module should template some Spikey-defined type that carries this shared functionality. Data types like the spike_log and inhibitory listings should also be normalized across the modules.

  • Add module base to spikey/, all classes template and call super.init.

  • Module contain NECESSARY_KEYS, and autoloading in init.

  • Module support extending NECESSARY_KEYS so one doesn't have to deepcopy and update.

  • Support NECESSARY_KEYS likes across repo.

  • RL, MetaRL use NECESSARY_KEYS.

  • Network and game use self.params / self._.. in same ways.

  • TrainingLoop, MetaRL, RL should use self._...

  • TrainingLoop use NECESSARY_KEYS.

  • Have template for meta backends.

  • Ensure all parts have reset function - input, readouts, modifiers... and use in network.reset.

  • Make callback not require training_params, should be optional.

  • MetaRL needs gym.Env compatibility.

  • Implement gym wrapper for MetaRL.

Implement documentation autogeneration.

Sphinx will be used to automatically generate an html website containing spikey documentation. Website will be in the github pages repo.

  • Check out readthedocs; it could likely be a lot simpler than what we do right now, though would it allow for enough control?
  • Generate documentation on github pages.
  • Move index and module index links to side bar
  • Cleanup module index.
  • Make homepage with all relevant links - to readme sections, ....
  • List necessary_keys in module autodocs.
  • Ensure all module autodocs correctly formatted.
  • All modules link back to source code.

Pre-public release refactoring.

TrainingLoop Refactor

  • Add trainingloop.log()
  • Switch callback to init parameter.
  • TrainingLoop support passing uninitialized callback.
  • Initialize callback in init and just reset it every run.
  • TrainingLoop use NECESSARY_KEYS like network, pass keys to its parts.
  • TrainingLoop use **params instead of params + same for update.
  • Ensure trainingloop doesn't break w/ callback in experiment_params.
  • Network & synapse need train and eval modes - maybe make a whole module trait they can selectively use?
  • Update all uses of trainingloop

Input/Neuron Normalization

  • Firing threshold should be a part of neurons, replace ge and just make alias for something else.
  • Merge neuron and fire since always use fire then update
  • Give input the same structure as neuron, i.e. both .updates do the same thing and call does the same thing

General Refactoring

  • Cleanup spike_raster viz so parameterization is simpler, like the game_states stuff
  • Make backends support args and kwargs

Callback Refactor

  • Support callback binding kwargs so one can do .binding(return=x), then the tracker runs on "return"
  • Callback doesn't work unless a binding is defined
  • Callbacks currently only work well with RLNetwork or ContRLNetwork, not both.
  • Json serialize should raise a warning when skipping a value, and allow suppressing it
  • Callback requires network reset to be called to get the next episode list going; could just add a warning if anything is given before it's called? If no changes, make a direct way to update that without calling network reward, and make network reward call it for pure abstraction
  • Reader should accept a filename given directly to the arg, i.e. one filename instead of a list, for the param
  • Reader should support member variables like reader.network or reader.results

Keys

  • Run through config key names and ensure they clearly denote their purpose (snn, games, meta).
  • Rename theta_dot_noise, ... in CartPole to signify they are starting state ranges.
  • EvolveNetwork tracking_getter -> fitness_getter
  • Rename input 'mapping'
  • Punish mult is very confusing in that it needs to be negative; update the docs / make it not
  • Update docs to reflect / ensure that STDP can be longer than processing time
  • Inputs with mapping should support being given functions or dictionaries
  • Organize the printing of necessary_keys so it's obvious what has defaults and what does not; also make it alphabetical for consistent ordering.

Documentation

  • README getting started should just link to examples, no example code.
  • "malleable, ndarray based" appears 3x
  • Add readmes to all spikey subfolders, e.g. game/, snn/, ...
  • Most file docstrings don't do a good job explaining what's in the file.
  • Population usage still has some EvolveFlorian.
  • Update README's old NECESSARY_KEYS usage; other sections may need a quick update after the rest of the refactoring is done
  • Update version to 0.5

Remove Repeated Code

  • Run through and refactor out large sections of repeated code.
  • PR: Pull STDP implementation out of all synapses to function in template.
  • PR: Merge and generalize input.rate_map & static_map.
  • Make the network tick loop a template with all implementation code put into specific functions; currently each network overrides the full tick.

Importing

  • Support spikey.Key, spikey.Module

Tutorial

  • Cleanup
  • CustomWeight extend keys using Key object.
  • Network / other headers should describe how each fits in relative to what was just learned, e.g. network makes it so you don't have to do that for every network unless you want to.
  • Game section should put trainingloop first to show its purpose
  • Tutorial should show a different reader choice that's not bugged

ContinuousRLNetwork should not be hardcoded for TDError usage.

Currently, ContinuousRLNetwork is hardcoded for TDError usage, which it should not be.

  • Figure way to parameterize ContinuousRLNetwork to generalize it.
  • Make so TDError can get critic_spikes from other methods.
  • Remove TDLoop parts giving critic_spikes (potentially remove it if no longer needed).
  • Replace FlorianNetwork with usage of ContinuousRLNetwork and delete FlorianNetwork.
