neuromorphs / nir
Neuromorphic Intermediate Representation reference implementation
Home Page: https://neuroir.org/docs
License: BSD 3-Clause "New" or "Revised" License
Average pooling is commonly used in deep spiking networks and should be added.
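A minimal sketch of what such a primitive could look like, mirroring the dataclass style of the existing nodes (the field names here are assumptions, not a final API):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class AvgPool2d:
    """Hypothetical average-pooling primitive, analogous to a SumPool2d node."""
    kernel_size: np.ndarray  # (height, width) of the pooling window
    stride: np.ndarray       # step size in each spatial dimension
    padding: np.ndarray      # zero-padding applied to each border

pool = AvgPool2d(
    kernel_size=np.array([2, 2]),
    stride=np.array([2, 2]),
    padding=np.array([0, 0]),
)
```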
The present formalism insists on full tensor representations to disambiguate shapes. We should consider allowing a shorthand notation for defining scalars, with the shape then given explicitly. Something like
nir.Linear(weights=1, bias=0, shape=(10, 10))
vs. the complete present formulation
nir.Linear(weights=np.ones((10, 10)), bias=np.zeros(10))
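One way the shorthand could be expanded internally (expand_param is a hypothetical helper, not part of NIR):

```python
import numpy as np

def expand_param(value, shape):
    """Expand a scalar parameter to a full array of the given shape;
    pass arrays through after validating their shape."""
    arr = np.asarray(value)
    if arr.ndim == 0:  # scalar shorthand such as weights=1 or bias=0
        return np.full(shape, arr.item(), dtype=float)
    if arr.shape != tuple(shape):
        raise ValueError(f"expected shape {tuple(shape)}, got {arr.shape}")
    return arr

weights = expand_param(1, (10, 10))  # all-ones 10x10 matrix
bias = expand_param(0, (10,))        # all-zeros bias vector
```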
Hiya,
Thanks a lot for NIR, it's very cool work.
The pyproject.toml specifies the license as Apache (line 22 in 416e8c2), but the repository is licensed under BSD 3-Clause.
Many thanks,
Tobi
PS: Background - I'd like to package NIR on conda-forge.
The name of the method NIRGraph.from_list
suggests that a list is expected as input:
graph = NIRGraph.from_list([node_a, node_b, node_c])
In reality however it expects the nodes to be passed as individual arguments:
graph = NIRGraph.from_list(node_a, node_b, node_c)
I suggest either renaming the method to something like "compose" or "sequential", or changing it so that it accepts a list.
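A small sketch of how the method could accept both calling styles during a transition period (the graph construction itself is stubbed out here; the real method builds a NIRGraph with names and edges):

```python
def from_list(*nodes):
    """Accept either from_list([a, b, c]) or from_list(a, b, c)."""
    if len(nodes) == 1 and isinstance(nodes[0], (list, tuple)):
        nodes = tuple(nodes[0])  # unwrap a single list/tuple argument
    return list(nodes)  # placeholder for the actual graph construction

# Both calling styles now yield the same result:
assert from_list(1, 2, 3) == from_list([1, 2, 3])
```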
To distinguish differences in the NIR primitives across versions, we should encode the version number in the NIR graph as well as in the NIR file format.
The current PyPI release is ~15 MB, which is far too large for a source code package. We should remove unnecessary data that is not needed for the release, such as the paper code/data.
Currently, when a list of nodes is returned, the expectation is that the returned graph is sequential. This might not always be true. It would be more complete to return another graph that can then be embedded into the outer graph.
Hello,
I am currently using the nir_to_lava.py script to deploy and test my snnTorch network on Loihi hardware using NIR graphs. So far, I have run the test with Loihi2SimCfg as in the Lava example (https://neuroir.org/docs/examples/lava/nir-conversion.html#nir-to-lava-dl), and it worked fine. However, since I am trying to deploy the model on the actual hardware, I need to use the 'fixed_pt' configuration. I noticed that some parameters have been shifted during the conversion, such as the LIF's threshold and the current/voltage decays. Could you shed some light on these choices? How can I verify that the converted model parameters are coherent?
Thank you for your attention.
Best regards,
RIHANE Ossama
Computer Vision Intern at PROPHESEE
Currently, pull requests such as #30 are failing, because of Lint problems with existing code in main:
Error: example/nengo/nir-lorentz.py:182:89: E501 Line too long (94 > 88 characters)
Error: example/nengo/nir-test.py:67:89: E501 Line too long (94 > 88 characters)
Error: example/sinabs/import_to_sinabs.py:2:20: F401 torch.nn imported but unused
Error: example/sinabs/import_to_sinabs.py:3:25: F401 sinabs.layers imported but unused
I suggest locking the main branch from direct commits to enforce the use of pull requests along with the checks.
We could create more meaningful names instead of using the index as the name, similar to how TensorFlow handles this: it uses the type of the node to create the name. For example, the first instance of Dense will be called "dense", the second "dense_1", etc.
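A sketch of that naming scheme (the behavior is modeled on Keras; the helper and class names are illustrative):

```python
from collections import defaultdict

def make_namer():
    """Return a function that generates TensorFlow-style unique names:
    the first Dense becomes 'dense', the second 'dense_1', and so on."""
    counts = defaultdict(int)

    def name_for(node):
        base = type(node).__name__.lower()
        n = counts[base]
        counts[base] += 1
        return base if n == 0 else f"{base}_{n}"

    return name_for

class Dense: pass
class Conv2d: pass

namer = make_namer()
names = [namer(Dense()), namer(Dense()), namer(Conv2d())]
# names == ["dense", "dense_1", "conv2d"]
```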
Currently, nir.__version__ does not seem to work. We should support that.
Two linearities in sequence can, in theory, be collapsed into a single matrix multiplication (if we get rid of the bias term). However, this is only desirable when the resulting matrix does not get too big!
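For the record, the collapse itself is a plain matrix product, and even the bias terms can be folded in (numpy sketch with made-up shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(20, 10)), rng.normal(size=20)
W2, b2 = rng.normal(size=(5, 20)), rng.normal(size=5)

x = rng.normal(size=10)
sequential = W2 @ (W1 @ x + b1) + b2    # two linear nodes applied in sequence
fused = (W2 @ W1) @ x + (W2 @ b1 + b2)  # one collapsed linear node

assert np.allclose(sequential, fused)
```

Whether the fused matrix (here 5x10) is smaller or larger than the two factors depends entirely on the layer widths, which is exactly the size concern above.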
For parsing NIR graphs (especially when allowing multiple inputs and outputs), it might be useful to add .inputs and .outputs properties to the NIRGraph, as follows:
@dataclass
class NIRGraph(NIRNode):
    """Neural Intermediate Representation (NIR) graph containing a number
    of nodes and edges.

    A graph of computational nodes and identity edges."""

    nodes: typing.Dict[str, NIRNode]  # Dictionary of computational nodes
    edges: Edges

    @property
    def inputs(self):
        return {name: node for name, node in self.nodes.items() if isinstance(node, Input)}

    @property
    def outputs(self):
        return {name: node for name, node in self.nodes.items() if isinstance(node, Output)}
If we have this example graph definition:
inner_graph = nir.NIRGraph(
    nodes={
        "in1": nir.Input(np.array([4, 5, 2])),
        "in2": nir.Input(np.array([4, 5, 2])),
        "flat": nir.Flatten(0),
        "out1": nir.Output(np.array([20, 2])),
        "out2": nir.Output(np.array([20, 2])),
    },
    edges=[
        ("in1", "flat"),
        ("in2", "flat"),
        ("flat", "out1"),
        ("flat", "out2"),
    ],
)
outer_graph = nir.NIRGraph(
    nodes={
        "in": nir.Input(np.array([4, 5, 2])),
        "inner": inner_graph,
        "out": nir.Output(np.array([20, 2])),
    },
    edges=[
        ("in", "inner.in1"),
        ("in", "inner.in2"),
        ("inner.out1", "out"),
        ("inner.out2", "out"),
    ],
)
This allows retrieval of inputs or outputs, for example for the graph definition above:
inp1 = outer_graph.nodes['inner'].inputs['in1']
Or to iterate over all inputs/outputs:
for name, outp_node in outer_graph.nodes['inner'].outputs.items():
    pass
This issue tracks example code for each of the frameworks.
The comparison of two nodes fails:
a = nir.Input(shape=np.array([1]))
b = nir.Input(shape=np.array([1]))
a == b # "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()"
The reason is that dataclasses generate a default __eq__() method which compares all attributes of the class, but does not understand how to properly compare numpy arrays. For the Input node, the resulting __eq__() does something like this:
def __eq__(self, other):
    return self.shape == other.shape
Since the .shape is typically an array, it would rather need to do something like:
def __eq__(self, other):
    return (self.shape == other.shape).all()
However, I am still not sure whether we want that behavior, because it means that:
a = nir.Input(shape=np.array([1]))
b = nir.Input(shape=np.array([1]))
a == b # True
This could be avoided if each node had its name also as an attribute.
The complete solution could be:
@dataclass(eq=False)
class NIRNode:
    """Base superclass of Neural Intermediate Representation Unit (NIR).

    All NIR primitives inherit from this class, but NIRNode should never be
    instantiated.
    """

    name: str

    def __eq__(self, other):
        if type(self) is not type(other):
            return False
        for name, field in self.__dict__.items():
            if name not in other.__dict__:
                return False
            other_field = other.__dict__[name]
            if type(field) is not type(other_field):
                return False
            if isinstance(field, np.ndarray):
                if field.shape != other_field.shape or field.dtype != other_field.dtype:
                    return False
                if not (field == other_field).all():
                    return False
            else:
                if field != other_field:
                    return False
        return True
Since this is a breaking change (all nodes now have a required name attribute) and requires changing all tests, I will only create the pull request if everyone agrees on this type of solution.
As we have some nodes that cannot define their own input_type (Conv, Flatten, SumPool), nir.from_list should have the option of taking an input_type (or input_shape) to allow the creation of fully defined graphs.
from_list would then create the nir.Input with this given shape and could infer the full shapes if desired.
The notebooks in the docs seem to error out because dependencies haven't been installed, e.g. in the Lava notebook, where nir doesn't exist: https://neuroir.org/docs/examples/lava/nir-conversion.html
Solution: go through the github workflow and ensure that the jupyter book environment has the necessary dependencies installed.
Conv1d and Conv2d currently have the input_shape argument. This is, however, not necessary: the shapes can be fully determined from the output_shape of the previous nodes.
Therefore, I suggest removing it completely.
Currently there is no layer to support/represent pooling.
SumPooling would be the most sensible pooling to include, as average or max pooling on spikes is relatively difficult, if not impossible.
The Flatten layer should not have a separate input_type field; it is not necessary. The input shape can be determined from the previous nodes, and every layer has an input_type by inheritance anyway.
The pre-commit hooks clash, so you are stuck in a limbo. This is a known issue, and there is a known fix: passing the --black flag to docformatter.
The version of docformatter needs to be bumped from the current version for this:
  rev: v1.7.5
  hooks:
    - id: docformatter
      args: [--in-place, --black, --wrap-summaries=88, --wrap-descriptions=88]
When trying to generate a NIR graph in paper/02_cnn/mint_sinabs_to_nir.py, the script fails on trying to save the graph.
The change that needed to be accommodated on the sinabs end was to add the input_shape parameter. I added the parameter and set it to None.
sinabs version: latest commit on the dev/nir branch.
sinabs-dynapcnn: from pip.
(NOTE: These two packages need to be installed fully (pip install . and not pip install -e .) and do not work with a dev install.)
NIRTorch: latest commit on the main branch.
nir: latest commit on the paper branch.
I also tried just running the infer_types method and ran into the same error message, so I believe this is the root of the problem.
So far, NIR has followed the disambiguation principle, which aims to ensure that the IR is specified fully enough that any NIR graph contains sufficient information to be evaluated.
However, there are some operations, like convolutions and sum pooling, that effectively operate as a function in the spatial dimensions. That means they are free to work on spatial input of any size; they don't need to specify the shape exactly.
My current preference is to stick to the disambiguation principle and disallow the "free spatial dimension", for two reasons. One concerns parameterized nodes such as the LI layer: how will we know how many neuron parameters to provide the layer? Should we care? I would suggest we mull over the situation in this thread to elicit any issues and take a careful, deliberate choice. Looking forward to hearing other opinions!
Every outgoing edge must have the same dimensionality as the corresponding incoming edge. We can ensure that by checking the edges when the NIR object is constructed, perhaps in __post_init__?
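A sketch of such a check (simplified: nodes here expose plain input_shape/output_shape arrays rather than NIR's type dictionaries):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Node:
    input_shape: np.ndarray
    output_shape: np.ndarray

@dataclass
class Graph:
    nodes: dict
    edges: list

    def __post_init__(self):
        # Verify that each edge connects a matching output/input pair.
        for src, dst in self.edges:
            out_shape = self.nodes[src].output_shape
            in_shape = self.nodes[dst].input_shape
            if not np.array_equal(out_shape, in_shape):
                raise ValueError(
                    f"edge {src} -> {dst}: output shape {out_shape} "
                    f"does not match input shape {in_shape}"
                )

g = Graph(
    nodes={
        "a": Node(np.array([10]), np.array([20])),
        "b": Node(np.array([20]), np.array([5])),
    },
    edges=[("a", "b")],  # shapes match, so construction succeeds
)
```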
This line assumes that the product of the operation/indexing is going to be a tensor/array (line 65 in df2f1fc). However, somehow I end up with lists here, and the elements do not have the item method available. You could check whether the shape object is an ndarray or tensor, or whether it has the item attribute, before calling it, and otherwise just grab the element?
In the README, the section "Frameworks that currently support NIR" and, in the docs, "Supported simulators and hardware" show differences in write/read framework support.
My guess would be that the README version is the more up-to-date one?
The current graph representation assumes a single input/output for a node.
It doesn't have any mechanism to work with or address specific inputs/outputs of a given module. This means the input/output nodes of a graph/subgraph have an unclear role/usage. It also renders subgraphs of very limited use.
There are two possible solutions to this:
1. NodeA.O[2] -> NodeB.I[0]
2. NodeB(NodeA.O[2], ...) -> (NodeB.O[0], NodeB.O[1])
Are there any proposals on how to deal with this?
This issue tracks the unification of the version number and the automation of NIR releases.
From @sheiksadique
I am not certain how you typically deal with release management, so I can't comment on its complexity.
But there are essentially two places where the version is still maintained:
1. the __init__.py file
2. the git tag/version.
It is therefore possible, in principle, for these two to get out of sync.
There are essentially three advantages to automating the release process:
1. There will be a unique location for the version information.
2. The releases themselves can be handled automatically, i.e. you do not need to create a separate release; the simple act of tagging should trigger a major/minor release.
3. Any versions that are not explicitly tagged but pushed to the repo can still be published as pre-releases, giving users immediate access without having to look things up on GitHub.
We should provide a means to extract the shape for each primitive. That includes specifying the dimensionality for the Output primitive.
One suggestion for the implementation could be a shape function or similar that is calculated when constructing the Python object.
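For example, an affine node could derive its I/O types from the weight matrix at construction time (a sketch following the dataclass style used elsewhere in this document; the exact field names are assumptions):

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Affine:
    weight: np.ndarray
    bias: np.ndarray
    input_type: dict = field(init=False)
    output_type: dict = field(init=False)

    def __post_init__(self):
        # Derive the I/O types from the weight shape: y = W @ x + b,
        # so W has shape (output_dim, input_dim).
        self.input_type = {"input": np.array(self.weight.shape[1:])}
        self.output_type = {"output": np.array(self.weight.shape[:1])}

node = Affine(weight=np.ones((5, 10)), bias=np.zeros(5))
```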
In our discussions, the idea came up to include metadata into NIR nodes that would help developers implement NIR nodes on their respective platform.
As an example, @bvogginger pointed out in #63 that we should not include the reset mechanism in the NIR node definition itself, since the reset mechanism (subtractive, reset to zero, etc.) is specific to digital implementations and has no place in the ODE formulation of the node. However, to make sure that NIR graphs commute across platforms, developers need to know what reset mechanism was originally implemented for a given NIR graph. Hence, the idea is to include a "metadata" field in [every | some] NIR node that will give such relevant information.
Suggestions for metadata:
In the nodes, all parameters are defined as numpy arrays; however, we currently do not use numpy arrays as parameters in the tests.
I propose we update the tests to also use numpy arrays.
Currently, the neuron models require complete specification of the primitives. For usability, we could consider setting default values for some variables.
For the LIF spec we could, for instance, set v_threshold=1, v_leak=0, and r=1.
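With dataclasses, such defaults are straightforward to express (a sketch, not NIR's actual LIF definition; default_factory is needed because the parameters are arrays, and the tau field name is an assumption):

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class LIF:
    tau: np.ndarray
    # Defaults as proposed: threshold 1, no leak offset, unit resistance.
    v_threshold: np.ndarray = field(default_factory=lambda: np.array([1.0]))
    v_leak: np.ndarray = field(default_factory=lambda: np.array([0.0]))
    r: np.ndarray = field(default_factory=lambda: np.array([1.0]))

neuron = LIF(tau=np.array([0.01]))  # only tau is required now
```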
After the refactoring in #78, the code structure of NIR changed.
Many libraries were using imports such as
from nir import NIRNode
However, this now fails because NIRNode is no longer visible. The solution should be to add it to the list of imports in the __init__.py, which would look something like this:
from nir.node import NIRNode
...
__all__ = ["NIRNode", ...]
This would allow the simpler imports again.
Currently, the input_type and output_type are set in __post_init__ based on the weight shape, which I find misleading:
self.input_type = {"input": np.array(self.weight.shape)[1:]}
self.output_type = {"output": np.array(self.weight.shape)[[0, 2, 3]]}
The input and output type should not reflect the size of the convolutional filters, but the actual size of the input data.
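Indeed, the actual spatial sizes depend on the input, not on the kernel alone; the standard formula (assuming dilation 1) is floor((in + 2*pad - kernel) / stride) + 1 per dimension:

```python
import numpy as np

def conv2d_output_shape(input_hw, kernel_hw, stride, padding):
    """Spatial output size of a 2D convolution (dilation 1 assumed):
    out = floor((in + 2*pad - kernel) / stride) + 1, per dimension."""
    input_hw = np.asarray(input_hw)
    kernel_hw = np.asarray(kernel_hw)
    return (input_hw + 2 * np.asarray(padding) - kernel_hw) // np.asarray(stride) + 1

# A 3x3 kernel with stride 1 and padding 1 preserves the spatial size:
print(conv2d_output_shape((32, 32), (3, 3), 1, 1))  # [32 32]
```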
At the moment, every primitive that is added requires three different files to be modified. In particular, the read and write methods are simply exceptionally long methods with if-else conditions.
The read/write functionality should be moved into the IR definitions so that things are better structured and modularized. This will also make things more scalable.
(We can take inspiration from the memento design pattern.)
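A sketch of moving serialization into the node definitions, so reading and writing dispatch on the node type instead of growing a central if-else chain (the method names here are illustrative, not NIR's actual API):

```python
import numpy as np
from dataclasses import dataclass, fields

@dataclass
class NIRNode:
    def to_dict(self):
        """Serialize this node to a plain dict; every subclass inherits this."""
        d = {f.name: getattr(self, f.name) for f in fields(self)}
        d["type"] = type(self).__name__
        return d

    @classmethod
    def from_dict(cls, d):
        d = dict(d)
        node_type = d.pop("type")
        # Dispatch on the registered subclass instead of an if-else chain.
        subclass = {c.__name__: c for c in cls.__subclasses__()}[node_type]
        return subclass(**d)

@dataclass
class Linear(NIRNode):
    weight: np.ndarray

node = Linear(weight=np.ones((2, 3)))
restored = NIRNode.from_dict(node.to_dict())
assert np.array_equal(restored.weight, node.weight)
```

Adding a new primitive then only touches one file: the node definition itself.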