
Comments (7)

rafaelblevin821 commented on May 28, 2024

Well, to the best of my understanding, the SingleEncoder works as an intensity filter, generating a single spike for each feature in the top sparsity fraction of input intensities.
All the selected neurons fire a spike at t=0, and there is no differentiation in spike timing based on the input intensities.
Maybe someone from the BindsNET team can clarify? @Hananel-Hazan?
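
If that reading is right, a minimal sketch of the behaviour (my own illustration, assuming the threshold is the (1 - sparsity) quantile of the input; this is not BindsNET code) would be:

import torch

sparsity = 0.5
time = 10
datum = torch.rand(28, 28)  # stand-in for an input image

# Keep only the top `sparsity` fraction of intensities and fire them at t=0.
threshold = torch.quantile(datum.flatten(), 1 - sparsity)
spikes = torch.zeros(time, *datum.shape, dtype=torch.uint8)
spikes[0] = (datum > threshold).byte()

print(torch.nonzero(spikes)[:, 0].unique())  # -> tensor([0]); nothing fires later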


rafaelblevin821 commented on May 28, 2024

Just at a quick glance, the problem might be with 'time_step = 256'.
BindsNET uses 'time' as the simulation time and time_step as 'dt', which is usually set to 1 or a multiple of 10, e.g. dt=1 or dt=10.
Also, this line looks like it inverts the timing: timing[:, 0] = time_step - timing[:, 0] ... if you want to keep the original timing data, you should remove this line.
Try this instead; not sure if it works as you intended...

from bindsnet.datasets import MNIST
from torchvision import transforms
from bindsnet.encoding import SingleEncoder
import os
import torch
import matplotlib.pyplot as plt

time = 500

dataset = MNIST(
    SingleEncoder(time=time, dt=1.0, sparsity=0.5),
    None,
    root=os.path.join("data", "mnist"),
    download=True,
    train=False,
    transform=transforms.Compose(
        [
            transforms.ToTensor(),
        ]
    ),
)

dataloader = iter(
    torch.utils.data.DataLoader(
        dataset,
        batch_size=1,
        shuffle=False,
        pin_memory=False,
    )
)

img = next(dataloader)
encoded = img["encoded_image"].squeeze()  # shape: [time, 28, 28]
timing = torch.nonzero(encoded)           # rows: (t, row, col) of each spike

fig = plt.figure()
gs = fig.add_gridspec(1, 1, hspace=0, wspace=0)
axarr = gs.subplots(sharex="col", sharey="row")
reconstructed = torch.zeros(encoded.shape[1:])

# Earlier spikes map to brighter pixels (a spike at t=0 gives intensity 1).
for ind in timing:
    reconstructed[ind[1], ind[2]] = 1 - (ind[0] / time)

axarr.imshow(
    reconstructed,
    cmap="gray_r",
    interpolation="nearest",
    vmin=0,
    vmax=1,
)
for ax in fig.get_axes():
    ax.label_outer()
plt.show(block=True)


ThomasFirmin commented on May 28, 2024

Thank you for your answer and for improving the code. Sorry for the ambiguity between time and time_step.

The code given was just to illustrate what the documentation says: “Spike occurs earlier if the intensity of the input feature is higher.” Actually, the line timing[:, 0] = time_step - timing[:, 0] and the following ones are not necessary to see the issue. I initially ran into the problem by passing encoded data to a network.

The issue here is that all spikes occur at t=0 regardless of pixel intensity. Concretely, the output of single in encoding.py is a tensor s of shape [time, n_1, ..., n_k], and all spikes are in s[0, :, ..., :]. One can compare the outputs of SingleEncoder and RankOrderEncoder, which are of course different algorithms, but both are temporal codings.

Here, torch.nonzero(encoded) helps to illustrate what I am trying to explain. In my case, using SingleEncoder, a sample of the output of torch.nonzero(encoded) looks like this:

tensor([[ 0,  7,  6],
        [ 0,  7,  7],
        [ 0,  7,  8],
        [ 0,  7,  9],
        ...
        [ 0, 26, 13]])

One can see that there are no spikes for t > 0.

Whereas, using RankOrderEncoder:

tensor([[  1,   8,   7],
        [  1,   8,   8],
        ...
        [  2,   7,   7],
        ...
        [  3,   7,   8],
        ...
        [166,  20,  12]])
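
A self-contained way to reproduce this comparison (a sketch using the functional forms single and rank_order from bindsnet.encoding; the random tensor is a stand-in for an MNIST image, so only the distribution of spike times matters):

from bindsnet.encoding import single, rank_order
import torch

time = 500
datum = torch.rand(28, 28) * 255  # stand-in for an MNIST image

s_single = single(datum=datum, time=time, dt=1.0, sparsity=0.5, device="cpu")
s_rank = rank_order(datum=datum, time=time, dt=1.0)

print(torch.nonzero(s_single)[:, 0].unique())  # only t=0
print(torch.nonzero(s_rank)[:, 0].unique())    # spikes spread over many time steps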


rafaelblevin821 commented on May 28, 2024

@ThomasFirmin I understand now. Yes, you are correct: the encoder only generates spikes at t=0.

If I am right, though, that is how it is supposed to work, as it is an encoder based on sparsity. The encoder generates a single spike per neuron based on the sparsity threshold: neurons with input intensities above the quantile threshold fire a spike at t=0. If no input intensities are above the threshold, no spikes are generated at all.

You see all spikes at t=0 because the sparsity value is set to 0.5, meaning only the top 50% of the input feature intensities generate a spike, and they all do so at t=0.

If you want to see spikes at different time steps based on the input intensities, you can use one of the other BindsNET encoders instead, e.g. RankOrderEncoder (see the sketch below).
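
For example, a sketch of the earlier script with RankOrderEncoder swapped in (only the encoder argument changes, so higher-intensity pixels spike earlier instead of everything firing at t=0); the dataloader and plotting code from the earlier snippet can then be reused unchanged:

from bindsnet.datasets import MNIST
from bindsnet.encoding import RankOrderEncoder
from torchvision import transforms
import os

time = 500

dataset = MNIST(
    RankOrderEncoder(time=time, dt=1.0),  # instead of SingleEncoder(...)
    None,
    root=os.path.join("data", "mnist"),
    download=True,
    train=False,
    transform=transforms.Compose([transforms.ToTensor()]),
)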


rafaelblevin821 commented on May 28, 2024

... So it is not an issue with the code; it is working as intended.


ThomasFirmin commented on May 28, 2024

Well, if it's working as intended, then I was confused by the documentation, sorry about that. I thought that sparsity acted as an intensity filter and that the non-silent features were converted into spikes occurring earlier or later depending on their intensities.

I thought it was a sort of TTFS (time-to-first-spike) encoding as described in: D. Auge, J. Hille, E. Mueller, and A. Knoll, “A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks,” Neural Processing Letters, Jul. 2021, doi: 10.1007/s11063-021-10562-2.


Hananel-Hazan commented on May 28, 2024

@rafaelblevin821 is correct. However, you can also use this function as a TTFS encoder if the datum passed to the function contains a time dimension. For example:

from bindsnet.encoding import single
import torch

time = 50

inp = torch.rand((4, 5), device="cpu") * 100
sgl = single(datum=inp, time=time, dt=1.0, sparsity=0.5, device="cpu")
print(inp)           # raw input intensities
print(sgl[0, :, :])  # spikes at t=0: the top 50% of intensities (sparsity=0.5)

output:

tensor([[83.1882, 17.0902, 99.7389, 14.8277, 60.3363],
        [24.5327, 84.2849, 63.5109,  8.1566, 94.9664],
        [ 7.1081, 87.9286,  7.7908, 92.9420, 78.5084],
        [23.9009, 75.7706, 27.6825, 31.8233, 89.7478]])

tensor([[1, 0, 1, 0, 0],
        [0, 1, 1, 0, 1],
        [0, 1, 0, 1, 1],
        [0, 1, 0, 0, 1]], dtype=torch.uint8)
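
One possible reading of the TTFS suggestion (a sketch under my own assumption that the datum itself carries a leading time axis; this interpretation is not confirmed by the BindsNET documentation): since single() thresholds the whole datum at once, sgl[0] keeps the datum's shape, and its nonzero indices along the datum's own time axis can be read as per-feature spike times.

from bindsnet.encoding import single
import torch

time = 50

# Toy datum with an explicit leading time axis: shape [time, n_features].
datum = torch.rand((time, 10), device="cpu") * 100
sgl = single(datum=datum, time=time, dt=1.0, sparsity=0.5, device="cpu")

# sgl[0] has shape [time, 10]; each nonzero entry is a (t, feature) pair.
# Taking the first entry per feature gives a TTFS-style read-out.
first_spike = torch.full((10,), -1)
for t, f in torch.nonzero(sgl[0]).tolist():
    if first_spike[f] < 0:
        first_spike[f] = t
print(first_spike)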

If you have any suggestions on how to improve the function documentation to clear up this confusion, it would be greatly appreciated.

I will close this issue for now, since it's not an issue with the code, but feel free to discuss things if needed.

