Memory efficient spike encoding #301
Comments
Good idea.
I'm not sure I agree with your second point; we already have two types of input nodes. Another thing: pre-computing the spike encoding is typically more efficient in terms of time, but not in terms of memory (as I wrote in the first post). So there's a trade-off between pre-computing the encoding and generating spikes on the fly.
You are right, this trade-off is a good option to give the user. On long runs, memory can be in short supply. You could use some buffering mechanism to generate a couple of iterations ahead of time, to save CPU usage without completely exhausting memory. The confusion starts with the definition of the generator: if its purpose is to create/generate output once it is initialized, then its place is in the encoding module.
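To make the buffering idea concrete, here is a minimal sketch (not BindsNET code; all names are illustrative) of a per-timestep spike generator with a small lookahead buffer:

```python
import torch
from collections import deque

def poisson_spike_generator(rates: torch.Tensor, time: int, dt: float = 1.0):
    """Yield one timestep of spikes at a time, so the full
    [time, *input_shape] tensor is never held in memory.
    Uses a Bernoulli approximation of a Poisson process; rates are in Hz."""
    p = torch.clamp(rates * dt / 1000.0, max=1.0)
    for _ in range(time):
        yield torch.bernoulli(p).byte()

def buffered(gen, lookahead: int = 10):
    """Keep a small window of timesteps generated ahead of the one being
    consumed; in practice a worker thread could fill this buffer so that
    spike generation overlaps with simulation."""
    buf = deque()
    for spikes in gen:
        buf.append(spikes)
        if len(buf) > lookahead:
            yield buf.popleft()
    while buf:
        yield buf.popleft()

# Only `lookahead` timesteps are ever resident, regardless of simulation length.
rates = torch.rand(28 * 28) * 30.0
for spikes_t in buffered(poisson_spike_generator(rates, time=1000)):
    pass  # feed `spikes_t` to the input layer at this timestep
```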
Yeah, that's true, but if a layer connects to a nodes object that subclasses the input nodes type, that object can generate its spikes on the fly as the simulation steps forward. In the same way, nodes objects for other encodings would subclass the same base and maintain whatever parameters they need to produce spikes per timestep.
In the above, the nodes object maintains the parameters needed by its encoding function. Alternatively, we could pass the encoding function itself to the nodes object.
You are right. The point of the framework is to be as general as possible while still making sense. Even if a use case like the one you describe does not make sense now, it may make sense for someone's use case in the future.
This is inconsistent with the definition of the nodes objects. I agree that the behavior you describe is desirable, but it can be achieved using the current node unit definition, by feeding the desired neurons with the encoder's output at each timestep. In this way, the existing nodes abstraction stays unchanged.
One thing we could look towards is something like an AER-type (address-event representation) input, such as what is output from a DVS and other event-based sensors. At the end of the day, that would be equivalent to a sparse array representation. It should be pretty straightforward on the encoding side.
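As an illustration of what consuming an AER-style stream could look like, here is a rough sketch (the event layout and names are assumptions; a real DVS stream would also carry polarity):

```python
import torch

# Hypothetical AER-style event stream: each event is (timestep, x, y).
events = torch.tensor([
    [0, 3, 7],
    [0, 9, 2],
    [2, 3, 7],
    [5, 0, 1],
])

height, width, time = 10, 10, 6

def frame_at(t: int) -> torch.Tensor:
    """Build the dense spike frame for a single timestep from the event
    list, so the full [time, height, width] tensor is never materialised."""
    frame = torch.zeros(height, width, dtype=torch.uint8)
    mask = events[:, 0] == t
    frame[events[mask, 1], events[mask, 2]] = 1
    return frame

for t in range(time):
    spikes_t = frame_at(t)   # feed to the input layer at timestep t
```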
Yeah, mostly agree, except that PyTorch's support for sparse tensor ops is not super strong. But it would be great to look into this.
I will look into this. There's some documentation on what does and doesn't exist for sparse tensors: pytorch/pytorch#8853
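For reference, a minimal sketch of storing a spike train as a sparse COO tensor (shapes and firing rates are made up for illustration):

```python
import torch

time, n_neurons = 1000, 784
dense = (torch.rand(time, n_neurons) < 0.02).float()   # ~2% spike probability

# Keep only the (timestep, neuron) coordinates of the spikes.
indices = dense.nonzero().t()                          # shape [2, nnz]
values = torch.ones(indices.shape[1])
sparse = torch.sparse_coo_tensor(indices, values, size=(time, n_neurons))

# Dense storage grows with time * n_neurons; sparse storage grows with the
# number of spikes, which is far smaller at low firing rates.
print(dense.numel(), indices.shape[1])
```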
IMO, encoders should not be called on the user side, except for specific needs.
This is exactly my requirement: I need to use a temporal encoding scheme to develop a supervised SNN model, but I am unable to find any tool to handle this encoding part. I noticed that there are only three encoders in the repository.
@mahsa-ebrahimian, please use the encoders in BindsNET to convert static data like images to spike trains using the schemes available here. You can find how to use them in the examples. The open issue here is only about the efficiency and memory utilization of the encoders.
At present, we generate input spikes prior to simulation. This results in tensors of shape `[time, *input_shape]`. When `time` and / or `input_shape` is large, this uses a lot of memory.

What we could do is generate spikes from encodings as needed during simulation. The most obvious way to implement this, to me, is to create `Nodes` objects which maintain the variables needed to generate spikes according to their encoding function. For example, `PoissonNodes` would maintain rate parameters and generate spikes per timestep according to that rate. This would reduce memory usage from `[time, *input_shape]` down to just `[*input_shape]`, which would be a big win, especially for long simulations.
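A minimal sketch of what such a nodes object might look like — the class name, attributes, and timestep interface here are assumptions for illustration, not the existing BindsNET `Nodes` API:

```python
import torch

class PoissonInputNodes:
    """Illustrative input population that stores only its rate parameters
    (shape [*input_shape]) and draws spikes one timestep at a time, instead
    of holding a pre-computed [time, *input_shape] spike tensor."""

    def __init__(self, rates: torch.Tensor, dt: float = 1.0):
        # Per-timestep spike probability (Bernoulli approximation of Poisson).
        self.p = torch.clamp(rates * dt / 1000.0, max=1.0)
        self.s = torch.zeros_like(self.p, dtype=torch.uint8)  # current spikes

    def forward(self) -> torch.Tensor:
        self.s = torch.bernoulli(self.p).byte()
        return self.s

# Memory held between timesteps is O(prod(input_shape)), independent of `time`.
nodes = PoissonInputNodes(rates=torch.rand(1, 28, 28) * 63.75)
for t in range(250):
    spikes_t = nodes.forward()   # use as the input layer's spikes at step t
```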