C++ support for LFPy methods. #1115
Hi @halfflat, this all sounds good as a first step. I started a simple demonstration notebook here: https://github.com/LFPy/LFPykit/blob/arborexample/examples/Example_Arbor.ipynb

Regarding Option 1: it is good as a first step, but I can imagine that memory consumption will be quite problematic for larger networks, or even for single-cell simulations if the simulation duration is long. Assume double precision, 1000 compartments, a 20 kHz sampling frequency, and 60 s of biological time: that gives (64 bit × 1000 × 20000 s⁻¹ × 60 s / (8 bit/B × 1024³ B/GB) ≈) 8.94 GB of memory consumed just to record the currents. In the present version, is it possible to repeatedly advance the simulation by, say, 1 s, perform the multiplication, reduce, save the data, and reset the current recording?
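For reference, the memory estimate above works out as follows (a quick sanity check of the arithmetic, not part of any API):

```python
# Back-of-the-envelope check of the memory figure quoted above:
# double-precision samples for every compartment at 20 kHz for 60 s.
BITS_PER_SAMPLE = 64    # double precision
N_COMPARTMENTS = 1000
SAMPLING_HZ = 20_000    # 20 kHz
DURATION_S = 60         # 60 s of biological time

total_bits = BITS_PER_SAMPLE * N_COMPARTMENTS * SAMPLING_HZ * DURATION_S
total_gib = total_bits / (8 * 1024**3)  # bits -> bytes -> GiB

print(f"{total_gib:.2f} GB")  # → 8.94 GB just to record the currents
```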
We have a ticket open for restarting simulations: #873. Currently this does not work, but it really has to. A related open ticket is #1232.

I realized my comment above should have been attached to #1036 rather than this issue. We can keep this one open until more robust implementations are available.
I'm closing this as resolved via Option 1; new issue #1385 is the placeholder for in-library support for LFP calculations in a dedicated sampler, corresponding to Option 2. |
LFPy will be factoring out its response-calculation methods; this will enable LFPy to pass linear response matrices through to Arbor for library-side computation of local field potentials. See discussion: LFPy #187.
From Arbor's point of view, the workflow would be something like:
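The core of the calculation under discussion is a linear map: given a response matrix `M` (electrodes × compartments) supplied by LFPy, the extracellular potentials are simply `M` applied to the recorded transmembrane currents at each sample. A minimal sketch with NumPy; all names and shapes here are illustrative, not Arbor or LFPykit API:

```python
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_compartments, n_samples = 4, 10, 5

# Linear response matrix, as would be provided by LFPy
# (illustrative random values standing in for the real geometry-derived map).
M = rng.standard_normal((n_electrodes, n_compartments))

# Transmembrane currents recorded by a sampler, one column per time step.
I_m = rng.standard_normal((n_compartments, n_samples))

# Extracellular potentials: one matrix-matrix product over the recording
# (equivalently, a matrix-vector product per sample when streaming).
V_ex = M @ I_m
assert V_ex.shape == (n_electrodes, n_samples)
```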
Regarding the summation of field contributions from multiple cells, there are a few options:
Option 1 allows partition-independent reductions, giving in-principle reproducibility across different multiprocess scenarios, but it will consume the most memory during simulation.
Options 2 and 3 reduce the memory use, at the expense of robust reproducibility. Option 3 allows online streaming of results at each time step, minimizing storage, but with greater complexity in implementation (it would require global communication, probably managed by an additional thread).
There's no reason why we can't support multiple ways of collating and reducing the data, but in the first instance I would go with Option 1, as it is the easiest and supports reproducibility.
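To illustrate why Option 1 is partition-independent (hypothetical data, not Arbor code): if each cell's contribution to the signal is kept separately, the contributions can be gathered and summed in a fixed canonical order regardless of how cells were distributed over processes, so the floating-point result is identical across partitionings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_samples = 8, 6

# Per-cell contributions to the extracellular signal, kept separately.
contrib = {gid: rng.standard_normal(n_samples) for gid in range(n_cells)}

def reduce_canonical(partition):
    # Gather per-cell signals from all "ranks", then sum in gid order,
    # independent of which rank held which cell.
    gathered = {gid: contrib[gid] for rank in partition for gid in rank}
    total = np.zeros(n_samples)
    for gid in sorted(gathered):
        total += gathered[gid]
    return total

# Two different distributions of cells over processes...
a = reduce_canonical([[0, 1, 2, 3], [4, 5, 6, 7]])
b = reduce_canonical([[0, 3, 5], [1, 4, 7], [2, 6]])

# ...yield bit-identical results, because the summation order is fixed.
assert np.array_equal(a, b)
```

The price, as noted above, is holding every cell's contribution in memory until the reduction; Options 2 and 3 trade that away.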
The C++ support will comprise the specialized sampler callback plus some measure of documentation. Python support would follow as a separate tranche of work, as would more sophisticated versions of the C++ support (i.e. Options 2 and/or 3 above).