
C++ support for LFPy methods. #1115

Closed
halfflat opened this issue Aug 20, 2020 · 4 comments
@halfflat commented Aug 20, 2020:

LFPy will be factoring out their response calculation methods, and this will enable LFPy to pass linear response matrices through to Arbor for library-side computation of local field potentials. See discussion: LFPy #187.

From Arbor's point of view, the workflow would be something like:

  1. Relevant cells are given whole-cell-transmembrane-current probes.
  2. Probe metadata is extracted from simulator (see Easier user access to probe metadata #1079 / Retrieve probe metadata from simulator object. #1101) and given to LFPy to compute response vectors for each electrode.
  3. Response matrix is used to construct a specialized sampler callback.
  4. Simulation is run.
  5. Field information is retrieved from callback.
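The workflow above can be sketched in a few lines of plain Python. This is illustrative only: `LFPSampler`, the callback signature, and the response-matrix layout are hypothetical stand-ins, not the Arbor API. The response matrix `M` (n_electrodes × n_CVs, supplied by LFPy in step 2) maps transmembrane currents to electrode potentials, V_e = M · I_m:

```python
# Sketch of the Option-1 workflow: the sampler callback (step 3) records raw
# transmembrane currents during the run (step 4); electrode potentials are
# computed only afterwards (step 5). All names here are illustrative.

class LFPSampler:
    def __init__(self, response):     # response: n_electrodes x n_cv matrix from LFPy
        self.response = response
        self.times = []
        self.currents = []            # one current vector per sample time

    def __call__(self, t, i_m):       # invoked by the simulation at each sample time
        self.times.append(t)
        self.currents.append(list(i_m))

    def potentials(self):             # step 5: V_e[e][k] = sum_j M[e][j] * I_m[k][j]
        return [[sum(m * i for m, i in zip(row, sample))
                 for sample in self.currents]
                for row in self.response]

# Toy example: two electrodes, three CVs, two sample times.
M = [[1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0]]
s = LFPSampler(M)
s(0.0, [1.0, 2.0, 3.0])
s(0.1, [0.0, 1.0, 0.0])
print(s.potentials())   # [[2.0, 0.5], [4.0, 0.5]]
```

The key property of this arrangement is that the raw currents are kept, so the matrix multiplication can be redone or refined after the fact.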

Regarding the summation of field contributions from multiple cells, there are a few options:

  1. The callback maintains field contributions for each cell and electrode and sample time; reduction is only performed once simulation is over.
  2. The callback performs reductions across all local cells; reduction across remote cells is performed once simulation is over.
  3. Global reductions are performed every integration interval; no further reduction is required once simulation is done.
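The difference between Options 1 and 2 can be made concrete with a small sketch (names and data layout are hypothetical; no Arbor API is implied). Option 1 keeps every cell's contribution and sums at the end; Option 2 keeps only a running per-rank total:

```python
# Option 1: store per-cell contributions in full; sum over cells only after
# the simulation is over. Summation order is fixed by cell gid, so the result
# is independent of how cells are partitioned across processes.
def reduce_option1(per_cell_contribs):
    # per_cell_contribs: {cell_gid: [V_e per sample time]}
    n = len(next(iter(per_cell_contribs.values())))
    return [sum(per_cell_contribs[gid][k] for gid in sorted(per_cell_contribs))
            for k in range(n)]

# Option 2: accumulate across local cells as the run proceeds; only the
# running total is kept, so memory stays O(n_samples) per rank.
class RunningSum:
    def __init__(self, n_samples):
        self.total = [0.0] * n_samples
    def add(self, contrib):           # called once per local cell
        for k, v in enumerate(contrib):
            self.total[k] += v

contribs = {7: [1.0, 2.0], 9: [0.5, 0.5]}
print(reduce_option1(contribs))       # [1.5, 2.5]

rs = RunningSum(2)
for c in contribs.values():
    rs.add(c)
print(rs.total)                       # [1.5, 2.5]
```

With floating point the two give bit-identical results only when the summation order is fixed, which is exactly why Option 1 is the reproducible one: Option 2's order depends on the partition.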

Option 1 permits partition-independent reductions, enabling in-principle reproducibility across different multiprocess scenarios, but it will consume the most memory during simulation.

Options 2 and 3 reduce the memory use, at the expense of robust reproducibility. Option 3 allows online streaming of results at each time step, minimizing storage, but with greater complexity in implementation (it would require global communication, probably managed by an additional thread).

There's no reason why we can't support multiple ways of collating and reducing the data, but in the first instance I would go with Option 1, as it is the easiest and supports reproducibility.

The C++ support will comprise the specialized sampler callback plus some measure of documentation. Python support would follow as a separate tranche of work, as would more sophisticated versions of the C++ support (i.e. Options 2 and/or 3 above).

halfflat added the enhancement and co-sim (related to co-simulation work / SGA3 T5.5) labels Aug 20, 2020
halfflat self-assigned this Aug 20, 2020
@halfflat commented Nov 20, 2020:

In the first instance, the Python interfaces provided by #1225 and #1250 should suffice for an implementation of Option 1. More elaborate implementations can be postponed!

The next step will be making a Python version of the LFP demo that uses LFPykit. That should be sufficient to close the issue.

@espenhgn (Collaborator) commented Nov 22, 2020:

Hi @halfflat , this all sounds good as a first step. I started a simple demonstration notebook here: https://github.com/LFPy/LFPykit/blob/arborexample/examples/Example_Arbor.ipynb
The example should replicate the corresponding notebooks which use NEURON and LFPy respectively (just a simple passive stick with sinusoidal synaptic input at one end). As we discussed previously, some additional steps may be required to account for variable diameters across individual CVs (see https://github.com/LFPy/LFPykit/blob/master/examples/Example_LFPy_pt3d.ipynb). I'm only a bit concerned about our usual assumption of constant current-source density along line segments if their diameters vary, given analytically as
i_m = (1/r_i) \partial^2 V(x,t) / \partial x^2 (eq. 10 in https://doi.org/10.1371/journal.pcbi.1003928)

In terms of Option 1, it's fine as a first step, but I can imagine that memory consumption will become quite problematic for large(r) networks, or even for single-cell simulations if the simulation duration is long. Assuming double precision, 1000 compartments, a 20 kHz sampling frequency, and 60 s of biological time, that is almost (64 bit × 1000 × 20000 s⁻¹ × 60 s / (8 bit/B × 1024³ B/GB) =) 8.94 GB of memory consumed just to record the currents.
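As a quick check, the estimate above works out as follows (8 bytes per double, per compartment, per sample):

```python
# Back-of-the-envelope memory estimate for Option 1:
# double precision (8 B) x 1000 compartments x 20 kHz sampling x 60 s.
n_comp = 1000            # compartments (CVs)
f_sample = 20_000        # samples per second (20 kHz)
duration = 60            # seconds of biological time
bytes_per_sample = 8     # one double per compartment per sample

total_bytes = bytes_per_sample * n_comp * f_sample * duration
print(total_bytes)                        # 9600000000
print(round(total_bytes / 1024**3, 2))    # 8.94 (GiB)
```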

In the present version, is it possible to repeatedly advance the simulation by say 1s, perform the multiplication, reduce, save the data and reset the current recording?
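The chunked workflow asked about here can be sketched generically. Everything below (`sim`, `run`, `take_samples`, `run_chunked`) is a hypothetical stand-in used to illustrate the loop structure, not the actual Arbor interface:

```python
# Sketch of chunked advancement: advance in 1 s windows, apply the response
# matrix, reduce, save, then reset the current buffer before the next window.
# `sim` and its methods are illustrative stand-ins, not the Arbor API.

def run_chunked(sim, response, t_end, chunk=1.0):
    saved = []
    t = 0.0
    while t < t_end:
        t = min(t + chunk, t_end)
        sim.run(t)                        # advance simulation to time t
        currents = sim.take_samples()     # drain and reset recorded currents
        # multiply + reduce: electrode potentials for this window only
        window = [[sum(m * i for m, i in zip(row, s)) for s in currents]
                  for row in response]
        saved.append(window)              # "save" (here: keep in a list)
    return saved

class FakeSim:                            # toy stand-in to exercise the loop
    def __init__(self):
        self.buf = []
    def run(self, t):
        self.buf = [[1.0, 1.0]]           # pretend: one current sample per window
    def take_samples(self):
        out, self.buf = self.buf, []      # hand over samples, reset buffer
        return out

out = run_chunked(FakeSim(), [[1.0, 2.0]], t_end=2.0)
print(out)   # [[[3.0]], [[3.0]]]
```

Peak memory then scales with the chunk length rather than the full simulation duration, which addresses the 8.94 GB concern above at the cost of losing access to the raw currents after each window.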

@halfflat commented:

We have a ticket open for restarting simulations: #873. Currently it does not work, but it really has to. A related open ticket is #1232.

I realized my comment above should have been attached to #1036 rather than this issue. We can keep this one open until more robust implementations are available.

@halfflat commented:

I'm closing this as resolved via Option 1; new issue #1385 is the placeholder for in-library support for LFP calculations in a dedicated sampler, corresponding to Option 2.
