
Discrepancy in voltage and number of spikes in basic netpyne model when using NEURON v8.0.2 vs v8.1.0 #1764

Closed
salvadord opened this issue Apr 5, 2022 · 18 comments · Fixed by #3093

@salvadord
Member

we just noticed that the netpyne tests are failing with the new 8.1.0 release, since the models (even very simple ones) are producing a different number of spikes (compared to NEURON v8.0.2) … any idea why this might be happening? thanks

this is the simplest model that failed: https://github.com/suny-downstate-medical-center/netpyne/blob/development/doc/source/code/tut2.py
Mismatch: model tut2 numSpikes is 928 but expected value is 931

the netpyne network representation is identical in both (cell properties, conns, stims) ... checked the NEURON side just in case via for c in list(h.List('NetCon')): print(c.srcgid(), c.postseg(), c.delay, c.weight[0]), and they are also identical
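For reference, that NetCon check written out as a short script (a sketch; it assumes the network has already been instantiated, so the NetCons exist on the NEURON side):

from neuron import h

# print source gid, postsynaptic segment, delay and weight of every NetCon
for nc in list(h.List('NetCon')):
    print(nc.srcgid(), nc.postseg(), nc.delay, nc.weight[0])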

update: “Voltage traces of the cells have a very tiny difference across the versions, on the order of 1e-4 or 1e-5. This divergence starts right away, from the first time step, and it does not accumulate over the course of time; it stays roughly the same. So the difference in spikes is definitely due to meeting or not meeting the threshold value for spike detection.
There are 4 cells that have this mismatch in spike count between versions, and here is the trace for one of them (8.0.2 vs 8.1.0)”

example divergence of values:
[image: example voltage values diverging between v8.0.2 and v8.1.0]
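A quick way to quantify the effect of threshold crossings (a minimal sketch; the 10 mV threshold and the function name are placeholders, not the exact NetPyNE spike-detection settings):

import numpy as np

def count_spikes(v, threshold=10.0):
    # count upward crossings of the spike-detection threshold in a voltage trace
    v = np.asarray(v)
    return int(np.count_nonzero((v[:-1] < threshold) & (v[1:] >= threshold)))

Applied per cell to the traces recorded under each version, a count like this should account for the 928 vs 931 mismatch.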

ramcdougal 2:55 PM
that's a huge divergence. You shouldn't see that kind of difference in 1 time step

billl 2:58 PM
seems like at 1 time step it has to be either the choice of integrator or a mystery event on the initial queue

ramcdougal 2:59 PM
so then you ought to be able to make a model with just cell 1?

billl 2:59 PM
but nuisance to redo like that in netpyneland
but after run 1st step can look at stuff for differences
has NMODL compiler been changed to the new one now?

ramcdougal 3:01 PM
but even in netpyne land there is presumably no input yet?
pretty sure no change to nrnivmodl yet

billl 3:02 PM
can have stuff for delivery at t=0

salvadord 3:11 PM
will ask him to try with single cell and check

@salvadord salvadord added the bug label Apr 5, 2022
@vvbragin

vvbragin commented Apr 6, 2022

I was able to reproduce the same behaviour with a single hh-cell and no connections/stimuli
https://colab.research.google.com/drive/1vuY7e1oORvzgzLG1AfR_nT9Trt-gneIK?usp=sharing
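For context, a single hh cell with no connections or stimuli can be set up in NetPyNE roughly as follows (a sketch using the tut2 soma parameters; not necessarily identical to the linked notebook):

from netpyne import specs, sim

netParams = specs.NetParams()
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 1}   # one cell, no conns/stims
netParams.cellParams['PYRrule'] = {
    'conds': {'cellType': 'PYR'},
    'secs': {'soma': {
        'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
        'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036, 'gl': 0.003, 'el': -70}}}}}

simConfig = specs.SimConfig()
simConfig.duration = 1000   # ms
simConfig.dt = 0.025
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)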

@salvadord
Member Author

Also reproduced with pure NEURON example: https://colab.research.google.com/drive/1RHNgLg9ZKauTmPGG3P11erccqWuTb3_T?usp=sharing
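A minimal pure-NEURON version of the same test looks roughly like this (a sketch; geometry and run time are assumptions and may differ from the linked notebook):

from neuron import h
h.load_file('stdrun.hoc')

soma = h.Section(name='soma')
soma.L = soma.diam = 18.8
soma.insert('hh')

# record the somatic membrane potential at every time step
v_vec_soma = h.Vector().record(soma(0.5)._ref_v)

h.dt = 0.025
h.finitialize(-65)
h.continuerun(1000)

Printing v_vec_soma as a numpy array under each version (as in the comments below) shows whether the traces diverge.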

@nrnhines
Member

nrnhines commented Apr 6, 2022

I'm not able to reproduce a difference with the above shared example.

NEURON v8.0.0:
[-65.         -65.00369252 -65.00738254 ... -69.86069991 -69.86092456
 -69.86114907]

NEURON v8.1.0:
[-65.         -65.00369252 -65.00738254 ... -69.86069991 -69.86092456
 -69.86114907]

In each case I copied the main cell into test.py and copied the following cell to the terminal after starting python -i test.py
On your side, I would check that

>>> h.nrnunit_use_legacy()
0.0

and

>>> h.usetable_hh
1.0
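A one-shot version of those two checks (a sketch; the try/except is there because usetable_hh does not exist in builds where the TABLE statement was stripped from hh.mod):

from neuron import h

print('legacy units:', h.nrnunit_use_legacy())
try:
    print('usetable_hh:', h.usetable_hh)
except AttributeError:
    print('usetable_hh not available (hh.mod built without a TABLE statement)')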

@salvadord
Member Author

I created two separate online Google Colabs for each version, and they show the discrepancy (just need to "Run all" in each):

NEURON v8.0.2:
[-65. -65.00369252 -65.00738254 ... -69.86069991 -69.86092456
-69.86114907]

NEURON v8.1.0:
[-65. -65.00369252 -65.00738254 ... -69.86087617 -69.86110075
-69.86132518]

@salvadord
Member Author

[image: voltage trace comparison between v8.0.2 and v8.1.0]

@ramcdougal
Member

For what it's worth, when using colab, I see the same thing as Salvador, with m, h, and n showing different results as early as the first timestep. Following an h.finitialize(-65); h.fadvance() I see:

NEURON v8.0.2:

soma(0.5).v = -65.0036925214856
soma(0.5).hh.m = 0.05293029486077309
soma(0.5).hh.h = 0.5961211267356088
soma(0.5).hh.n = 0.3176766574927593 

NEURON v8.1.0:

soma(0.5).v = -65.0036925214856
soma(0.5).hh.m = 0.052930176008955186
soma(0.5).hh.h = 0.5961211320206233
soma(0.5).hh.n = 0.3176766555121734
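That single-step check can be reproduced on an isolated hh soma with a few lines (a sketch; the section setup is an assumption):

from neuron import h
h.load_file('stdrun.hoc')

soma = h.Section(name='soma')
soma.insert('hh')

h.dt = 0.025
h.finitialize(-65)
h.fadvance()   # advance exactly one time step

seg = soma(0.5)
print('soma(0.5).v    =', seg.v)
print('soma(0.5).hh.m =', seg.hh.m)
print('soma(0.5).hh.h =', seg.hh.h)
print('soma(0.5).hh.n =', seg.hh.n)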

@nrnhines
Member

nrnhines commented Apr 6, 2022

$ pyenv virtualenv py39-nrn8.1
$ pyenv shell py39-nrn8.1
$ pip install neuron==8.1.0
$ python -i test.py
>>> import numpy as np
>>> import neuron
>>> 
>>> vSoma = np.array(v_vec_soma.to_python())
>>> print(f"\n\n\nNEURON v{neuron.__version__}:\n{vSoma}")



NEURON v8.1.0:
[-65.         -65.00369252 -65.00738254 ... -69.86087617 -69.86110075
 -69.86132518]
>>> h.usetable_hh
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'hoc.HocObject' object has no attribute 'usetable_hh'
>>> 

It appears to me that this version was built with the coreneuron change to hh.mod (no TABLE and GLOBALS changed to RANGE). Was CoreNEURON part of this distribution?

@pramodk It occurs to me that one solution to this is to not change the mod files for NEURON (where the TABLEs can be turned off) but only for CoreNEURON. It is still easy to validate CoreNEURON by comparing with NEURON with its TABLEs turned off. A la legacy units, for NEURON we could turn off TABLE by default.

@pramodk
Member

pramodk commented Apr 6, 2022

It appears to me that this version was built with the coreneuron change to hh.mod (no TABLE and GLOBALS changed to RANGE). Was CoreNEURON part of this distribution?

Oh yes! It should have come to my mind earlier! As we are also distributing coreneuron within the wheels, TABLE statements are disabled in our Linux/MacOS distributions.

It's true that results will be numerically different with v8.1.0 for the hh channel, but there is nothing fundamentally wrong. In the next minor/patch release we can re-enable TABLE statements and also implement TABLE statement support for coreneuron gpu.

If we consider that this discrepancy in the distribution needs to be fixed urgently: TABLE statements need to be disabled only for GPU execution with coreneuron. So we could re-create all CPU distributions without disabling TABLE statements (but this means a new release!).

@alexsavulescu alexsavulescu added this to the Release v8.2 milestone Apr 12, 2022
@salvadord
Member Author

So what is the conclusion? Is there some flag we can set in NEURON v8.1.0 to reproduce the results of v8.0.2 ?

@pramodk
Member

pramodk commented Apr 14, 2022

@salvadord : did you try setting h.usetable_hh to 0? (i.e. disable TABLE statements)
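For illustration, a sketch of that workaround on a plain hh soma (set the flag before initializing; in builds where the TABLE statement was removed from hh.mod, usetable_hh does not exist and this step is neither possible nor needed):

from neuron import h
h.load_file('stdrun.hoc')

soma = h.Section(name='soma')
soma.insert('hh')

h.usetable_hh = 0   # disable the hh TABLE so results match a no-TABLE build
h.finitialize(-65)
h.continuerun(100)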

@salvadord
Member Author

thanks pramod, we will try this. But is the plan to fix this discrepancy in the next NEURON releases, or should we change our netpyne tests to reflect the new expected outputs?

@nrnhines
Member

It is hard to predict whether CoreNEURON will ever be extended to handle TABLE statements. It is a lot of effort for little meaningful purpose, since TABLE is unlikely to improve performance. And if this last phrase is incorrect, it can't be demonstrated until the extension is complete and NMODL has been changed to have a bit of AI introduced to decide whether to automatically generate tables that incorporate dt solely for the fixed step method, in order to optimize the performance of the cnexp integration method. Perhaps @pramodk can say if there is a time frame for TABLE in CoreNEURON.

One possibility for the short term is to adopt the strategy of no longer changing hh.mod when configuring with -DNRN_ENABLE_CORENRN=ON, but modifying mod2c and NMODL to just ignore the TABLE statement and, when running CoreNEURON, raise an error if usetable_hh != 0. The lacuna remains of what to do about changing some GLOBALs to RANGE.

@nrnhines
Member

For the current NEURON release, I would change your tests so that the expected values correspond to the h.usetable_hh=0 results.

@pramodk
Member

pramodk commented Apr 19, 2022

Perhaps @pramodk can say if there is a time frame for TABLE in CoreNEURON.

Today, we can create all cpu binary installers with TABLE statements ON and avoid the reported discrepancy. The issue/incompatibility exists only for GPU execution.

This issue is tagged in the 8.2 milestone, but we plan to release 8.2 soon, so I'm not sure if actual support for TABLE on GPUs will go there. But at least we can decide to enable TABLE statements in CPU builds and disable them only if GPU support is enabled.

@ramcdougal
Member

Would this mean two different people both running 8.2 could get different results depending on if their 8.2 installation supported the GPU? (Or would the difference only arise if the GPU was selected for computation?)

@pramodk
Member

pramodk commented May 24, 2022

Would this mean two different people both running 8.2 could get different results depending on if their 8.2 installation supported the GPU?

Sorry, I missed this @ramcdougal. Unfortunately, the answer is yes.

I was revisiting this issue today to "fix" the discrepancy but wondering whether the strategy I mentioned earlier is a good option or not. Below are my thoughts:

  • Currently neuron distributions are shipped with coreneuron enabled. And coreneuron never enables the TABLE statement in hh.mod. This means NEURON cannot have a TABLE statement in hh.mod.
  • I was thinking of making this conditional on the GPU build (as mentioned in previous comments), but this would break tests that we use on the coreneuron side:
    • coreneuron internally has datasets for ringtest where hh.mod doesn't contain TABLE
    • if we conditionally enable/disable hh.mod then we need another set of binary test datasets inside the coreneuron repo. This would just add too much spaghetti 🍝
  • Also, there is an "issue" that @ramcdougal mentioned above: CPU and GPU builds will have different defaults and hence will produce different results for hh.mod.

The change done in the previous release was not wrong, but it was missing from our documentation/changelog. So I am thinking:

  • it's not worth fixing this (and we don't have time) in the upcoming 8.2.0.
  • for now, we should just add this information in the release notes so that users can update their "baselines" if they have (like netpyne test)
  • we can revisit this in 9.0 and see how TABLE statements should be handled on GPU as well.

Thoughts @ramcdougal @nrnhines ?

@nrnhines
Member

My opinion is to do something analogous to the units change. In the TABLE case, in NEURON those are ON by default but they can be turned off. The usetable_mechname global variables should be part of the globals.dat file. If one is using the GPU and a usetable_mechname == 1, then an error message can be generated (I presume coreneuron cpu allows TABLE; if I'm mistaken, then generate the error in that case as well). The classic hh.mod can be left as is and mod2c/NMODL modified to just not use it for GPU. Writing to GLOBAL is still a problem, mostly because auto conversion to RANGE for CoreNEURON would give different sizes for the data array per instance. But that can be overcome with some tedious programming.

pramodk added a commit that referenced this issue Sep 24, 2024
…ity)

- We were commenting TABLE statement from hh.mod as TABLE
  statement was not supported in mod2c.
- NMODL on CoreNEURON side supports TABLE statements also
  on GPU side.
- Hence, we can remove the CMake toggling logic for TABLE.
- Also update the references for tests

fixes #1764
@pramodk
Member

pramodk commented Sep 25, 2024

This took 2.5 years to close, but I am happy to mention that we have now restored the old behavior, i.e. TABLE is enabled by default.

JCGoran pushed a commit that referenced this issue Oct 31, 2024
- We were commenting TABLE statement from hh.mod as TABLE
  statement was not supported in mod2c.
- NMODL on CoreNEURON side supports TABLE statements also
  on GPU side.
- Hence, we can remove the CMake toggling logic for TABLE.
- Also update the references for tests in ringtest and testcorenrn repos

fixes #1764