Discrepancy in voltage and number of spikes in basic netpyne model when using NEURON v8.0.2 vs v8.1.0 #1764
Comments
I was able to reproduce the same behaviour with a single hh-cell and no connections/stimuli.
Also reproduced with a pure NEURON example: https://colab.research.google.com/drive/1RHNgLg9ZKauTmPGG3P11erccqWuTb3_T?usp=sharing
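For reference, a minimal single-cell script along those lines (a sketch, not the notebook itself; the cell dimensions and stimulus amplitude are arbitrary choices for illustration) could look like this, run once under each NEURON version and compared:

```python
# Minimal single-compartment hh reproduction: one soma, a current pulse,
# record time/voltage, and count upward threshold crossings.
from neuron import h
h.load_file('stdrun.hoc')

soma = h.Section(name='soma')
soma.L = soma.diam = 20          # arbitrary small cell
soma.insert('hh')

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5, 100, 0.3   # nA, enough to fire repetitively

t = h.Vector().record(h._ref_t)
v = h.Vector().record(soma(0.5)._ref_v)

h.finitialize(-65)
h.continuerun(120)

n_spikes = sum(1 for i in range(1, len(v)) if v[i - 1] < 0 <= v[i])
print('final v =', v[-1], 'mV;  spikes crossing 0 mV =', n_spikes)
```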
I'm not able to reproduce a difference with the above shared example. In each case I copied the main cell into test.py and copied the following cell to the terminal after starting.
I created two separate online Google Colabs for each version, and they show the discrepancy (just need to "Run all" in each): NEURON v8.0.2: NEURON v8.1.0:
For what it's worth, when using Colab, I see the same thing as Salvador, with NEURON v8.0.2:
NEURON v8.1.0:
It appears to me that this version was built with the CoreNEURON change to hh.mod (no TABLE, and GLOBALs changed to RANGE). Was CoreNEURON part of this distribution? @pramodk
It occurs to me that one solution to this is to not change the mod files for NEURON (where the TABLEs can be turned off) but only for CoreNEURON. It is still easy to validate CoreNEURON by comparing with NEURON with its TABLEs turned off. À la legacy units, for NEURON we could turn off TABLE by default.
Oh yes! It should have come to my mind earlier! As we are also distributing CoreNEURON within the wheels, it's true that results will be numerically different with v8.1.0.
If we consider that this discrepancy in the distribution needs to be urgently fixed: TABLE statements need to be disabled only for GPU execution with CoreNEURON. So we could re-create all CPU distributions without disabling TABLE statements (but this means a new release!).
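One quick way to check which variant of hh.mod a given installation was built with (my assumption of a diagnostic, not something stated in the thread): the TABLE statement in hh.mod generates a `usetable_hh` GLOBAL, so its absence suggests TABLE was stripped at build time.

```python
# Probe the installed build: if hh.mod was compiled with its TABLE statement,
# NEURON exposes the generated usetable_hh global; otherwise it does not exist.
from neuron import h

if hasattr(h, 'usetable_hh'):
    print('hh.mod compiled with TABLE; usetable_hh =', h.usetable_hh)
else:
    print('hh.mod compiled without TABLE (no usetable_hh global)')
```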
So what is the conclusion? Is there some flag we can set in NEURON v8.1.0 to reproduce the results of v8.0.2?
@salvadord: did you try setting
thanks pramod, we will try this. But is the plan to fix this discrepancy in the next NEURON releases, or should we change our netpyne tests to reflect the new expected outputs?
It is hard to predict whether CoreNEURON will ever be extended to handle TABLE statements. It is a lot of effort for little meaningful purpose, since TABLE is unlikely to improve performance. And if this last phrase is incorrect, it can't be demonstrated until the extension is complete and NMODL has been changed to have a bit of AI introduced to decide whether to automatically generate tables that incorporate dt, solely for the fixed step method, in order to optimize the performance of the
One possibility for the short term is to adopt the strategy of no longer changing hh.mod when configuring with
For the current NEURON release, I would change your test results so that
Today, we can create all CPU binary installers with TABLE statements ON and avoid the reported discrepancy. The issue/incompatibility exists only for GPU execution. This issue is tagged in the 8.2 milestone, but we plan to release 8.2 soon, so I am not sure if actual support for TABLE on GPUs will go there. But at least we can decide to enable TABLE statements in CPU builds and disable them only if GPU support is enabled.
Would this mean two different people both running 8.2 could get different results depending on whether their 8.2 installation supported the GPU? (Or would the difference only arise if the GPU was selected for computation?)
Sorry, I missed this @ramcdougal. Unfortunately, the answer is
I was revisiting this issue today to "fix" the discrepancy, but I am wondering whether the strategy I mentioned earlier is a good option or not. Below are my thoughts:
The change done in the previous release was not wrong but was missing from our documentation/changelog. So I am thinking:
Thoughts @ramcdougal @nrnhines?
My opinion is to do something analogous to the units change. In the TABLE case, in NEURON those are ON by default but can be turned off. The usetable_mechname global variables should be part of the globals.dat file. If one is using the GPU and usetable_mechname == 1, then an error message can be generated (I presume CoreNEURON CPU allows TABLE; if I'm mistaken, then generate the error in that case as well). The classic hh.mod can be left as is and mod2c/NMODL modified to just not use it for GPU. Writing to GLOBAL is still a problem, mostly because auto conversion to RANGE for CoreNEURON would give different sizes for the data array per instance. But that can be overcome with some tedious programming.
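A sketch of what this looks like from the Python side, assuming a build in which hh.mod keeps its TABLE statement (so the `usetable_hh` global exists); the helper `run()` and the cell/stimulus parameters are made up for illustration:

```python
# Compare the hh rate functions with TABLE interpolation on (usetable_hh = 1)
# and off (usetable_hh = 0) within a single NEURON build.
from neuron import h
h.load_file('stdrun.hoc')

soma = h.Section(name='soma')
soma.L = soma.diam = 20
soma.insert('hh')

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 1, 50, 0.3

def run(usetable):
    h.usetable_hh = usetable          # 1: lookup table, 0: exact evaluation
    v = h.Vector().record(soma(0.5)._ref_v)
    h.finitialize(-65)
    h.continuerun(60)
    return v.to_python()

v_table, v_exact = run(1), run(0)
print('max |dV| =', max(abs(a - b) for a, b in zip(v_table, v_exact)))
```

The point of the proposal above is that both behaviors remain available in one build, so a GPU (or validation) code path can refuse or disable tables without editing hh.mod itself.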
…ity)
- We were commenting the TABLE statement from hh.mod as TABLE was not supported in mod2c.
- NMODL on the CoreNEURON side supports TABLE statements also on the GPU side.
- Hence, we can remove the CMake toggling logic for TABLE.
- Also update the references for tests
fixes #1764
This took 2.5 years to close, but I am happy to mention that we have now restored the old behavior, i.e.
- We were commenting the TABLE statement from hh.mod as TABLE was not supported in mod2c.
- NMODL on the CoreNEURON side supports TABLE statements also on the GPU side.
- Hence, we can remove the CMake toggling logic for TABLE.
- Also update the references for tests in the ringtest and testcorenrn repos.
fixes #1764
we just noticed that the netpyne tests are failing with the new 8.1.0 release, since the models (even very simple ones) are producing a different number of spikes (compared to NEURON v8.0.2) … any idea why this might be happening? thanks
this is the simplest one that failed: https://github.com/suny-downstate-medical-center/netpyne/blob/development/doc/source/code/tut2.py
Mismatch: model tut2 numSpikes is 928 but expected value is 931
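For reference, a sketch of roughly what that check amounts to (my assumption; it presumes tut2.py runs the simulation itself and that netpyne stores recorded spike times in sim.simData['spkt']):

```python
# Hypothetical reproduction of the failing spike-count check for tut2.
from netpyne import sim

exec(open('tut2.py').read())            # builds the network and runs the simulation
num_spikes = len(sim.simData['spkt'])   # total spikes across recorded cells
print('numSpikes =', num_spikes)        # 928 under v8.1.0 vs the expected 931
```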
the netpyne network representation is identical in both (cell properties, conns, stims) ... checked the NEURON side just in case via `for c in list(h.List('NetCon')): print(c.srcgid(), c.postseg(), c.delay, c.weight[0])`, and they are also identical
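A self-contained form of that one-liner, for anyone wanting to repeat the comparison (it assumes the network has already been instantiated, e.g. after running the model script):

```python
# Dump per-NetCon properties so the two NEURON versions can be diffed.
from neuron import h

for nc in h.List('NetCon'):
    # source gid, postsynaptic segment, delay (ms), first weight entry
    print(nc.srcgid(), nc.postseg(), nc.delay, nc.weight[0])
```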
update: “Voltage traces of cells have a very tiny difference across the versions, on the order of 1e-4 or 1e-5. This divergence starts right away, from the first time step, and it is not accumulating over the course of time; it stays roughly the same. So the difference in spikes is definitely due to meeting or not meeting the threshold value for spike detection.
There are 4 cells that have this mismatch in spike count between versions, and here is the trace for one of them (8.0.2 vs 8.1.0)”
example divergence of values:
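A sketch of how such a trace comparison could be scripted (the file names are hypothetical; it assumes the soma voltage of one mismatching cell was saved to disk under each NEURON version):

```python
# Compare a voltage trace recorded under 8.0.2 with the same trace under 8.1.0.
import numpy as np

v_802 = np.load('v_soma_nrn802.npy')   # hypothetical file saved under 8.0.2
v_810 = np.load('v_soma_nrn810.npy')   # hypothetical file saved under 8.1.0

diff = np.abs(v_802 - v_810)
print('max |dV| =', diff.max())        # reported to be of order 1e-4 to 1e-5
print('|dV| after first step =', diff[1])  # divergence appears from the first step
```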
ramcdougal 2:55 PM
that's a huge divergence. You shouldn't see that kind of difference in 1 time step
billl 2:58 PM
seems like at 1 time step has to either be choice of integrator or a mystery event on the initial queue
ramcdougal 2:59 PM
so then you ought to be able to make a model with just cell 1?
billl 2:59 PM
but nuisance to redo like that in netpyneland
but after run 1st step can look at stuff for differences
has NMODL compiler been changed to the new one now?
ramcdougal 3:01 PM
but even in netpyne land there is presumably no input yet?
pretty sure no change to nrnivmodl yet
billl 3:02 PM
can have stuff for delivery at t=0
salvadord 3:11 PM
will ask him to try with single cell and check