Recover numba #108
Conversation
Okay. It's working fine for me.
I think that would work - you need to make that option accessible from the outside, though. I still think this parallelization is a good idea, since it is an immediate help for us developers: it speeds up the computation of a single eko (as we typically do), whereas #105 is for users, who typically request several Q2. This can be addressed here or back in #83, where it was introduced.
Shall I add an option to the operator card? Something like:
Maybe yes - the name has to be more specific, though, I think (or maybe it's just fine, since, as said, #105 plays on a different level and parallelizes on the outside) ... or maybe pass directly the number of processes? With 0 = all or something like this (and -1 = all-1).
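For illustration, a minimal sketch of how such a runcard value could be mapped to an actual process count, assuming the 0 = all / -1 = all-1 convention proposed above (the function name and the convention details are hypothetical, not actual eko API):

```python
import multiprocessing

def resolve_n_processes(requested: int) -> int:
    """Map a hypothetical runcard value to a process count:
    0 -> all cores, -n -> all cores minus n, n > 0 -> exactly n."""
    n_cores = multiprocessing.cpu_count()
    if requested == 0:
        return n_cores
    if requested < 0:
        # e.g. -1 = all-1; never go below a single process
        return max(1, n_cores + requested)
    return requested
```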
The most important question: is the cache still working?
Definitely, I'd put the number of processors in the runcard; this is something you always want to control. In #105, I started splitting the operator card into several chunks, so this option can now be scoped in.
Actually, there will be no clash with the parallelization itself w.r.t. #105 and related, since job management should be done:
We necessarily have to split the jobs in
Yes, I think so - but please do a simple run yourself (not that this has an interplay with the parallelization, since it seems you need to compile numpy in numba).
Codecov Report
@@ Coverage Diff @@
## feature/N3LO_matching #108 +/- ##
=======================================================
Coverage 100.00% 100.00%
=======================================================
Files 59 59
Lines 2956 2965 +9
=======================================================
+ Hits 2956 2965 +9
I'm happy to merge.
(All tests run on a single core, while benchmarks still run in parallel.)
Nothing but signatures and multiprocessing.Pool related stuff. I'm happy to merge as well.
In case we want the signatures back, they are still in the git history anyhow, so we can recover them with little effort in the near future (more changes mean more effort, of course, but then it is even less likely we'd want to change back).
Not that we should - I believe this is working as expected - just pointing out that we could.
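For reference, a minimal sketch of the kind of multiprocessing.Pool usage being discussed, under the assumption that each operator entry can be computed independently (the integrand below is a placeholder, not eko's actual quadrature):

```python
import multiprocessing

def compute_entry(k):
    # Stand-in for the per-entry integration done in
    # eko.evolution_operator; deliberately trivial here.
    return k * k

if __name__ == "__main__":
    grid = range(100)
    # One worker per core by default; the runcard option discussed
    # above could control the pool size instead.
    with multiprocessing.Pool() as pool:
        results = pool.map(compute_entry, grid)
    print(sum(results))
```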
Two minor things - else we can merge and move on ...
Here I want to recover the "true" numba behaviour, i.e. compilation just-in-time (and not ahead-of-time):
import eko
is immediate, while running, e.g., the LHA benchmark has some initialization cost. Please have a look and then we can merge back to #83.
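A minimal sketch of the just-in-time pattern meant here, with numba's on-disk cache enabled so the compilation cost is paid only on the first run (plain numba for illustration, not eko's actual kernels):

```python
import numba as nb
import numpy as np

@nb.njit(cache=True)  # JIT: compiled at first call, then cached on disk
def sum_of_squares(x):
    total = 0.0
    for v in x:
        total += v * v
    return total

# The first call triggers compilation (the initialization cost mentioned
# above); subsequent runs reuse the cache, and `import` stays immediate.
print(sum_of_squares(np.arange(10.0)))
```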
*actually, they are not - because https://github.com/N3PDF/eko/blob/606138fcfc31e0364457ab49e9e9557597e49a42/src/eko/evolution_operator/__init__.py#L243 forces me to do a manual
CTRL+C
in the test shell - and hence I strongly recommend reverting that.
PS: I think we introduced the signatures because at some point we had problems with types, i.e. we were not casting sufficiently - hopefully this is gone now ...
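For context, this is roughly what an explicit numba signature looks like, versus the lazy variant recovered in this PR (illustrative only; the removed signatures lived in eko's own modules):

```python
import numba as nb

# Eager: the signature pins the types and triggers compilation at import
# time, which is what made `import eko` slow.
@nb.njit("float64(float64, complex128)")
def weighted_norm(w, z):
    return w * abs(z)

# Lazy: types are inferred at the first call instead, so the cost moves
# from import time to run time.
@nb.njit
def weighted_norm_lazy(w, z):
    return w * abs(z)
```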