Add `Grid::evolve`, a better version of `Grid::convolute_eko` #184
Conversation
Codecov Report — Base: 91.22% // Head: 91.38% // Increases project coverage by +0.15%.

Additional details and impacted files:

```diff
@@           Coverage Diff            @@
##           master     #184    +/-   ##
=========================================
+ Coverage   91.22%   91.38%   +0.15%
=========================================
  Files          47       49       +2
  Lines        6920     7369     +449
=========================================
+ Hits         6313     6734     +421
- Misses        607      635      +28
```

☔ View full report at Codecov.
To give you a starting point, lines 2066 to 2076 in 58e1132 are where the actual convolution happens; everything else is just bookkeeping. In line 2073 the values …
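To give an idea of what that core operation looks like, here is a minimal, self-contained sketch of such a contraction for a DIS-like (single initial-state) case. The data layout and names are illustrative assumptions, not PineAPPL's actual internals, which also loop over bins, orders and luminosity channels:

```rust
/// Sketch of the evolution contraction for one factorization scale:
///
///   fk[pid0][x0] = sum_{pid1, x1} op[pid0][x0][pid1][x1] * grid[pid1][x1]
///
/// where `0` refers to the fitting scale and `1` to the process scale.
fn contract_dis(
    op: &[Vec<Vec<Vec<f64>>>], // indexed as [pid0][x0][pid1][x1]
    grid: &[Vec<f64>],         // indexed as [pid1][x1]
) -> Vec<Vec<f64>> {
    op.iter()
        .map(|op_pid0| {
            op_pid0
                .iter()
                .map(|op_x0| {
                    // contract over the process-scale indices (pid1, x1)
                    op_x0
                        .iter()
                        .zip(grid)
                        .map(|(op_pid1, grid_pid1)| {
                            op_pid1
                                .iter()
                                .zip(grid_pid1)
                                .map(|(o, g)| o * g)
                                .sum::<f64>()
                        })
                        .sum::<f64>()
                })
                .collect::<Vec<f64>>()
        })
        .collect()
}
```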
The amount of code is still massive (~300 lines), so can you try to split it a bit? E.g. …
@felixhekhorn yes, that'll definitely happen. I will probably write a different method for DIS, because the core operation can be optimized differently as well.
Maybe this is also the best option, but then I'd definitely complete double-hadronic first. The moment we have it, we can check how many "units" you have, and try to split them into functions. Then we may reuse the units to build a completely separate function for DIS (and maybe this will improve things, since we don't have to bloat one function to accommodate both). For splitting, I'd try to look for more sensible units, rather than just cutting pieces into "pre", "during" and "post", also because you might end up passing everything you create in one function to another function. I would also consider moving the whole business to a different file (…).
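To make the suggestion concrete, here is a purely hypothetical skeleton of what such "units" could look like; all names and signatures are invented for illustration and are not part of the actual implementation:

```rust
// Placeholder for whatever state the units would need to share.
struct EvolveCtx;

/// Collect the (pid, x, muf2) values actually used by the grid.
fn gather_grid_axes(_ctx: &EvolveCtx) -> (Vec<i32>, Vec<f64>, Vec<f64>) {
    todo!()
}

/// Pick the operator slices matching the grid's factorization scales.
fn select_operator_slices(_ctx: &EvolveCtx, _muf2: &[f64]) -> Vec<usize> {
    todo!()
}

/// The actual contraction for one bin and order, double-hadronic case.
fn contract_hadronic(_ctx: &EvolveCtx, _slices: &[usize]) -> Vec<f64> {
    todo!()
}

/// Separate entry point for DIS, reusing the same units where possible.
fn evolve_dis(_ctx: &EvolveCtx) -> Vec<f64> {
    todo!()
}
```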
Agreed.
Yes, that sounds like a good idea, …
Commit 58e8321 fixes the bug I mentioned above, and now I get the same results as produced by … In any case the numbers of …
Wow! That sounds almost too good to be true.
This process might be a bit of a special case, because it has very few initial states; but on the other hand there are some good reasons that I have long suspected: …
@alecandido @felixhekhorn
If the subset of factorization scales actually used were smaller than the set of all factorization scales provided by the operator, the evolution could possibly have been wrong.
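A small sketch (with made-up names, not the actual implementation) of the kind of consistency check implied here: every factorization scale used by the grid has to be covered by the operator, otherwise the evolution would silently be wrong.

```rust
/// Return `Err` with the first grid scale that is not covered by the operator.
fn check_scale_coverage(grid_muf2: &[f64], operator_muf2: &[f64]) -> Result<(), f64> {
    for &muf2 in grid_muf2 {
        // compare with a tolerance, since the scales are floating-point values
        if !operator_muf2.iter().any(|&op| (op - muf2).abs() < 1e-9 * muf2) {
            return Err(muf2); // this scale is missing from the operator
        }
    }
    Ok(())
}
```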
@andreab1997 you can have a look at the workflow at … What @cschwan has done from f91e7ca to 8d58423 is more or less what I was proposing to do in pineko:
I believe we can mirror this on that side :) The only improvement I would make is to add a shell script for the download to the repository (containing what is now in the workflow's download step), so that it is simple to reproduce the workflow environment locally (without having to copy-paste from the workflow).
I guess that would need to be fixed before merging ... As for the DIS grids listed above: it is not sufficient to just look at the difference between grid and FK table, since the observables also span a non-physical range. More precisely they include points below … That being said, I can do the JOIN also by hand, and most of them are indeed alright (so matching where they should, and differences where we expect them). However, looking at the output from @cschwan:

```
>>> NUTEV_CC_NB_FE_SIGMARED.tar
b    FkTable       Grid          rel. diff
--+-------------+-------------+-------------
0    3.6074998e0   5.8221893e0   6.1391255e-1
1    7.7797554e0   9.9638287e0   2.8073804e-1
2    1.1450347e1   1.3461273e1   1.7562146e-1
3    9.7389131e0   1.0754171e1   1.0424760e-1
4    8.8859954e0   8.2236821e0  -7.4534507e-2
5    2.0471413e1   1.9412808e1  -5.1711398e-2
6    1.7321241e1   1.6906333e1  -2.3953712e-2
7    1.7655144e1   1.7655176e1   1.8078417e-6
8    1.0827803e1   1.0827640e1  -1.5123138e-5
9    7.1845983e0   7.1846006e0   3.2141418e-7
10   2.3103035e1   2.3102408e1  -2.7160240e-5
11   1.3527970e1   1.3527618e1  -2.6071438e-5
```

and at the corresponding observables (https://github.com/NNPDF/runcards/blob/7f11afce4242791acad47d4c7be393e629b5121d/pinecards/NUTEV_CC_NB_FE_SIGMARED/observable.yaml#L61):

```yaml
observables:
  XSNUTEVCC_charm:
  - Q2: 0.7648485767999998
    x: 0.015
    y: 0.349
  - Q2: 2.1415760150399996
    x: 0.042
    y: 0.349
  - Q2: 3.671273168639999
    x: 0.072
    y: 0.349
  - Q2: 5.812849183679999
    x: 0.114
    y: 0.349
  - Q2: 10.554910359839997
    x: 0.207
    y: 0.349
  - Q2: 1.2689035127999997
    x: 0.015
    y: 0.579
  - Q2: 3.5529298358399997
    x: 0.042
    y: 0.579
  - Q2: 6.090736861439998
    x: 0.072
    y: 0.579
  - Q2: 9.643666697279999
    x: 0.114
    y: 0.579
  - Q2: 17.510868476639995
    x: 0.207
    y: 0.579
  - Q2: 1.7006375232
    x: 0.015
    y: 0.776
  - Q2: 4.76178506496
    x: 0.042
    y: 0.776
  - Q2: 8.163060111359998
    x: 0.072
    y: 0.776
  - Q2: 12.92484517632
    x: 0.114
    y: 0.776
  - Q2: 23.468797820159995
    x: 0.207
    y: 0.776
  - Q2: 1.4116504163999999
    x: 0.015
    y: 0.349
  - Q2: 3.95262116592
    x: 0.042
    y: 0.349
```
That makes a lot of sense. For the bin limits see test-output.txt. It seems that every bin with a factorization scale below the fitting scale is affected.
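For reference, the relative difference shown in the last column of the comparison table above is simply grid divided by FK table minus one. A minimal sketch, assuming the two sets of predictions are already available as slices of equal length (the actual convolution calls are not shown):

```rust
/// Per-bin relative difference: grid / fk_table - 1.
fn relative_differences(fk_table: &[f64], grid: &[f64]) -> Vec<f64> {
    fk_table
        .iter()
        .zip(grid)
        .map(|(fk, g)| g / fk - 1.0)
        .collect()
}
```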
I'm not sure which version you'd like to know; I simply copied the files from theory 208.
@andreab1997 @alecandido @felixhekhorn I consider this done and ready for merging into master. We've got integration tests and I tested theory 208 completely. Do you think there's anything missing?
What's clearly missing is that we don't test for evolved and scale-varied grids.
In newer EKO file formats the strong coupling is not stored anymore
The reason why the comparison fails is that there are bins with a factorization scale below the PDF's fitting scale.
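For illustration, a tiny sketch (not part of the PineAPPL API; names are made up) of how one could flag the affected bins, given their factorization scales:

```rust
/// Return the indices of bins whose factorization scale lies below the
/// fitting scale of the FK table, where differences are expected.
fn bins_below_fitting_scale(bin_muf2: &[f64], fitting_scale_muf2: f64) -> Vec<usize> {
    bin_muf2
        .iter()
        .enumerate()
        .filter(|&(_, &muf2)| muf2 < fitting_scale_muf2)
        .map(|(index, _)| index)
        .collect()
}
```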
I merged this branch into master. We'll need further tests, but for them I'm going to create a new issue.
We will need a release, and maybe also a deprecation plan for `Grid::convolute_eko`.
I'll remove `Grid::convolute_eko` in v0.6.0.
Ok, fine. But I'd like to have another intermediate release before v0.6.0, because we need to upgrade in Pineko, and it would be handy to be able to test there with both methods available.
I agree, I'm working on it right now!
@andreab1997 @alecandido @felixhekhorn

This pull request adds the method `Grid::evolve`, which is going to replace `Grid::convolute_eko`. The new method should be primarily faster, but (in the end) also easier to read and to maintain. It should also avoid using `unwrap`s and index access in general.

I also replaced `EkoInfo` with `OperatorInfo` (the name is up for debate), in which I removed the use of `GridAxes` and mainly changed the naming of the variables. Instead of naming some objects 'target' and 'source', which is terminology that's used differently in EKO and in `Grid::convolute_eko`, I instead use `0` for objects defined in the `FkTable`/at the fitting scale and `1` for objects defined in the `Grid`/at the process scale(s).

There's also a test added, which will not work since I didn't want to upload the EKO and the grid. If you'd like to test, simply place the grid `LHCB_WP_8TEV.pineappl.lz4` and the EKO into PineAPPL's main directory. You must untar the operator and also uncompress `alphas.npy` and `operators.npy`.

Right now there are a few limitations, which are documented in TODOs. I will remove them soon, of course: …

If you have comments, I'm happy to hear them!
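For readers unfamiliar with the `0`/`1` convention described above, here is a rough sketch of what such a metadata struct could contain. The field names follow the convention from the description but are assumptions for illustration, not necessarily the actual fields of `OperatorInfo`:

```rust
/// Illustrative sketch of evolution-operator metadata using the 0/1 naming:
/// `0` for quantities at the fitting scale (the `FkTable` side), `1` for
/// quantities at the process scale(s) (the `Grid` side).
struct OperatorInfoSketch {
    /// Squared factorization scale of the `FkTable` (fitting scale).
    fac0: f64,
    /// Particle identifiers at the fitting scale.
    pids0: Vec<i32>,
    /// x-grid at the fitting scale.
    x0: Vec<f64>,
    /// Squared factorization scales of the `Grid` (process scales).
    fac1: Vec<f64>,
    /// Particle identifiers at the process scale.
    pids1: Vec<i32>,
    /// x-grid at the process scale.
    x1: Vec<f64>,
}
```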