Reimplement Atlas WPWM and Z 13 TeV TOT #2207
Conversation
@@ -0,0 +1,25 @@
bins:
- k1:
You can remove the k1.
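For illustration only, a hedged sketch of what the filter could emit instead, with a descriptive variable name in place of the generic k1 (the name m_W2 and the nesting below are assumptions, not the PR's actual choice):

```python
import yaml

# Hypothetical replacement: name the kinematic variable instead of using "k1".
mw2 = 80.385**2
bins = [{"m_W2": {"min": None, "mid": mw2, "max": None}}]

with open("kinematics.yaml", "w") as f:
    yaml.dump({"bins": bins}, f)
```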
""" | ||
kin = [] | ||
|
||
mw2 = 80.385**2 |
Please put this as a module-level (or even filter_utils-level) variable, MW2 = ..., so that it can be modified for many datasets at once.
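A minimal sketch of what that could look like, assuming a shared constants module somewhere under filter_utils (the file name and location below are hypothetical):

```python
# nnpdf_data/filter_utils/physics_constants.py (hypothetical path)
# Electroweak boson masses collected in one place so every filter uses the
# same values and they can be updated for all datasets at once.
MW = 80.385   # W boson mass in GeV (value used by the legacy implementation)
MZ = 91.1876  # Z boson mass in GeV

MW2 = MW**2
MZ2 = MZ**2
```

A filter would then import MW2 from there rather than hard-coding 80.385**2.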
Sorry, why "special"? What's wrong with …
Dear @comane, as far as I understand, with this procedure you do take into account the correlation between the W+ and W- cross sections, but you do not include the correlation with the Z cross section. My suggestion (and I think this is what was done in the old buildmaster implementation) is to extend your procedure to

syst_cov = corr_matrix * np.outer(np.array([syst_wm, syst_wp, syst_z]), np.array([syst_wm, syst_wp, syst_z]))

This will construct the total systematic covariance matrix for the W-, W+ and Z cross sections together.
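As a self-contained sketch of the suggested construction (the numerical values below are placeholders, not the ATLAS numbers; in the filter they would come from the HEPData tables):

```python
import numpy as np

# Placeholder inputs: correlated systematic uncertainty of each measurement
# (W-, W+, Z) and the 3x3 correlation matrix among the three cross sections.
syst_wm, syst_wp, syst_z = 0.012, 0.011, 0.009
corr_matrix = np.array([[1.00, 0.93, 0.88],
                        [0.93, 1.00, 0.90],
                        [0.88, 0.90, 1.00]])

# Total systematic covariance matrix across W-, W+ and Z.
syst = np.array([syst_wm, syst_wp, syst_z])
syst_cov = corr_matrix * np.outer(syst, syst)
```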
@enocera thank you for the detailed explanation.
Hi @comane, is this PR ready for review?
Yes it is. As written in the PR comment, the new implementation agrees with the old one apart from the t0 covariance matrices.
I see. However, I remember a similar situation where the uncertainties were nonetheless multiplicative. In other words, expressing the uncertainties in absolute value is not a sufficient condition for them to be additive. I might be wrong though, so I summon @enocera.
Yes, this might be the case. Perhaps @enocera remembers this detail!
Ok, I left a few comments and some questions that we can discuss.
Resolved review comments (outdated) on nnpdf_data/nnpdf_data/commondata/ATLAS_WPWM_13TEV/metadata.yaml
yaml.add_representer(float, prettify_float)

MZ2 = 91.1876**2
I am starting to wonder whether we should use a common source for these parameters. I usually take the min and max values of the mass as indicated in the paper and then take the mean value. Honestly, I don't know which is better. It's not a big issue though, as this is only used for the x, Q2 map. What do you think?
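For reference, a rough sketch of how the mass value enters the x, Q2 map for a total cross section; the y = 0 (central rapidity) choice is an assumption made here only to keep the example short:

```python
import numpy as np

MZ2 = 91.1876**2   # Z boson mass squared in GeV^2
SQRTS = 13000.0    # centre-of-mass energy in GeV

# For a total cross section the scale is set by the boson mass, and a nominal
# x follows from x = (M / sqrt(s)) * exp(+-y) evaluated at y = 0.
Q2 = MZ2
x = np.sqrt(MZ2) / SQRTS
```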
In this case I literally just took the value that was being used in the legacy version.
But this is a good point. I agree with you that it would be nice to collect these values/constants in a file (e.g. filter_utils) so as to use them in a consistent way.
Sorry, I thought I had already replied to this.
I think it's a good idea in principle.
Here I really just took the value that was already used in the legacy version for the kinematics.
General considerations. The way in which an experimental uncertainty is presented (absolute or percentage) is not an indication of its additive or multiplicative nature. Additive uncertainties can be presented as absolute or percentage values, not necessarily only as absolute values; likewise, multiplicative uncertainties can be presented as absolute or percentage values, not necessarily only as percentage values. If you are undecided whether an uncertainty is additive or multiplicative, the NNPDF convention is to set it to multiplicative, the reason being that, because of the D'Agostini bias, it is worse to treat as additive an uncertainty which is actually multiplicative than the other way round. If you have artificial systematic uncertainties, determined from the decomposition of a covariance matrix, these must instead be additive, because otherwise the original covariance matrix cannot be reproduced, because of the t0 prescription.

Specific considerations. As far as I understand, in the data set under discussion you have three uncertainties: the statistical uncertainty (stat), a correlated systematic uncertainty (sys_1), and the luminosity uncertainty (sys_2). The correlated systematic uncertainty must be combined with the correlation matrix to generate a covariance matrix, which is decomposed into additive artificial systematic uncertainties, as I explained above; the luminosity uncertainty should be implemented separately, treated as multiplicative and 100% correlated. I understand that this is what you did.
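A minimal sketch of that prescription; the symmetric (eigenvalue) decomposition used here is one common way to obtain artificial systematics and is not necessarily the exact routine used in the filter, and all numbers are placeholders:

```python
import numpy as np

# Placeholder systematic covariance matrix (in the filter this is the
# syst_cov built from the correlation matrix, as discussed above).
syst = np.array([0.012, 0.011, 0.009])
corr = np.array([[1.00, 0.93, 0.88],
                 [0.93, 1.00, 0.90],
                 [0.88, 0.90, 1.00]])
syst_cov = corr * np.outer(syst, syst)

# Additive artificial systematics: columns of a matrix A with A @ A.T == syst_cov.
eigvals, eigvecs = np.linalg.eigh(syst_cov)
art_sys = eigvecs * np.sqrt(np.clip(eigvals, 0.0, None))

# The luminosity uncertainty is kept separate, multiplicative and 100%
# correlated across the three points (placeholder values).
lumi_rel = 0.021
central_values = np.array([1.0, 1.0, 1.0])
lumi_sys = lumi_rel * central_values
```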
Ok, I think this is ready to merge.
Implementation agrees with legacy (t0-covmat included):
Z TOT Benchmarks
(this branch) https://vp.nnpdf.science/edlFDxSrRoK3stoHz-HtdQ==
(master) https://vp.nnpdf.science/JhZHlSMRQMigrmyn1OOcLg==
WPWM TOT Benchmarks
(this branch) https://vp.nnpdf.science/HM0ylMDJSvGocbHKjowGIQ==
(master) https://vp.nnpdf.science/MX4e1EQaQrafwPSwXihhTw==