Fast, Inaccurate Mode #1056
One thing that should be possible once we have full likelihoods is to then follow the same procedure as CMS by providing "background-integrated" covariance matrices for the various regions. We need some of the error-propagation machinery also needed by @alexander-held and @ntadej, which has to go into pyhf, but at that point this can be very fast at the cost of precision.
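As an illustration of the idea (this is not pyhf code; the function name, arguments, and numbers are all hypothetical), a "background-integrated" covariance matrix reduces each set of regions to a single multivariate-Gaussian term, so evaluating the likelihood becomes a small linear-algebra problem:

```python
import numpy as np

def simplified_nll(mu, n_obs, signal, bkg, cov):
    """Negative log-likelihood (up to a constant) of a Gaussian
    approximation: observed counts n_obs are compared to mu*signal + bkg,
    with all background nuisances enveloped by the covariance matrix cov.
    Illustrative sketch only, not a pyhf API."""
    resid = n_obs - (mu * signal + bkg)
    cov_inv = np.linalg.inv(cov)
    return 0.5 * resid @ cov_inv @ resid

# Two toy bins with unit (uncorrelated) covariance:
n_obs = np.array([11.0, 10.0])
signal = np.array([1.0, 0.0])
bkg = np.array([10.0, 10.0])
cov = np.eye(2)
```

This trades the full profile over individual nuisances for one matrix inversion per evaluation, which is where the speed-up at the cost of precision would come from.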
On Tue, 8 Sep 2020 at 18:51, Lukas wrote:
By background-integrated covariance matrices, you have in mind one big Gaussian that "envelops" all nuisances, right? Yes, I was also thinking in that direction. In this case it would essentially be what CMS calls a simplified likelihood, discussed e.g. here: https://inspirehep.net/literature/1694152. It might result in nice spin-offs, like a good initialization algorithm for e.g. maximizing the full likelihood, etc.
Wolfgang
Ah, Sabine reminded me of one more thing. If you go for this Gaussian-envelope approach, then we strongly recommend that you add a skewness term, or something similar, to account for asymmetries. From our experience, a Gaussian alone is too crude a model.
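To make the point about asymmetries concrete, here is a small numerical sketch (all numbers are made up, not from any real analysis): a quadratic dependence of a background yield on a unit-Gaussian nuisance, of the kind used in skew-corrected simplified likelihoods, produces a skewed yield distribution that a pure Gaussian envelope cannot represent:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal(1_000_000)

# Background yield responding quadratically to a unit-Gaussian nuisance;
# the c * theta**2 term generates the asymmetry (skew).
# a, b, c are illustrative numbers only.
a, b, c = 100.0, 10.0, 2.0
yields = a + b * theta + c * theta ** 2

mean = yields.mean()
skew = ((yields - mean) ** 3).mean() / yields.std() ** 3
```

Analytically the mean is a + c = 102 and the skewness is positive (about 1.1 for these numbers); setting c = 0 recovers a symmetric distribution with zero skew, i.e. the "too crude" pure-Gaussian model.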
Yes, it was always the plan that once we have the full likelihoods, the community could/would be able to develop lossy versions of them. To some extent
Hi, may I ask what "prune" is really doing? Does it literally remove uncertainty contributions (which is not what we want), or integrate over them?
The docs on
OK, thanks. So this is indeed not what we want for a fast mode. Instead of having things trimmed off, we'd want them to be profiled or marginalized over. That goes in the direction of the "Gaussian envelope" Wolfgang was mentioning above, but without losing relevant asymmetries (i.e. keeping the skew).
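A toy example of the difference (pure Python; the model and all numbers are made up for illustration): with a quadratic NLL, the profile over a Gaussian-constrained background nuisance is analytic, whereas simply dropping ("pruning") the nuisance understates the uncertainty on mu:

```python
# Toy chi-square in (mu, b): signal strength mu plus one background
# nuisance b constrained by an auxiliary measurement around b0.
# "Pruning" fixes b = b0; profiling minimises over b for each mu.
# Illustrative numbers only.
n, s, b0 = 112.0, 10.0, 100.0
var_n, var_b = 9.0, 16.0

def nll_pruned(mu):
    # Nuisance removed: b frozen at its nominal value b0.
    return (n - mu * s - b0) ** 2 / (2.0 * var_n)

def nll_profiled(mu):
    # For a quadratic NLL the profile over b is analytic: the two
    # Gaussian terms combine into one with variance var_n + var_b.
    return (n - mu * s - b0) ** 2 / (2.0 * (var_n + var_b))
```

Both curves have their minimum at the same mu, but the profiled one is shallower, so the inferred interval on mu is wider; pruning silently discards that extra uncertainty, which is why profiling or marginalizing is what a fast mode should aim for.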
Well, of course nuisances that are really "irrelevant" from a fast-mode point of view may also be removed. I just want to say that it would be nice to go a bit further.
This feedback and discussion is great so far, thanks. I think, given the timescale, that we want to get
Description
We (SModelS) would like to propose some kind of fast mode for pyhf, sacrificing O(10%) accuracy on the likelihoods in exchange for a gain in speed. We would be interested in such a feature, and we hope that use cases other than ours would also benefit from such a mode. We are thinking of use cases where accuracy is not the top priority. Anyway, just a proposal; please think about it! Cheers, Wolfgang, for SModelS