
MHOU implement scheme B+C #6

Closed
1 of 3 tasks
felixhekhorn opened this issue Feb 19, 2021 · 6 comments · Fixed by #19


felixhekhorn commented Feb 19, 2021

We should implement schemes B and C of https://inspirehep.net/literature/1741422

  • scheme B: take the fact scale variation from evolution (of course the ren scale stays in the processes)
    • the fact scale variation should be resummed (exponentiated) or not (expanded) according to the chosen evolution mode, see the EKO docs
  • for scheme B pineko needs to request the central scale (and not the shifted one, as it is doing at the time of writing); effectively eko takes full care of the SV
  • scheme C: both scale variations come from the processes

Neither scheme involves refitting (they are defined that way).


cschwan commented Mar 14, 2022

I hope this is the right place to ask this question: what do we need for a fit with theory uncertainties from the grid side and/or from pineko and EKO?

If my understanding is correct, lines like this one in apfelcomb were used to approximately reconstruct the scale-varied FK tables. Do we have something similar in EKO, or do we not need this?

@alecandido
Member

Just to write down what we discussed yesterday about scale-variation reconstruction.

Ren scale variations

If you check https://inspirehep.net/literature/1741422, sections 3.2.1 and 3.2.3 fully spell out how to derive the ren scale variation for DIS and double-hadronic processes, based on the perturbative expansion of the coefficient functions and of the beta function of alphas.

This is not that hard to reproduce, and we can add it on top of PineAPPL: given a grid without renormalization logs, we can fill them in algorithmically, based on the pure QCD orders.
For mixed EW orders I guess it might be a bit more complicated, but most likely, if a single scale is used for the two couplings (not mandatory, but frequent, and we can always restrict to this case), we can rederive an analogous formula.
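At NLO the algorithmic filling amounts to one standard RG identity. A minimal sketch in Python (the helper name and signature are hypothetical, not an actual PineAPPL function): for an observable sigma = a_s^n0 c0 + a_s^(n0+1) c1 + ..., expanding a_s(Q2) in a_s(muR2) gives c1(muR) = c1 + n0 beta0 L c0 with L = ln(muR2/Q2).

```python
import math


def beta0(nf: int) -> float:
    """LO QCD beta coefficient in the a_s = alpha_s / (4 pi) normalization."""
    return 11.0 - 2.0 / 3.0 * nf


def ren_scale_varied_coeffs(c0, c1, n0, xir2, nf):
    """Return (c0', c1') at muR2 = xir2 * Q2, reconstructed from the
    central coefficients alone (pure QCD, NLO truncation)."""
    L = math.log(xir2)  # L = ln(muR2 / Q2)
    # c0 carries no explicit muR log; c1 picks up the compensating term
    return c0, c1 + n0 * beta0(nf) * L * c0


# Example: a process whose expansion starts at order a_s^2 (n0 = 2), nf = 5
c0p, c1p = ren_scale_varied_coeffs(c0=1.0, c1=10.0, n0=2, xir2=4.0, nf=5)
```

Higher orders work the same way but also need beta1 and products of lower-order coefficients, exactly as spelled out in the paper.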

Fact scale variations

This is a bit more complicated, since these come from DGLAP (instead of the coupling running), and thus they have a non-trivial flavor structure.

If my understanding is correct, lines like this one in apfelcomb were used to approximately reconstruct the scale-varied FK tables. Do we have something similar in EKO, or do we not need this?

The derivative is used in formula 3.37 of the aforementioned paper, but a derivative in the fact scale can always be traded for a derivative in alphas (plus beta coefficients), and a derivative in alphas can always be obtained from the perturbative series, manipulating the series with the same coefficients.
This is what is done in 3.39, and in principle we could use that to obtain fact scale variations for PineAPPL whenever they are lacking.
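Schematically, the manipulation behind eq. 3.39 looks as follows (a sketch in Mellin space, with sign conventions that may differ from the paper's):

```latex
% DGLAP and the running coupling, with t = \ln(\mu^2/\Lambda^2):
\frac{\mathrm{d} f(t)}{\mathrm{d} t} = -\gamma(a_s)\, f(t),
\qquad
\frac{\mathrm{d} a_s}{\mathrm{d} t} = \beta(a_s) = -\beta_0 a_s^2 + \dots
% A t-derivative of any perturbative series S(a_s) is an a_s-derivative:
\frac{\mathrm{d}\, S(a_s(t))}{\mathrm{d} t} = \beta(a_s)\, \frac{\mathrm{d} S}{\mathrm{d} a_s}
% Expanding the PDF around the central scale, t_f = t + \ln\kappa_f:
f(t + \ln\kappa_f)
  = \Big[\,\mathbb{1} - a_s\, \gamma^{(0)} \ln\kappa_f + O(a_s^2)\,\Big]\, f(t)
```

The last line is where the anomalous dimension coefficients enter, which is exactly the ingredient discussed below.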

In practice, here you need not only the beta function series coefficients, but also the series coefficients of the anomalous dimensions (and the easiest way to get them at this point is querying EKO), and these mix flavors in a pretty non-trivial way (they are simple in the evolution basis, but messy in the flavor basis, which is the more common one for processes).
So my advice is to give up on reconstruction here: when using scheme B we won't even use the fact scale logs in the grids, since the variation will affect evolution directly (while for scheme C the fact scale logs would be required in the grids).

@felixhekhorn
Contributor Author

To do this we need NNPDF/pineappl#133 first


felixhekhorn commented Apr 8, 2022

While working on NNPDF/pineappl#138 I realized I was too naive in my expectations of SV in pineko:
@alecandido @andreab1997 please take a look below at my considerations and tell me whether you agree.

Let's take F2(Q2) as an example (all x, z, x/z dependence is dropped in the following, and * denotes convolution).

  • in any fixed-order calculation the prediction depends on the factorization and renormalization scales: F2(Q2) -> F2(Q2,muR2,muF2)
  • the prediction is given by F2(Q2,muR2,muF2) = f(muF2) * c(Q2,muR2,muF2)
  • now, the default is muF2=muR2=Q2 and hence: F2(Q2,muR2=Q2,muF2=Q2) = f(muF2=Q2) * c(Q2,muR2=Q2,muF2=Q2) = f(Q02) * E(Q02->Q2,1) * c(Q2,muR2=Q2,muF2=Q2), where the second argument of the eko is "fact_to_ren" (see the discussion below)
  • scheme C is [MHOU, eq. 3.46]: the PDF scale is varied, but not the evolution, and the compensation happens in the coefficient function
  • scheme B is [MHOU, eq. 3.42] (note that we can't do [MHOU, eq. 3.44]): the text says the PDF scale is also varied, but looking at [MHOU, eq. 3.43] I'm not sure ... the derivative is DGLAP, and hence the variation is proportional to the starting point; the coefficient function is always evaluated at muF2=Q2
  1. muF2=2Q2=muR2:
  • in scheme C this should be: F2@c(Q2,muR2=2Q2,muF2=2Q2) = f(muF2=2Q2) * c(Q2,muR2=2Q2,muF2=2Q2) = f(Q02) * E(Q02->2Q2,1) * c(Q2,muR2=2Q2,muF2=2Q2)
  • in scheme B this should be: F2@b(Q2,muR2=2Q2,muF2=2Q2) = f'(muF2=Q2) * c(Q2,muR2=2Q2,muF2=Q2) = f(Q02) * E(Q02->Q2,2) * c(Q2,muR2=2Q2,muF2=Q2)
  2. muF2=2Q2, muR2=Q2/2:
  • in scheme C this should be: F2@c(Q2,muR2=Q2/2,muF2=2Q2) = f(muF2=2Q2) * c(Q2,muR2=Q2/2,muF2=2Q2) = f(Q02) * E(Q02->2Q2,1) * c(Q2,muR2=Q2/2,muF2=2Q2), hence the same eko as in 1
  • in scheme B this should be: F2@b(Q2,muR2=Q2/2,muF2=2Q2) = f'(muF2=Q2) * c(Q2,muR2=Q2/2,muF2=Q2) = f(Q02) * E(Q02->Q2,2) * c(Q2,muR2=Q2/2,muF2=Q2), hence the same eko as in 1
  3. muF2=Q2/2, muR2=Q2/2:
  • in scheme C this should be: F2@c(Q2,muR2=Q2/2,muF2=Q2/2) = f(muF2=Q2/2) * c(Q2,muR2=Q2/2,muF2=Q2/2) = f(Q02) * E(Q02->Q2/2,1) * c(Q2,muR2=Q2/2,muF2=Q2/2)
  • in scheme B this should be: F2@b(Q2,muR2=Q2/2,muF2=Q2/2) = f''(muF2=Q2) * c(Q2,muR2=Q2/2,muF2=Q2) = f(Q02) * E(Q02->Q2,1/2) * c(Q2,muR2=Q2/2,muF2=Q2)
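The bookkeeping in the three cases above can be condensed into a tiny sketch (the function name and signature are mine for illustration, not pineko's actual API): scheme C asks for an EKO to the shifted scale with no internal SV, while scheme B asks for an EKO to the central scale with the variation done inside evolution via the ratio here called "fact_to_ren". Note that muR2 never enters the choice of eko in either scheme.

```python
def eko_request(scheme: str, q2: float, xif2: float):
    """Return (target_scale2, fact_to_ren) for the EKO computation,
    given the process-side ratio xif2 = muF2 / Q2."""
    if scheme == "C":
        # shifted evolution target, no SV inside evolution: E(Q02 -> xif2*Q2, 1)
        return xif2 * q2, 1.0
    if scheme == "B":
        # central evolution target, SV inside evolution: E(Q02 -> Q2, xif2)
        return q2, xif2
    raise ValueError(f"unknown scheme: {scheme}")


# Case 1 above (muF2 = 2 Q2): scheme C evolves to 2*Q2, scheme B to Q2
req_c = eko_request("C", q2=10.0, xif2=2.0)
req_b = eko_request("B", q2=10.0, xif2=2.0)
```

This also makes case 3 explicit: for muF2 = Q2/2, scheme B requests E(Q02->Q2, 1/2).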

Now, I believe fact_to_ren is really a bad name, since it is not the ratio between the renormalization scale of the process and the respective factorization scale, but the ratio between the scales inside DGLAP. @andreab1997 if my understanding is correct, then I believe your current version of the NLO theories is again not correct, since my examples do not contain the case fact_to_ren=4; do you agree? Moreover, you should only use one or the other, i.e. in scheme B only fact_to_ren and never XIF, so e.g. https://github.com/NNPDF/fktables/blob/main/data/theory_7.yaml is also wrong. Introducing a new scale was correct and needed, but the name is really bad. I don't have any great suggestions on how to improve it, but e.g. XIF -> DIS_XIF, XIR -> DIS_XIR, fact_to_ren -> EVOL_XIF.

@alecandido
Member

  • the text says here the PDF scale is also varied, but looking at [MHOU, eq. 3.43] I'm not sure

This I don't understand: [MHOU, eq. 3.43] is consistent with the text; it evaluates the PDF at t_f, so the scale is varied.

Now, I believe fact_to_ren is really a bad name since it is not the ratio between the renormalization scale of the process and the respective factorization scale, but it is the ratio between the scales inside DGLAP.

I've never written it down, but this is somewhat close to what I have suspected for some time. Even the definition you keep repeating, that the renormalization scale is the scale of a_s, doesn't make much sense: inside DGLAP the a_s evolution is used for purely internal purposes, so it should not be sensitive to renormalization scale variations (indeed muR is always varied inside the coefficient function).
The renormalization scale is the scale at which you renormalize your perturbative calculation while computing the process (the scale of the MSbar renormalization conditions), so it is always on the process side. Conversely, it is correct that the factorization scale is the scale of the PDFs, since it is the scale you choose in the factorization theorem, and your PDFs only appear at that scale, so that is just fine.


alecandido commented Apr 8, 2022

About names, let's do some more brainstorming: we need

  • better names for the theory database, prefixed when required
  • better names for the runcards, but I don't want to choose exactly the same names as in the theory database (there PTO will remain PTO, but I'd like something more meaningful, even if I need to type a few more letters)
    • moreover, as already said several times, I'd like to stop including in our theory cards parameters that are completely unneeded by the program
    • and I'd even like to change where some parameters live (without affecting the database): in the eko runcards Q0 should be squared in the first place and put beside Q2grid in the operator card, since it's not really a theory parameter (while we do need it in the theory database, so there it will stay where it is, at least until a further major revision, which is not going to happen soon, and maybe never)

And let's start using good names everywhere: no more Q2 in eko, not even in the runcards, but only muf2 (and I'd like to replace xgrid by zgrid, but this might be harder...).
