
SMEFT 4-fermion corrections #306

Draft · wants to merge 8 commits into master
Conversation

@ElieHammou (Collaborator) commented Oct 21, 2024

Implementation of the semi-leptonic SMEFT 4-fermion operator corrections.

The goal is to implement equations (16) to (19) of https://arxiv.org/pdf/2204.07557
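For orientation, the factor eta_ph4F introduced in the draft code reviewed below has the form $\frac{C_{4F}}{4\pi\alpha}\,\frac{Q^2}{\Lambda^2}$, where $\Lambda$ is the BSM scale; this restates the code under review, not eqs. (16) to (19) of the paper themselves.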

@ElieHammou requested review from felixhekhorn and removed the request for felixhekhorn, October 21, 2024 12:53
@felixhekhorn added the physics (physics features) label, Oct 22, 2024
Comment on lines 303 to 309
# Need to specify Wilson coefficient?
C_4F = 1/4
# Modify with more precise value
alpha = 1 / 137
# Should get it from param card
BSM_scale = 1000
eta_ph4F = (C_4F / (4 * np.pi * alpha)) * (Q2 / BSM_scale**2)
@felixhekhorn (Contributor) commented Oct 22, 2024
I guess all three parameters should come from the outside? Then I wonder if we should condense them all into one $P^2 = \Lambda^2 \cdot 4\pi \alpha /C_{4F}$ (the name is arbitrary), such that eta_ph4F = Q2 / P2

@ElieHammou (Collaborator, Author) commented Oct 22, 2024

In practice there is a redundancy between $C_{4F}$ and $\Lambda$, since we only consider dimension-6 operators, so typically only $\Lambda$ would be defined in the runcard.
You are right that it could make sense to condense the prefactor $P = C_{4F} / (4\pi\alpha)$, but the ratio $Q^2 / \Lambda^2$ is the kind of quantity that is intuitive to read directly, I think.

@felixhekhorn (Contributor)

I would say this depends on whether there is another dependency on either of these quantities; if there is not, I think we should collapse them to avoid an ambiguous configuration.
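As an illustration of the trade-off discussed above, a minimal sketch; the values and names such as Lambda2 and P2 are placeholders, not the actual yadism runcard keys:

import numpy as np

# placeholder inputs; in yadism these would come from the runcard / param card
C_4F = 1.0            # Wilson coefficient (redundant with Lambda for dimension-6 operators)
alpha = 1 / 137       # electromagnetic coupling
Lambda2 = 1000.0**2   # [GeV^2] BSM scale squared
Q2 = 100.0            # [GeV^2] photon virtuality

# option 1: keep the explicit Q^2 / Lambda^2 ratio, as in the current draft
eta_ph4F = (C_4F / (4 * np.pi * alpha)) * (Q2 / Lambda2)

# option 2: collapse everything into a single scale P^2 = Lambda^2 * 4*pi*alpha / C_4F
P2 = Lambda2 * 4 * np.pi * alpha / C_4F
assert np.isclose(eta_ph4F, Q2 / P2)  # identical result, one fewer configuration knob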

@felixhekhorn (Contributor)

Can you please add a link to the paper here in the head of the PR? We also need to add something to the documentation; I'm not 100% sure yet where, but maybe we just make a new page "SMEFT" (or similar) or we add it to the misc page https://yadism.readthedocs.io/en/latest/theory/misc.html ?

@felixhekhorn (Contributor) left a comment

src/yadism/coefficient_functions/coupling_constants.py (3 outdated review comments, resolved)

# BSM couplings -------------------------------------------------------
# Temporary couplings to Olq3
self.BSM_coupling_vectorial = {21: 0}
@felixhekhorn (Contributor)

I wonder is "BSM" too generic? i.e. should we be more specific, like e.g. "4F"*? please tell me! because from my non-existent BSM experience I only know there are many BSM models 🙃

*Python variables can not start with a number, but we can solve that problem 🙃

@ElieHammou (Collaborator, Author)

Indeed, you are right! I realise I have already started mixing up the BSM and 4F labels; 4F is the more precise one and I'll try to stick with it.

@giacomomagni (Collaborator)

I'd even be more specific and use Olq3, as you can have multiple 4F operators right?

@ElieHammou (Collaborator, Author)

The test failure makes sense since I did not modify the _fl11 functions. I can adapt that. Are they used for the asy model?

However, I have run a real-life test and was able to generate grids without errors with the code as it is, but when I checked the predictions convolved with NNPDF4.0 I got zeros everywhere, so something is off...

@ElieHammou (Collaborator, Author)

Actually, I have the same problem with the master branch, using a pipeline that was successfully tested with the pip-installed version. Any idea what is going on?

@felixhekhorn (Contributor)

> The test failure makes sense since I did not modify the _fl11 functions. I can adapt that.

yes, of course we need to adjust all the other calls

> Are they used for the asy model?

the "fl11" is another flavour combination which opens at N3LO and which is not tied to a specific "heaviness"

> However, I have run a real-life test and was able to generate grids without errors with the code as it is, but when I checked the predictions convolved with NNPDF4.0 I got zeros everywhere, so something is off...

> Actually, I have the same problem with the master branch, using a pipeline that was successfully tested with the pip-installed version. Any idea what is going on?

Mmm, can you give a bit more context? Like the theory and observable cards; also, we don't really need the pipeline - yadism on its own is good enough to compute an observable (of course you can also do it through pinefarm). Stating the obvious: start with the simplest options everywhere except the new feature (so LO, no IC, no SV, etc.)

@ElieHammou (Collaborator, Author)

The thing that confuses me is that the issue seems to come from the Python environment I use rather than from the cards. I have two environments: yadism, installed with pip, and yadism_dev, installed from source with poetry. With the same observable and theory cards I get non-zero predictions with yadism and only zeros with yadism_dev (even on the master branch).

The observable cards look like this:

NCPositivityCharge: null
PolarizationDIS: 0.0
ProjectileDIS: electron
PropagatorCorrection: 0.0
TargetDIS: proton
interpolation_is_log: true
interpolation_polynomial_degree: 4
interpolation_xgrid:
- 1.9999999999999954e-07
- 3.034304765867952e-07
- 4.6035014748963906e-07
- 6.984208530700364e-07
- 1.0596094959101024e-06
- 1.607585498470808e-06
- 2.438943292891682e-06
- 3.7002272069854957e-06
- 5.613757716930151e-06
- 8.516806677573355e-06
- 1.292101569074731e-05
- 1.9602505002391748e-05
- 2.97384953722449e-05
- 4.511438394964044e-05
- 6.843744918967896e-05
- 0.00010381172986576898
- 0.00015745605600841445
- 0.00023878782918561914
- 0.00036205449638139736
- 0.0005487795323670796
- 0.0008314068836488144
- 0.0012586797144272762
- 0.0019034634022867384
- 0.0028738675812817515
- 0.004328500638820811
- 0.006496206194633799
- 0.009699159574043398
- 0.014375068581090129
- 0.02108918668378717
- 0.030521584007828916
- 0.04341491741702269
- 0.060480028754447364
- 0.08228122126204893
- 0.10914375746330703
- 0.14112080644440345
- 0.17802566042569432
- 0.2195041265003886
- 0.2651137041582823
- 0.31438740076927585
- 0.3668753186482242
- 0.4221667753589648
- 0.4798989029610255
- 0.5397572337880445
- 0.601472197967335
- 0.6648139482473823
- 0.7295868442414312
- 0.7956242522922756
- 0.8627839323906108
- 0.9309440808717544
- 1
observables:
  XSHERANC:
  - Q2: 2.0
    x: 2.0e-06
    y: 0.7143
  - Q2: 2.0
    x: 5.0e-06
    y: 0.2857
  - Q2: 2.0
    x: 8.5e-06
    y: 0.1681
  - Q2: 2.0
    x: 2.0e-05
    y: 0.0714
  - Q2: 2.0
    x: 5.0e-05
    y: 0.0286
  - Q2: 2.0
    x: 8.5e-05
    y: 0.0168
  - Q2: 2.0
    x: 0.0002
    y: 0.0071
  - Q2: 2.0
    x: 0.0005
    y: 0.0029
  - Q2: 2.0
    x: 0.00085
    y: 0.0017
  - Q2: 5.0
    x: 5.0e-06
    y: 0.7143
  - Q2: 5.0
    x: 8.5e-06
    y: 0.4202
  - Q2: 5.0
    x: 2.0e-05
    y: 0.1786
  - Q2: 5.0
    x: 5.0e-05
    y: 0.0714
  - Q2: 5.0
    x: 8.5e-05
    y: 0.042
...
  - Q2: 500000.0
    x: 0.69999999
    y: 0.5102
  - Q2: 500000.0
    x: 0.80000001
    y: 0.4464
prDIS: NC

And here is the theory one:

  # QCD perturbative order
  PTO: 2  # perturbative order in alpha_s: 0 = LO (alpha_s^0), 1 = NLO (alpha_s^1) ...

  # SM parameters and masses
  CKM: "0.97428 0.22530 0.003470 0.22520 0.97345 0.041000 0.00862 0.04030 0.999152"  # CKM matrix elements
  GF: 1.1663787e-05  # [GeV^-2] Fermi coupling constant
  MP: 0.938  # [GeV] proton mass
  MW: 80.398  # [GeV] W boson mass
  MZ: 91.1876  # [GeV] Z boson mass
  alphaqed: 0.007496252  # alpha_em value
  kcThr: 1.0  # ratio of the charm matching scale over the charm mass
  kbThr: 1.0  # ratio of the bottom matching scale over the bottom mass
  ktThr: 1.0  # ratio of the top matching scale over the top mass
  mc: 1.51  # [GeV] charm mass
  mb: 4.92  # [GeV] bottom mass
  mt: 172.5  # [GeV] top mass

  # Flavor number scheme settings
  FNS: "FFNS"  # Flavour Number Scheme, options: "FFNS", "FFN0", "ZM-VFNS"
  NfFF: 4  # (fixed) number of running flavors, only for FFNS or FFN0 schemes
  Q0: 1.65  # [GeV] reference scale for the flavor patch determination
  nf0: 4  # number of active flavors at the Q0 reference scale

  # Alphas settings and boundary conditions
  Qref: 91.2  # [GeV] reference scale for the alphas value
  nfref: 5  # number of active flavors at the reference scale Qref
  alphas: 0.118  # alphas value at the reference scale
  MaxNfAs: 5  # maximum number of flavors in running of strong coupling
  QED: 0  # QED correction to running of strong coupling: 0 = disabled, 1 = allowed

  # Scale Variations
  XIF: 1.0  # ratio of factorization scale over the hard scattering scale
  XIR: 1.0  # ratio of renormalization scale over the hard scattering scale

  # Other settings
  IC: 1  # 0 = perturbative charm only, 1 = intrinsic charm allowed
  TMC: 1  # include target mass corrections: 0 = disabled, 1 = leading twist, 2 = higher twist approximated, 3 = higher twist exact
  n3lo_cf_variation: 0  # N3LO coefficient functions variation: -1 = lower bound, 0 = central , 1 = upper bound

  # Other EKO settings, not relevant for Yadism
  HQ: "POLE"  # heavy quark mass scheme (not yet implemented in yadism)
  MaxNfPdf: 5  # maximum number of flavors in running of PDFs (ignored by yadism)
  ModEv: "EXA"  # evolution solver for PDFs (ignored by yadism)

I generate the grids with this script:

import warnings

import yadism
from yadbox.export import dump_pineappl_to_file

# Assuming observable_cards is a dictionary where each key is an observable name
# and each value is the corresponding observable card
predictions = {}  # Dictionary to store the results for each observable

for observable, observable_card in observable_cards.items():
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # skip noisy warnings
        out = yadism.run_yadism(theory_card, observable_card)
        predictions[observable] = out  # Store the result using the same key
        dump_pineappl_to_file(out, f"{output_directory}/{observable}_BSM_test.pineappl.lz4", renormalisation)

@ElieHammou (Collaborator, Author)

Ok, I figured out that yadism is actually able to generate the predictions fine. The issue arises when they are dumped to pineappl grids.

This script gives me predictions which agree with what I had in my yadism environment:

import json

import lhapdf

# load the PDF set
pdf = lhapdf.mkPDF("NNPDF40_nnlo_as_01180")

values = out.apply_pdf(pdf)

print(json.dumps(values, sort_keys=False, indent=4))

Something goes wrong when I do:

dump_pineappl_to_file(out, f"{output_directory}/{observable}_test_source_new.pineappl.lz4", renormalisation)

I guess the issue comes from a conflict with the pineappl version. I just pip-installed it here and got version 0.8.6. How would you recommend installing it when installing yadism with poetry?

@felixhekhorn (Contributor)

> How would you recommend installing it when installing yadism with poetry?

The recommended way is to make use of the standard Python "extras" (see e.g. here). In this case the extra is called "box", so

poetry install -E box

should do the trick.

> I guess the issue comes from a conflict with the pineappl version. I just pip-installed it here and got version 0.8.6.

however, the above command should pull in a similar version again (meaning the latest of the 0.8 series), so I don't know if that would help. Let me ping @giacomomagni: did you run yadism recently (e.g. for the polarized stuff)? Which pineappl version did you use?

> Ok, I figured out that yadism is actually able to generate the predictions fine.

that at least is good to hear 🙃

@giacomomagni (Collaborator) commented Oct 23, 2024

So if I recall correctly, versions above 0.7.5 should work with the current yadism.
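(Aside: a quick way to check which pineappl version a given environment actually resolved, independent of how it was installed:

python -m pip show pineappl
)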



projectile_v = self.vectorial_coupling(abs(projectile_pid))
projectile_a = self.weak_isospin_3[abs(projectile_pid)]
projectile_BSM_v = self.BSM_coupling_vectorial[abs(projectile_pid)]
@giacomomagni (Collaborator)

Maybe for consistency you should also rename projectile_v to projectile_Z_v, and the same for _a.
Or maybe this has to be generalized even more if you have multiple operators...
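For illustration, the renaming suggested above could look like this; the attribute name Olq3_coupling_vectorial is a hypothetical placeholder, not the implemented API:

# hypothetical, more explicit names separating the Z couplings from the Olq3 one
projectile_Z_v = self.vectorial_coupling(abs(projectile_pid))
projectile_Z_a = self.weak_isospin_3[abs(projectile_pid)]
projectile_Olq3_v = self.Olq3_coupling_vectorial[abs(projectile_pid)]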

@ElieHammou (Collaborator, Author)

Hi Giacomo,
this makes good sense, thanks! I have implemented it.

@ElieHammou (Collaborator, Author)

> poetry install -E box

This did the trick, thanks! I'm not completely sure what caused the issue, but I think a function got renamed between versions 0.7 and 0.8 and that was messing up my script.

Anyway, it seems to work now!

Labels: physics (physics features)
3 participants