Benchmarks: Automatic detection of flavors cancellation #83

Closed
alecandido opened this issue Oct 16, 2020 · 2 comments
Labels
benchmarks (Benchmark (or infrastructure) related), enhancement (New feature or request)

Comments

@alecandido (Member) commented on Oct 16, 2020

What

We noticed that a common source of differences (rel_err[%]) between us and APFEL comes from Q2 interpolation (this applies to anything APFEL-like, i.e. anything interpolating in Q2).

Proposed solution

  • in yadmark we can get the numbers for the separate channels from yadism (the "DIS operator") by simply calling runner.get_result() and only afterwards calling .apply_pdf() on the result
  • once the numbers for the separate channels are available we can detect the region, or better the exact ESF, where a cancellation between flavors is happening (by looking for the maximum of each flavor channel and computing the absolute ratio with the summed result; see the sketch after this list)
  • once a cancellation is detected, raise the absolute error used for the APFEL comparison
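
As a rough illustration of the detection step, here is a minimal sketch assuming the per-channel numbers have already been pulled out of the yadism output into a plain {channel: value} mapping for a single ESF point; the `cancellation_ratio` helper, the mapping layout and the 0.2 threshold are illustrative assumptions, not the actual yadism/yadmark API.

```python
import numpy as np


def cancellation_ratio(channels):
    """Return |sum of channels| / max(|channel|) for one ESF.

    Values close to 1 mean one channel dominates; values close to 0 mean
    the flavor channels almost cancel in the recombined result.
    """
    values = np.array(list(channels.values()), dtype=float)
    return abs(values.sum()) / np.abs(values).max()


# Example: large u/ubar contributions nearly cancelling in the sum.
channels = {"u": 12.3, "ubar": -12.1, "d": 0.4, "dbar": -0.2}
if cancellation_ratio(channels) < 0.2:  # threshold is a guess, see below
    print("flavor cancellation detected: relax the APFEL comparison for this ESF")
```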

Absolute error value problem

The actual number to choose for the absolute error is difficult to find: the proper value would be the sum of the absolute errors on the individual flavor channels, but to compute it we would need to break down APFEL's result into the individual flavors as well, and if those results were available we could compare the channels directly instead of the recombined sum.

My personal guess is that this number should be a function of the sum/max(flavors) ratio, protecting against the yadism recombined result going to 0 (so ratio=0 -> rel_err=100%) and against the case where the cancellation is still present in the full yadism result but APFEL is the one more likely to go to 0 (1%<ratio<20% -> abs_err=yad_result). A sketch of this mapping is given below.
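
Translating this guess literally into code might look as follows; the function name, the return convention and the default relative error are assumptions, while the 1% and 20% thresholds are the ones quoted above.

```python
def choose_tolerance(ratio, yad_result, default_rel_err=0.01):
    """Map the cancellation ratio to an (abs_err, rel_err) pair for one ESF.

    `ratio` is |sum(channels)| / max(|channel|), as in the sketch above;
    `yad_result` is the recombined yadism number for the ESF under comparison.
    """
    if ratio < 0.01:
        # Recombined yadism result essentially vanishes: a relative error is
        # meaningless, so allow up to 100% relative deviation.
        return None, 1.0
    if ratio < 0.2:
        # Cancellation still present, but now APFEL is the one more likely to
        # cross zero: compare with an absolute error of the size of the
        # yadism result itself.
        return abs(yad_result), None
    # No significant cancellation: keep the usual relative tolerance.
    return None, default_rel_err
```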

@alecandido added the enhancement (New feature or request) and test (Add or improve tests) labels on Oct 16, 2020
@felixhekhorn (Contributor) commented

maybe instead of hiding/raising/changing the error, it is sufficient to give an explanation to the user and let them decide how to adjust the assert

@alecandido added the benchmarks (Benchmark (or infrastructure) related) label and removed the test (Add or improve tests) label on Feb 9, 2021
@felixhekhorn (Contributor) commented

we decided to only compare in the physical region
