regression in inference performance caused by CodeInstance refactor #53459
Do we have a MWE for the regression?
Yes, sorry, I should have pointed out that this is reported by BaseBenchmarks for this commit: https://github.com/JuliaCI/NanosoldierReports/blob/master/benchmark/by_hash/1c25d93_vs_c0a93f8/report.md
When we use options like code coverage, we can't use the native code present in the cache file since it is not instrumented. PR #52123 introduced the capability of skipping the native code during loading, but created the issue that subsequent packages could have an explicit or implicit dependency on the native code. PR #53439 tainted the current process by setting `use_sysimage_native_code`, but this flag is propagated to subprocesses and led to a regression in test time. Move this to a process-local flag to avoid the regression. In the future we might be able to change the calling convention for cross-image calls to `invoke(ci::CodeInstance, args...)` instead of `ci.fptr(args...)` to handle native code not being present. --------- Co-authored-by: Jameson Nash <vtjnash@gmail.com>
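To illustrate the difference the fix above relies on, here is a minimal, language-agnostic sketch (in Python, not Julia's actual C implementation): a flag that lives in the options forwarded to child processes taints every subprocess it spawns, whereas state kept only in the current process's memory does not propagate. The environment-variable name `DISABLE_NATIVE_CODE` and the helper `spawn_child` are hypothetical, chosen only for the illustration.

```python
import os
import subprocess
import sys

def spawn_child(forwarded_env):
    """Spawn a child process and report whether it sees the flag.

    `forwarded_env` stands in for any mechanism (env vars, CLI flags)
    that propagates options from parent to child.
    """
    code = "import os; print(os.environ.get('DISABLE_NATIVE_CODE', '0'))"
    out = subprocess.run(
        [sys.executable, "-c", code],
        env={**os.environ, **forwarded_env},
        capture_output=True,
        text=True,
    )
    return out.stdout.strip()

# Propagated flag: every child inherits the degraded (no-native-code) mode,
# which is the behavior that caused the test-time regression.
print(spawn_child({"DISABLE_NATIVE_CODE": "1"}))  # -> 1

# Process-local flag: kept only in this process's own memory, so children
# start fresh with native code enabled again.
process_local_disable_native = True  # affects only this process
print(spawn_child({}))               # -> 0
```

The design point is that a process-local flag degrades only the one process that actually needs instrumented code, instead of the entire tree of subprocesses spawned during a test run.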
FWIW, it seems like a lot of the regression has been fixed, but there's still a fairly large regression in the abstract interpretation benchmarks (in the daily benchmarks at least).
I bisected a 6x regression in the min run time of one benchmark, and a 5x regression in the min run time of another.
Okay, the init_stdio regression is probably fine then, since we just significantly increased the amount of code visible to the compiler, but didn't change the compiler.
It looks like #53219 causes some fairly extreme performance issues in inference (up to 50x longer inference times), though curiously also sometimes provides up to a 5x speed up
Originally posted by @vtjnash in #53219 (comment)