RFC: revert back to one index for some elementwise operations #18929
Conversation
Yeah, this breaks some tests specifically made for this case I guess. Not sure what to do.
If only some changes trigger regressions, we could revert only the unproblematic ones. But it may well be that all changed loops can be used with a mix of …
Could always special case for …
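The truncated comments above appear to be discussing loops that must handle a mix of array types with different index styles. A minimal sketch of dispatching on an array's index style (using the current `IndexStyle` trait; in the Julia 0.5-era code this PR targets, the equivalent trait was `Base.linearindexing` with `LinearFast`/`LinearSlow`) — this is an illustrative `mysum` helper, not code from the PR:

```julia
# Hypothetical helper: pick a linear or cartesian loop depending on the
# array's index style. Linear indexing is cheap for `Array`, while
# e.g. `SubArray`s of non-contiguous slices are faster with cartesian indices.
function mysum(A::AbstractArray)
    s = zero(eltype(A))
    if IndexStyle(A) isa IndexLinear
        @inbounds @simd for i in 1:length(A)   # single integer index; SIMD-friendly
            s += A[i]
        end
    else
        for I in CartesianIndices(A)           # general N-dimensional iteration
            s += A[I]
        end
    end
    return s
end
```

With this structure, `mysum(reshape(1:6, 2, 3))` takes the linear branch and returns `21`, while a non-contiguous view would take the cartesian branch.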
(Branch force-pushed from 5f550ea to 50dc3d7.)
@nanosoldier
Looking at this in its current state, I'm fine with it. (Obviously, we want to get to the point where this is not necessary, but for now it seems to be.) Thanks for tackling this!
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
The …
@nanosoldier
Your benchmark job has completed - no performance regressions were detected. A full report can be found here. cc @jrevels
That's very odd.
Master changed between the two comparison runs. Out of curiosity, let's compare vs the original commit: @nanosoldier Note that I've also seen some reproducible erroneous perf changes based on benchmark run order - in some cases I've been able to fix it, but in others I'm still not sure what's going on. This could be something like that (running all benchmarks vs. running only a few benchmarks). |
Your benchmark job has completed - no performance regressions were detected. A full report can be found here. cc @jrevels
Let's try ALL then @nanosoldier |
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
Good news is that this is the first time we've confirmed this problem "in the wild" - I've had hunches about it for a while, but haven't had any real test case outside of contrived experiments. My initial guess was that these kinds of discrepancies are introduced in large consecutive runs via some kind of swap behavior, but really nailing it down requires more profiling than I've done. Until this can be fixed (which won't be for a long time if I'm the only one tracking it, since I don't have any time in the near future for Nanosoldier experimentation), you should double-check locally to confirm the lack of regressions. Obviously not ideal, but it should only take a few seconds if you already have the julia builds on your machine.
Good to go? |
bump |
Thanks very much! |
This re-enables SIMD for a few elementwise operations that I think regressed with #16498.
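The kind of loop this affects can be sketched as follows — an elementwise kernel written with a single linear index over `eachindex`, which LLVM can vectorize when `@simd`/`@inbounds` apply. This is a hedged illustration of the pattern, not the PR's actual code; `add_linear!` is a hypothetical name:

```julia
# Elementwise addition with one linear index. For plain `Array`s,
# `eachindex(dest, a, b)` yields integer indices, so the loop body is a
# straight-line memory access pattern that `@simd` lets LLVM vectorize.
function add_linear!(dest::Array, a::Array, b::Array)
    @inbounds @simd for i in eachindex(dest, a, b)
        dest[i] = a[i] + b[i]
    end
    return dest
end

a = rand(1000); b = rand(1000)
dest = similar(a)
add_linear!(dest, a, b)        # dest == a .+ b
```

The concern in the thread is that after #16498 some of these loops iterated with cartesian-style indices even for plain `Array`s, which defeated SIMD; reverting to a single index restores it.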
Benchmark run:
https://github.com/JuliaCI/BaseBenchmarkReports/blob/a6f6f33385f31746c2294f8d10e42caedc09db85/5f550ea_vs_4ba21aa/report.md
Ref: #15356 (comment)
What are your opinions about this @timholy?
Backport?