
RFC: revert back to one index for some elementwise operations #18929

Merged
merged 1 commit into master from kc/arr_perf on Oct 25, 2016

Conversation

KristofferC
Member

This re-enables SIMD for a few elementwise operations that I think regressed with #16498.
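
For context, here is a minimal sketch of the two loop styles involved (hypothetical function names, not the actual code in the diff): the single linear index lets `@simd` vectorize on dense arrays, while the generic `eachindex`-based form introduced in #16498 is correct for any array type but may not vectorize.

```julia
# Hypothetical illustration of the two loop styles (not the PR's code).

# Linear-index form: SIMD-friendly for dense arrays such as Vector.
function add_linear!(dest::Array, A::Array, B::Array)
    @inbounds @simd for i in 1:length(A)
        dest[i] = A[i] + B[i]
    end
    return dest
end

# Generic form: works for any AbstractArray, but the iterator-based
# indexing can defeat SIMD for some container types.
function add_generic!(dest, A, B)
    @inbounds for I in eachindex(dest, A, B)
        dest[I] = A[I] + B[I]
    end
    return dest
end
```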

Benchmark run:
https://github.com/JuliaCI/BaseBenchmarkReports/blob/a6f6f33385f31746c2294f8d10e42caedc09db85/5f550ea_vs_4ba21aa/report.md

Ref: #15356 (comment)

What are your opinions about this @timholy?

Backport?

@KristofferC
Member Author

Yeah, this breaks some tests that were specifically written for this case, I guess. Not sure what to do.

@nalimilan
Member

If only some of the changes trigger regressions, we could revert only the problematic ones. But it may well be that all of the changed loops can be called with a mix of LinearSlow and LinearFast arrays, in which case we can't get the best performance for both...
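
One way to illustrate the trade-off (a rough sketch using the 0.5-era `Base.linearindexing` trait, later renamed `IndexStyle`; names here are illustrative, not the implementation in Base) is to dispatch the loop body on the indexing trait instead of hard-coding one loop:

```julia
# Illustrative only; not the implementation in Base.
sum_elements(A) = _sum_elements(Base.linearindexing(A), A)

function _sum_elements(::Base.LinearFast, A)
    s = zero(eltype(A))
    @inbounds @simd for i in 1:length(A)   # linear index: vectorizes
        s += A[i]
    end
    return s
end

function _sum_elements(::Base.LinearSlow, A)
    s = zero(eltype(A))
    @inbounds for I in eachindex(A)        # cartesian iteration
        s += A[I]
    end
    return s
end
```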

@KristofferC
Member Author

Could always special case for Vector.
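
For instance, a hypothetical sketch of what such a special case could look like (illustrative names, not the PR's diff):

```julia
# Hypothetical sketch of special-casing Vector (not the actual PR change).
function rescale!(A::AbstractArray, c)   # generic fallback for any array
    for I in eachindex(A)
        A[I] *= c
    end
    return A
end

function rescale!(A::Vector, c)          # dense-vector fast path
    @inbounds @simd for i in 1:length(A)
        A[i] *= c
    end
    return A
end
```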

@KristofferC
Member Author

@nanosoldier runbenchmarks(ALL, vs = ":master")

@timholy
Member

timholy commented Oct 15, 2016

Looking at this in its current state, I'm fine with it. (Obviously, we want to get to the point where this is not necessary, but for now it seems to be.) Thanks for tackling this!

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@KristofferC
Member Author

The `*` regression could be real?

@KristofferC
Member Author

@nanosoldier runbenchmarks("linalg", vs = ":master")

@nanosoldier
Collaborator

Your benchmark job has completed - no performance regressions were detected. A full report can be found here. cc @jrevels

@KristofferC
Member Author

That's very odd.

@jrevels
Member

jrevels commented Oct 17, 2016

Master changed between the two comparison runs. Out of curiosity, let's compare vs the original commit:

@nanosoldier runbenchmarks("linalg", vs = "@8a376a00729bb7a30a2f921c9c1f78e04810eb56")

Note that I've also seen some reproducible erroneous perf changes based on benchmark run order - in some cases I've been able to fix it, but in others I'm still not sure what's going on. This could be something like that (running all benchmarks vs. running only a few benchmarks).

@nanosoldier
Collaborator

Your benchmark job has completed - no performance regressions were detected. A full report can be found here. cc @jrevels

@KristofferC
Member Author

Let's try ALL then

@nanosoldier runbenchmarks(ALL, vs = ":master")

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@jrevels
Member

jrevels commented Oct 17, 2016

The good news is that this is the first time we've confirmed this problem "in the wild" - I've had hunches about it for a while, but haven't had any real test case outside of contrived experiments. My initial guess was that these kinds of discrepancies are introduced in large consecutive runs via some kind of swap behavior, but really nailing it down requires more profiling than I've done.

At least until this can be fixed (which won't be for a long time if I'm the only one tracking it, since I don't have any time in the near future for Nanosoldier experimentation), you should double-check locally to confirm the lack of regressions. Obviously not ideal, but at least it should only take a few seconds if you already have the Julia builds on your machine.
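
For the local check, something along these lines with BenchmarkTools.jl is usually enough (the benchmarked expression below is just a stand-in; substitute whichever kernel the report flagged), run once on each build and compared:

```julia
# Rough local spot-check with BenchmarkTools.jl; run on both builds and compare.
using BenchmarkTools

a = rand(10^6)
b = rand(10^6)
@benchmark $a .+ $b   # replace with the operation the report flagged
```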

@KristofferC
Member Author

Good to go?

@KristofferC
Member Author

bump

timholy merged commit d867128 into master on Oct 25, 2016
@timholy
Member

timholy commented Oct 25, 2016

Thanks very much!

KristofferC deleted the kc/arr_perf branch on October 25, 2016 at 09:35
tkelman pushed a commit that referenced this pull request Feb 22, 2017
Labels: performance (Must go faster)