Revert "Use Ref{T} type for BLAS calls" #11683
Conversation
The Travis failure seems unrelated. It happens on totally irrelevant PRs as well...
Wait, it seems my repo was dirty when I did this... Sorry, will fix. Edit: Ok, there.
I wonder whether this contributed to our recent noticeable increase in memory consumption of running the test suite.
Is it possible to flag such performance regressions in the CI? That would be great if we could do it.
Would be nice to get this up and running again: http://speed.julialang.org/
Whatever happened to that? That looks like a wonderful tool, but the only data I see seems to be from 0.2.
I think it turned out to be a lot of effort to maintain. Needs someone to volunteer to maintain it, I suppose.
I would find it more useful if huge regressions were in the CI report, inline on GitHub.
It is likely infeasible to run benchmarks on each commit. It would however be cool if you could run the benchmarks on a PR with a command in the comments and have the results returned to you after a while. To avoid spurious performance regressions being reported, the benchmarks would probably need to run on some dedicated hardware, though. Of course, for this to be useful, there needs to be a certain degree of coverage in the perf tests.
Nightly (or weekly) should be good enough for performance benchmarks. (I guess the biggest problem is still that someone has to do it...)
Well, it would also be nice if you could manually run it on a PR before you merge it. Easier to fix stuff before it has been merged.
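As a rough illustration of what such a check could look like, here is a minimal sketch; the benchmark set, the baseline numbers, and the 20% threshold are all placeholders rather than any existing Julia tooling, and a real setup would record baselines per machine and need far broader coverage.

```julia
# Hypothetical regression check: time a few benchmarks and compare against
# previously stored baseline timings, flagging anything that slowed down.

function best_time(f; samples = 5)
    best = Inf
    for _ in 1:samples
        best = min(best, @elapsed f())   # best-of-N to damp timer noise
    end
    return best
end

# Placeholder benchmarks standing in for a real perf suite.
benchmarks = Dict(
    "small_gemm" => (() -> rand(8, 8) * rand(8, 8)),
    "sum_1e6"    => (() -> sum(rand(10^6))),
)

# Illustrative baseline timings; in practice these would be recorded per machine.
baseline = Dict("small_gemm" => 1.0e-6, "sum_1e6" => 1.0e-3)

for (name, f) in benchmarks
    f()                                                # warm up / compile
    t = best_time(f)
    status = t > 1.2 * baseline[name] ? "REGRESSION" : "ok"
    println("$status  $name  baseline=$(baseline[name])s  now=$(t)s")
end
```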
+1000 for more perf regression testing. I spend a lot of my time on this exact infrastructure problem at work. I'd be happy to do some work to get us up and running again. Ideally, I'd like to make it possible for packages to also run perf testing via a centralized service. That would require, at a minimum, hosting a service that registers perf numbers for packages that sign up for the service. (Assuming the numbers themselves were generated on people's personal machines.)
Maybe we should create a dedicated issue for perf regression testing in general? Unless there is one already.
Separate issue for general perf tracking sounds like a good idea. Signing up for and figuring out the right format to submit results to http://my.cdash.org/ might be a decent way to go for that.
Great application to build in Julia.
@tkelman Does CDash have good support for perf testing? I thought it was mostly for running correctness tests. It says how much time a test takes, but it only looks something like this: http://my.cdash.org/viewTest.php?buildid=780116. I spent some time trying to set up CDash for CI for some software at my department, but the config files for that thing are a real pain in the neck...
Based on that testimonial I'll retract my suggestion. Discussion of perf tracking should move to #11709. This PR needs to be rebased due to the recently merged bugfix in
Done (I hope).
This reverts commit 62e5942. Fix #11681
I don't know if I failed the last rebase, but GitHub said there were conflicts, so I rebased again.
As far as I could tell you did the last rebase correctly, but another conflict happened on master - probably the mass renaming of
Guess we should merge this then?
Revert "Use Ref{T} type for BLAS calls"
Refs currently allocate on the heap. This causes a significant slowdown in BLAS calls for small matrices. This PR reverts 62e5942 until the situation has improved. Maybe #8134 fixes it?
See: JuliaLang/LinearAlgebra.jl#220
and related: #11531
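For context, a micro-benchmark of the kind that exposes this overhead could look roughly like the sketch below. The matrix size, iteration count, and the choice of `BLAS.gemm!` are illustrative, and whether `Ref` still allocates per call depends on the Julia version; the point is simply that on tiny inputs the per-call argument-passing cost can dominate the BLAS kernel itself.

```julia
# Hypothetical micro-benchmark for the slowdown described above: with tiny
# matrices, per-call overhead (such as heap-allocating a Ref for each scalar
# argument handed to the C BLAS) dominates the actual dgemm work.

using LinearAlgebra   # on Julia 0.4 the BLAS wrappers lived in Base.LinAlg.BLAS

A = rand(4, 4); B = rand(4, 4); C = zeros(4, 4)

function repeat_gemm!(C, A, B, iters)
    for _ in 1:iters
        # C := 1.0*A*B + 0.0*C via the dgemm wrapper
        BLAS.gemm!('N', 'N', 1.0, A, B, 0.0, C)
    end
    return C
end

repeat_gemm!(C, A, B, 1)           # warm up / compile
@time repeat_gemm!(C, A, B, 10^5)  # on an affected build, allocations grow with iters
```

With the pre-62e5942 `&` calling convention, the scalars are passed without constructing a `Ref` per call, which is why reverting helps at these sizes.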