Concatenation is slower than it could be #21673
This bug is making […]
This doesn't look like an inference problem.
Shouldn't the compiler know `A` is a vector of ints?
There's no longer an […]
To clarify what @TotalVerb is saying – the fact that […]. I would note that in either function, calling […]
I see, thanks for the explanation, I've updated the OP.
Any idea what the issue is here? This is still a problem on release 0.6.
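The `test1`/`test2` definitions from the original post are not reproduced in this copy of the thread. As a hedged reconstruction, assuming (per the later comments) that `test1` concatenates a range with a scalar of the same type while `test2` fills the equivalent vector by hand, they may have looked roughly like this:

```julia
# Hypothetical reconstruction of the benchmarked functions; the reporter's
# exact code may differ.

# Concatenation path: vcat a range with a scalar of the same element type.
test1(n) = [1:n; n + 1]

# Manual path: preallocate the equivalent Vector{Int} and fill it directly.
function test2(n)
    v = Vector{Int}(undef, n + 1)
    for i in 1:n
        v[i] = i
    end
    v[n + 1] = n + 1
    return v
end
```

Both return the vector `[1, 2, …, n, n+1]`; the only difference is whether the result goes through the generic concatenation machinery.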
Somewhat better on 1.4-dev:

```julia
julia> @benchmark test1(20)
BenchmarkTools.Trial:
  memory estimate:  928 bytes
  allocs estimate:  17
  --------------
  minimum time:     1.113 μs (0.00% GC)
  median time:      1.408 μs (0.00% GC)
  mean time:        1.464 μs (4.21% GC)
  maximum time:     317.081 μs (99.30% GC)
  --------------
  samples:          10000
  evals/sample:     10

julia> @benchmark test2(20)
BenchmarkTools.Trial:
  memory estimate:  256 bytes
  allocs estimate:  1
  --------------
  minimum time:     49.606 ns (0.00% GC)
  median time:      54.791 ns (0.00% GC)
  mean time:        59.367 ns (4.12% GC)
  maximum time:     676.165 ns (88.80% GC)
  --------------
  samples:          10000
  evals/sample:     987
```

So we're only ~20x slower than ideal, instead of ~300x. Progress! (I updated the OP with the […])
For the example in the original issue, where we are concatenating a range with a number of the same type, the issue could be fixed by changing the signature of the method at line 1010 in 24c468d to […].
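The target signature is elided above. As an illustration of the general idea only (the name `fast_vcat` and its exact signature are hypothetical, not the Base method being referenced), a specialization that accepts both ranges and scalars of a common element type can fill the result directly instead of falling back to the generic `cat` machinery:

```julia
# Illustrative sketch, not the actual Base method. Accepting both ranges and
# scalars of one element type lets the result be built with a known eltype
# and length, avoiding the slow generic cat fallback.
function fast_vcat(xs::Union{T,AbstractRange{T}}...) where {T<:Number}
    n = sum(x -> x isa AbstractRange ? length(x) : 1, xs)
    out = Vector{T}(undef, n)
    i = 1
    for x in xs
        if x isa AbstractRange
            for v in x
                out[i] = v
                i += 1
            end
        else
            out[i] = x
            i += 1
        end
    end
    return out
end
```

With this shape of signature, a call like `fast_vcat(1:20, 21)` dispatches with `T == Int` and returns a `Vector{Int}` without going through the generic path.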
I can make the PR if that's acceptable, though it would be nice to fix the more general case as well. The issue seems to be with the general […]. Note that there is a […] (line 1440 in 1e6d771).
We would also need to widen the signature of both […]. PS: Note that this […]
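Whatever shape the general fix takes, the underlying inference problem is easy to observe directly. A quick check (whether this particular call infers depends on the Julia version being tested):

```julia
using Test              # provides @inferred
using InteractiveUtils  # provides @code_warntype

# @inferred throws if the inferred return type of the call does not match
# the concrete type of the value it actually returns.
@inferred vcat(1:20, 21)

# For a more detailed view of where inference loses the concrete type:
@code_warntype vcat(1:20, 21)
```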
The `cat` pipeline has long had poor inferrability. Together with #39292 and #39294, this should basically put an end to that problem: at least in simple cases, these changes make the performance of `cat` essentially equivalent to the manual version. In other words, the `test1` and `test2` of #21673 benchmark very similarly.
There is a regression on 1.6 compared to 1.5, so hopefully we can backport @timholy's PRs. On 1.5 the slowdown is ~26x, whereas on 1.6 beta1 I get a ~74x slowdown: […]
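The numbers behind this comparison are elided above. For reference, a minimal sketch of how such a slowdown factor can be measured with BenchmarkTools, reusing the hypothetical `test1`/`test2` definitions sketched earlier in the thread (the commenter's actual script is not shown here):

```julia
# Sketch: compare best-case times of the concatenation path vs. the manual
# path and report the ratio. Assumes the `test1`/`test2` sketched above.
using BenchmarkTools

t_cat    = minimum(@benchmark test1(20)).time   # minimum time, in nanoseconds
t_manual = minimum(@benchmark test2(20)).time
println("slowdown ≈ ", round(t_cat / t_manual; digits = 1), "x")
```

Running this on each Julia version under comparison gives the per-version factor quoted above.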
This is an issue on 0.5.1 and 0.6:
__This was not an inference issue; the case below was updated to reflect that.__