
Poor type inference/performance using CartesianRange #18721

Closed
amckinlay opened this issue Sep 28, 2016 · 8 comments
Labels
compiler:inference Type inference performance Must go faster

Comments

@amckinlay (Contributor)

Create a Cartesian iterator to sum elements in a fixed dimension.

julia> function foo(x::AbstractArray)
           acc = 0
           R = CartesianRange(size(x)[2:end])
           for I = R
               acc += x[1, I]
           end
           acc
       end
foo (generic function with 1 method)

The function makes over 1.5 million allocations and is slow:

julia> x = trunc(Int, rand(25, 25, 25, 25, 25) * 10);

julia> foo(x)
1759015

julia> @time foo(x)
  0.099376 seconds (1.56 M allocations: 71.524 MB, 5.76% gc time)
1759015

Looking at @code_warntype foo(x), inference cannot infer the type of R.

Asserting the type leads to much better performance:

julia> function bar{T}(x::AbstractArray{T,5})
           acc = 0
           R::CartesianRange{CartesianIndex{4}} = CartesianRange(size(x)[2:end])
           for I = R
               acc += x[1, I]
           end
           acc
       end
bar (generic function with 1 method)

julia> bar(x)
1759015

julia> @time bar(x)
  0.006065 seconds (10 allocations: 512 bytes)
1759015

@code_warntype bar(x)

Incidentally, asserting the type for arbitrary array dimensionality does not seem to work (N-1 in the type-parameter position is not a compile-time constant, so the assertion does not help inference):

julia> function buzz{T,N}(x::AbstractArray{T,N})
           acc = 0
           R::CartesianRange{CartesianIndex{N-1}} = CartesianRange(size(x)[2:end])
           for I = R
               acc += x[1, I]
           end
           acc
       end
buzz (generic function with 1 method)

julia> buzz(x)
1759015

julia> @time buzz(x)
  0.090423 seconds (1.56 M allocations: 71.524 MB, 5.75% gc time)
1759015

@code_warntype buzz(x)

@TotalVerb (Contributor) commented Sep 29, 2016

Note that size(x)[2:end] is type-unstable: the length of the result tuple, and hence its type, depends on the value of the range rather than on the tuple's type. Making it stable would require a solution similar to #17880 (that is, a combination of inlining and inference). It might be easier simply to provide some type-stable tuple-indexing functions, like size(x)[Val{2:end}].

@JeffBezanson (Member)

A good solution here would be to detect loops like this and move them to separate functions for specialization. That would handle many cases that have come up.
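Jeff's suggestion corresponds to the "function barrier" idiom, which can be applied manually today. A sketch in the same 0.5-era syntax as the examples above, using a hypothetical helper name _sumslice: the unstable R is passed as an argument, so the helper is compiled for R's concrete type and the loop itself is type stable.

```julia
function foo(x::AbstractArray)
    # R's type is not inferable here, but that no longer matters:
    R = CartesianRange(size(x)[2:end])
    return _sumslice(x, R)  # one dynamic dispatch, at this call
end

# Hypothetical helper: specialized on the runtime type of R,
# so the loop body is type stable.
function _sumslice(x, R)
    acc = 0
    for I = R
        acc += x[1, I]
    end
    acc
end
```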

@timholy (Member) commented Sep 29, 2016

Jeff's solution is good. Another one is CartesianRange(Base.tail(size(x))). Base has a wealth of unexported type-stable tuple functions like tail, front, fill_to_length, and split.
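Applied to the original example, the Base.tail version might look like the following sketch (same 0.5-era syntax). Base.tail drops the first element of a tuple in a type-stable way, because the result's length is determined by the input tuple's type rather than by a runtime range.

```julia
function foo(x::AbstractArray)
    acc = 0
    # Base.tail(size(x)) is inferable: for an N-dimensional array it
    # returns an NTuple{N-1,Int}, so R gets a concrete type.
    R = CartesianRange(Base.tail(size(x)))
    for I = R
        acc += x[1, I]
    end
    acc
end
```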

@kshyatt kshyatt added performance Must go faster compiler:inference Type inference labels Sep 30, 2016
@andyferris (Member) commented Sep 30, 2016

+1 to Tim's solution. Though I think it might be nice to make indexing of tuples (and with tuples) a bit easier.

Inference has special tfuncs for scalar indexing of a tuple. We could allow indexing a tuple with a tuple, yielding a tuple, and define a tfunc for that. I currently support indexing with a tuple in StaticArrays when the number of elements is inferable, and it works quite well (though I think it is more correct to index with a StaticArray than a tuple, and I might change this).

A pure function for endof(::Tuple), plus some supporting syntax along the lines of mytuple[2:end] (maybe a pure colon function), or even mytuple[(2:end)] (probably a breaking syntax change), or a pure version of (a:b...) where a and b are Const, could make this easier to write without special functions like tail.
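For illustration only, tuple-with-tuple indexing could be sketched as a one-line definition (this is a hypothetical extension, not an API that exists in Base, and defining it on Base types like this is type piracy). Note that map over a tuple of indices already infers the result length when the index tuple's length is known from its type.

```julia
# Hypothetical: index a tuple with a tuple of indices, yielding a tuple.
Base.getindex(t::Tuple, I::Tuple) = map(i -> t[i], I)

(10, 20, 30, 40)[(2, 4)]  # would yield (20, 40)
```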

@TotalVerb (Contributor)

@andyferris The problem is simpler than that; we just need special tfuncs for UnitRange to make this type-stable. But special-casing individual operations is not a long-term solution. I think we will eventually need to iterate inlining and inference; a single additional pass of inference after inlining should be enough to remove this type instability.

@andyferris (Member)

True, on both points. OTOH, it would be interesting if users could add tfuncs beyond pure functions. Not sure if that makes any sense.

I'm super excited about what we can do with iterated inlining and inference. It would help a lot with StaticArrays, TypedTables, and API design and performance more generally. Though I'm getting more and more convinced that it needs to be iterated until convergence (e.g. inlining becomes a step within the inference loop). If I write one package that gets good performance with a single extra iteration, and it interacts with another package that also needs a single iteration, then bad performance ensues, and even worse, it will be completely unintuitive to the average user as to why.

@KristofferC (Member)

This seems like an expected consequence of the type instability. General performance improvements for type-unstable code are of course good, but I don't think this issue needs to be kept open.

@amckinlay (Contributor, Author)

@KristofferC So what is the best guidance for addressing this performance issue as of 2018? Use unexported functions like @timholy recommends (e.g., Base.tail(size(x)))?
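For readers on Julia 0.7+, where CartesianRange was replaced by CartesianIndices, the Base.tail approach remains the idiomatic type-stable answer. A sketch in modern syntax:

```julia
function foo(x::AbstractArray)
    acc = 0
    # Base.tail(size(x)) has a statically known length,
    # so R is concretely typed and the loop is type stable.
    R = CartesianIndices(Base.tail(size(x)))
    for I in R
        acc += x[1, I]
    end
    acc
end
```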
