Poor type inference/performance using CartesianRange #18721
Note that […]
A good solution here would be to detect loops like this and move them to separate functions for specialization. That would handle many cases that have come up.
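The transform Jeff describes can be done by hand today; it is the familiar "function barrier" pattern. A hedged sketch (the names `outer`/`kernel` and the body are illustrative, not from the issue; `CartesianRange` is written as `CartesianIndices`, its name since Julia 0.7):

```julia
# Manual version of the transformation described above: compute the value whose
# type inference cannot pin down, then pass it to a separate function. Dispatch
# happens once at the call site, and the loop body is compiled against the
# concrete type of R.
function outer(x)
    R = CartesianIndices(size(x)[2:end])  # tuple slice: type may be uncertain
    return kernel(x, R)                   # one dynamic dispatch, then fast code
end

function kernel(x, R)  # specialized for the concrete type of R
    s = zero(eltype(x))
    for I in R, i in axes(x, 1)
        s += x[i, I]
    end
    return s
end
```

With the loop behind the barrier, `kernel` is compiled once per concrete `typeof(R)` and runs without per-iteration overhead from the instability in `outer`.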
Jeff's solution is good. Another one is […]
+1 to Tim's solution. Though I think it might be nice to make indexing of tuples (and with tuples) a bit easier. Inference has special tfuncs for scalar indexing of a tuple. We could allow indexing a tuple with a tuple (giving a tuple) and define a tfunc for this? I currently support indexing with a tuple in StaticArrays for an inferrable number of elements and it works quite well (though I think it is more correct to index with a StaticArray than a tuple, and I might change this). A pure function for […]
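The tuple-indexed-by-a-tuple operation mentioned here can also be written as a recursive function today; a hedged sketch (`tuplegetindex` is a made-up name, not an API from Base or StaticArrays):

```julia
# Illustrative only: index a tuple `t` with a tuple of indices `inds`,
# returning a tuple. The recursion peels off one index per call, so the
# length of the result is known to inference from the type of `inds`
# (and the element types too, when `t` is homogeneous).
tuplegetindex(t::Tuple, inds::Tuple{}) = ()
tuplegetindex(t::Tuple, inds::Tuple) =
    (t[first(inds)], tuplegetindex(t, Base.tail(inds))...)
```

For example, `tuplegetindex((10, 2.5, "x"), (3, 1))` selects the third and first elements.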
@andyferris The problem is simpler than that; we just need special tfuncs for […]
True that, to both points. OTOH it would be interesting if users could add tfuncs beyond pure functions; not sure if that makes any sense. I'm super excited about what we can do with iterated inlining and inference. This would help a lot with StaticArrays and TypedTables, and with API design and performance more generally. Though I'm getting more and more convinced that it needs to be iterated until convergence (e.g. inlining becomes a step within the inference loop). If I write one package that gets good performance with a single iteration and it interacts with another package that also needs an iteration, then boom: bad performance ensues, and even worse, it'll be super unintuitive to the average user as to why.
This seems like an expected consequence of the type instability. General performance improvements for type-unstable code are of course good, but I don't think this issue needs to be kept open.
@KristofferC So what is the best guidance to address this performance issue as of 2018? Use non-exported functions like @timholy recommends (e.g., […])?
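One pattern often recommended for this case is to drop the leading axis with `Base.tail` rather than slicing the size tuple with a range, since `tail` of an `NTuple{N}` is an `NTuple{N-1}` and stays fully inferred. A hedged sketch, not a quote from the thread (`colsum` is an illustrative name):

```julia
# Sum along the first dimension. Base.tail(axes(x)) removes the first axis
# in a way inference can follow, so the type of R is concrete.
function colsum(x::AbstractArray)
    R = CartesianIndices(Base.tail(axes(x)))
    out = zeros(eltype(x), size(R))
    for I in R, i in axes(x, 1)
        out[I] += x[i, I]
    end
    return out
end
```

`Base.tail` and `Base.front` are the usual non-exported helpers for peeling axes off a tuple without losing inferrability.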
Create a Cartesian iterator to sum elements in a fixed dimension.
The function causes over a million allocations and is kind of slow.
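The report's code blocks did not survive extraction. A hypothetical reconstruction of the pattern being described (only the name `foo` comes from the issue; the body is guessed from the prose, and `CartesianRange` is spelled `CartesianIndices`, its name since Julia 0.7) might look like:

```julia
# Sum all elements by iterating the trailing dimensions with a Cartesian
# iterator. size(x)[2:end] slices a tuple with a range, which inference
# has historically been unable to type precisely, so R is type-unstable
# and every use of it in the loop may allocate.
function foo(x)
    s = zero(eltype(x))
    R = CartesianIndices(size(x)[2:end])
    for I in R, i in axes(x, 1)
        s += x[i, I]
    end
    return s
end
```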
Looking at the output of `@code_warntype foo(x)`, inference cannot infer the type of `R`. Asserting the type leads to much better performance:
`@code_warntype bar(x)`
Incidentally, asserting the type for any array size does not seem to work:
`@code_warntype buzz(x)`
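For a fixed number of dimensions the assertion is straightforward to write, whereas for arbitrary dimensionality the concrete iterator type depends on `N` and cannot be spelled out in advance, which is consistent with the report that the assertion "does not seem to work". A hedged sketch (the name `bar` comes from the report, the body is guessed; modern `CartesianIndices` spelling):

```julia
# Dimensionality is fixed at 3, so the iterator's dimensionality can be
# asserted and inference knows each I is a CartesianIndex{2}.
function bar(x::Array{Float64,3})
    R = CartesianIndices(size(x)[2:3])::CartesianIndices{2}
    s = 0.0
    for I in R, i in axes(x, 1)
        s += x[i, I]
    end
    return s
end

# For a `buzz(x::Array{T,N})` over any N, there is no single concrete type
# to assert: the analogous assertion would itself depend on N.
```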