
Locally change or ignore inference limits #22370

Closed · ChrisRackauckas opened this issue Jun 14, 2017 · 8 comments

@ChrisRackauckas (Member) commented Jun 14, 2017

It has been discussed before that some kind of @infer_the_crap_out_of_this macro would be very useful:

#16734 (comment)

The MAX_TUPLETYPE_LEN is low enough that it's pretty easy to run into. Whenever code generation or optimization is involved, the generated code tends to hit these limits. See:

#22255
#22275

The problem there seems to be that even if you add parentheses to avoid the issue, @fastmath can re-order operations, and broadcast generates its own anonymous functions which can hit the same limit, so any sufficiently long line is prone to problems. In addition, I often see that types with large numbers of fields/parameters won't infer well:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L284

I don't have an MWE for that yet, but I believe something similar is going on there.

It would be nice to be able to selectively tell the compiler "I know this line is longer than normal, please handle it anyway". If nothing else, it would make debugging easier if there were a way to selectively change this limit, i.e. a one-word macro to confirm "yes, this is an inference issue due to the recursion limit".
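
In the meantime, the closest workflow is checking inference by hand with the existing Test.@inferred and @code_warntype tools. A minimal sketch, where kernel is a hypothetical stand-in for a long generated expression (on versions where a 16-argument + call trips MAX_TUPLETYPE_LEN, @inferred would throw):

using Test, InteractiveUtils

# Hypothetical stand-in for a long generated expression; `a + b + c + ...`
# lowers to a single n-ary `+` call, which is what hits the tuple-length limit.
kernel(t) = t[1] + t[2] + t[3] + t[4] + t[5] + t[6] + t[7] + t[8] +
            t[9] + t[10] + t[11] + t[12] + t[13] + t[14] + t[15] + t[16]

t = ntuple(i -> Float64(i), 16)
@inferred kernel(t)       # throws if the return type is not inferred concretely
@code_warntype kernel(t)  # non-concrete slots show where inference widened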

@vtjnash (Member) commented Jun 14, 2017

+1. I tested making it larger and the tests seemed to be fine with it.

diff --git a/base/inference.jl b/base/inference.jl
index ecb08f7..885de00 100644
--- a/base/inference.jl
+++ b/base/inference.jl
@@ -4,7 +4,7 @@ import Core: _apply, svec, apply_type, Builtin, IntrinsicFunction, MethodInstanc
 
 #### parameters limiting potentially-infinite types ####
 const MAX_TYPEUNION_LEN = 3
-const MAX_TYPE_DEPTH = 8
+const MAX_TYPE_DEPTH = 9001
 const TUPLE_COMPLEXITY_LIMIT_DEPTH = 3
 
 const MAX_INLINE_CONST_SIZE = 256
@@ -27,7 +27,7 @@ struct InferenceParams
     function InferenceParams(world::UInt;
                     inlining::Bool = inlining_enabled(),
                     max_methods::Int = 4,
-                    tupletype_len::Int = 15,
+                    tupletype_len::Int = 9001,
                     tuple_depth::Int = 4,
                     tuple_splat::Int = 16,
                     union_splitting::Int = 4,

@timholy (Member) commented Jun 14, 2017

9001 is way better than 42. inference.jl has come a long way in a short time!

@andyferris (Member)

I was going to say something like what Tim said. I'm really glad Julia compiles and runs with this.

I also like the @infer_harder idea. Though sometimes I wonder, given all the improvements to inference correctness, whether widening should only trigger for recursion and not for tree-like call graphs (which would, among other things, make Vararg specialization generally work when it's not part of a recursive loop).

ChrisRackauckas added commits to ChrisRackauckas/julia that referenced this issue Jun 26, 2017
@ChrisRackauckas (Member, Author)

#22389 likely falls into the same group of issues. I put in a PR which raises some of the numbers that seem to matter for the linked issues. Admittedly, I'm not very knowledgeable about the internals in this area, though.

@ChrisRackauckas (Member, Author)

The issue this was originally filed for was fixed via #27398, but there are still other inference limits that can be involved, like the splat limit, so I changed the title to reflect that one might want to change other limits as well.
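
To illustrate with the splat limit: splatting a long tuple into a variadic call can make inference stop tracking the precise argument types. A hypothetical sketch (the exact cutoff depends on the tuple_splat inference parameter, so whether this particular call widens is version-dependent):

using InteractiveUtils

# Splat a long tuple into a variadic call; past the compiler's tuple_splat
# limit, inference may widen the argument types and the inferred return type.
h(t) = +(t...)
@code_warntype h(ntuple(i -> i, 40))  # check whether the return type stays Int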

@vtjnash (Member) commented Mar 15, 2020

That's not actionable.

vtjnash closed this as completed Mar 15, 2020
@oscardssmith (Member)

Why not? Is the macro described impossible to implement? Is this a won't-fix?

@vtjnash (Member) commented Mar 15, 2020

No, it's just not actionable. The original problem was badly stated: it proposed a solution but didn't properly state a problem. We fixed the stated problem (MAX_TUPLETYPE_LEN being too low) by removing that limit entirely.
