WIP: Add a flag to trace inference's heuristic limiting decisions #31000
Conversation
There are generally two reasons for sub-optimal type inference:

1. The user did something that inference couldn't reason about (reading from a global, getting an arbitrary object from the network, deserialization, etc.). We now have some pretty good tooling to detect this (e.g. interactively in Cthulhu.jl, and some basic automation in XLA.jl).
2. Inference decided that it wouldn't be worth pursuing an inference avenue further (e.g. because of recursion, or because of one of our other inference limits).

Hitting the second case is frustrating because the limits don't apply transitively: you may enter a method that infers fine when used as the starting method, but does not infer properly when called from another method. These cases confuse our tooling (since at the leaf, it infers fine) and are really hard to debug (especially for people not intimately familiar with inference).

This attempts to add some tooling that allows users to ask inference to tell us when it hits these limits. The tracing is enabled by a special flag in the Params object, and each kind of limit is supposed to be its own kind of object (in order for it to be automatically consumable by tooling). I also added a flag to code_typed that will just dump out each trace entry as an info log record. Example usage:

```
julia> f(::Val{4000}) = 0
f (generic function with 1 method)

julia> f(::Val{n}) where {n} = f(Val(n+1))
f (generic function with 2 methods)

julia> @eval @code_typed optimize=false trace=true f(Val(1))
[ Info: Signature f(::Val{4}) was narrowed to f(::Val) due to recursion [EdgeCycleLimited]
CodeInfo(
1 ─ %1 = ($(Expr(:static_parameter, 1)) + 1)::Const(2, false)
│   %2 = Main.Val(%1)::Const(Val{2}(), true)
│   %3 = Main.f(%2)::Const(0, false)
└── return %3
) => Int64
```
Also cc @maleadt: since bad inference can cause GPU compile failures, you may be interested in this for giving better error messages on the GPU side as well (for much the same reason I want it for XLA.jl).
Well, that part of it doesn't bootstrap, but you get the idea. Will switch it to something else.
Widened?
This would have made debugging #31485 much easier, so I am all for this!
Any reason not to merge it as an experimental debugging feature?
```
@@ -984,6 +985,11 @@ function code_typed(@nospecialize(f), @nospecialize(types=Tuple);
         debuginfo == :none && remove_linenums!(code)
         push!(asts, code => ty)
     end
     if trace && !isempty(params.trace_buffer)
         for entry in params.trace_buffer
             @info entry
```
code_typed shouldn't print, and this seems more like a code_warntype feature anyway?
We could separate this functionality out and expect users to pass in the right CustomParams, or alternatively add a separate macro @explain / @WHY?!! that does this.
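For concreteness, a minimal sketch of what such a separate entry point might look like, assuming the `trace` flag and `trace_buffer` field proposed in this PR; the `explain` function itself and the exact `Params`/`code_typed` plumbing here are illustrative, not an actual API:

```julia
# Illustrative sketch only (hypothetical API): keep code_typed free of
# printing by doing the tracing/printing in a separate helper.
function explain(f, types)
    # Assumes a Params object whose tracing can be enabled, as proposed
    # in this PR; the constructor arguments here are guesses.
    params = Core.Compiler.CustomParams(typemax(UInt); trace = true)
    asts = code_typed(f, types; optimize = false, params = params)
    # `trace_buffer` is the field proposed in this PR.
    for entry in params.trace_buffer
        @info entry
    end
    return asts
end
```

An `@explain f(Val(1))` macro could then simply expand to a call of this helper, keeping the printing behavior out of `code_typed` itself.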
```
@@ -1614,6 +1614,15 @@ function show(io::IO, src::CodeInfo; debuginfo::Symbol=:source)
     print(io, ")")
 end

 # Show for inference limit trace objects
 function show(io::IO, ecl::Core.Compiler.EdgeCycleLimited)
```
We don't typically like Base to have any references to Core.Compiler. Perhaps this should go into the IRShow module?
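A sketch of what that move might look like, assuming an `EdgeCycleLimited` trace object carrying the original and widened signatures; the field names `sig` and `widened` are guesses for illustration and may not match the PR:

```julia
# Sketch: define the show method inside the IRShow module (e.g. in
# base/compiler/ssair/show.jl) rather than in Base proper, so Base
# itself never references Core.Compiler directly.
function Base.show(io::IO, ecl::Core.Compiler.EdgeCycleLimited)
    # Field names are illustrative; the actual trace object may differ.
    print(io, "Signature ", ecl.sig, " was narrowed to ", ecl.widened,
          " due to recursion [EdgeCycleLimited]")
end
```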
We have this now.
I intend to add more features and prototype some integration with the above-mentioned tools before merging, but wanted to open this early for discussion.