Dead branch elimination after inlining #17880

Closed · TotalVerb opened this issue Aug 7, 2016 · 13 comments
Labels
compiler:inference (Type inference), performance (Must go faster)

Comments

@TotalVerb
Contributor

TotalVerb commented Aug 7, 2016

In the minimal reproduction below, Julia successfully inlines g(x):

g(x) = true
f(x) = g(x) ? 1 : 1.0

Nevertheless, inference does not eliminate the dead branch, and so the result is boxed:

julia> @code_warntype f(1)
Variables:
  #self#::#f
  x::Int64

Body:
  begin 
      unless $(QuoteNode(true)) goto 3
      return 1
      3: 
      return 1.0
  end::Union{Float64,Int64}

Although LLVM successfully removes the branch, by that point it is too late to unbox the return value.

Would it be feasible to run dead branch elimination after inlining?

(Related to this SO question: http://stackoverflow.com/questions/38811881/possible-to-constant-propagate-if-statement-without-dispatch)
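A quick way to confirm the boxing without reading the lowered body (my addition, not part of the original report) is Base.return_types, which reports the inferred return type directly:

g(x) = true
f(x) = g(x) ? 1 : 1.0

# On the 2016-era Julia shown above this yields Union{Float64,Int64}; on
# versions that track constant return values (see the final comment in
# this thread) it narrows to Int64.
Base.return_types(f, (Int,))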

@KristofferC
Member

Ref #5560

@andyferris
Member

Indeed. At JuliaCon I was suggesting that we have something along the lines of an "@inline_early" that does inlining during inference, not after. This would be incredibly useful in many contexts.

Though to be honest, I never understood the argument for why inlining occurs after inference rather than during it. It shouldn't affect the overall amount of inlining (well, unless a dynamic dispatch is replaced with an @inline method, but that is quite desirable, no?). Could anyone with more insight than me explain why inlining is done this way at the moment, and whether it is feasible to change?

@JeffBezanson
Member

Because static method lookup (the immediate prerequisite to inlining) is not valid until type inference has converged. If we inlined during inference, we would have to be prepared to undo it on a future inference step. I think we'd also have the same need to iterate inference and inlining: even if you inlined a call site right after inferring it, you would have the option of re-inferring the inlined code right away to get even better type information; rinse and repeat.

All this is just to say the only solution seems to be iterating full inference and inlining passes, and implementing it exactly that way seems to be simplest.
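To make that concrete with the repro above (my illustration; f_inlined is a hypothetical hand-inlined version of f): inference runs over f's body first and sees only g(x)::Bool, producing the Union; inlining then substitutes the literal true, as the `unless $(QuoteNode(true))` in the lowered body shows, but by then the result type is already fixed. A fresh inference pass over the inlined form would fold the branch:

# f's body once g is inlined by hand; the condition is now visible to
# inference from the start, so the Float64 arm is dead and the inferred
# result should be Int64 rather than Union{Float64,Int64}.
f_inlined(x) = true ? 1 : 1.0

Base.return_types(f_inlined, (Int,))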

@andyferris
Member

@JeffBezanson are you saying that during (in the middle of) inference, the "inferred" type of a variable might be too narrow? So you might think you have a concrete method specialization, but that might change during further inference? (I assumed inference simply accumulated knowledge over time, apart from explicit widening).

I see that the whole thing (inference/inlining) would need to be iterated until convergence (or some other criterion), but I hoped it could just be part of the natural iteration procedure of inference...

@andyferris
Member

All this is just to say the only solution seems to be iterating full inference and inlining passes, and implementing it exactly that way seems to be simplest.

Does this mean this is something that you want to (or plan to) do?

@JeffBezanson
Member

So you might think you have a concrete method specialization, but that might change during further inference?

Yes; inference of a single function and type signature is an iterative process that needs to converge.

I believe the combination of inference and inlining does not necessarily converge, but in practice a very small number of repetitions would probably catch all the useful cases. I do want to do this; the only problem is that the compiler is already too slow, so we can hardly afford to infer each function twice. We'll need to (1) speed up the compiler a bit, and (2) develop some heuristics to detect when repeating inference after inlining is useful. I think this is quite doable and may even be possible for 0.6.

@andyferris
Member

OK, awesome. I'd be really excited about this one. :)

Yes; inference of a single function and type signature is an iterative process that needs to converge.

OK, that's interesting. So when inference comes to a line of code, does it have some way of telling whether that line has converged, or only the entire function? I had guessed it was a top-to-bottom procedure (with some complications for branching, loops, etc., and jumping from function to function as necessary)... in which case you would be able to inline as soon as that function call had converged and the signature was concrete (rather than waiting for the entire function)...

But, it goes without saying that you know how it works much, much better than me!! ;)

I believe the combination of inference and inlining does not necessarily converge

That's what widening is for, no? I'm trying to think of an example here (out of academic curiosity). Very interesting.

@yuyichao
Contributor

yuyichao commented Aug 8, 2016

Would it help to stop widening Consts before inferring methods?
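For readers following along, a sketch of what "widening Consts" refers to (my illustration; h and k are hypothetical names): when inferring a call like k(1), the argument's lattice element Const(1) is widened to Int64 before the callee is inferred, so value-dependent narrowing inside the callee is lost:

h(x) = x > 0            # inferred against x::Int64, so the result is Bool
k(x) = h(x) ? 1 : 1.0   # Union{Float64,Int64}, although k(1) can only return 1

# If the Const(1) were kept across the call boundary, h(1) could infer to
# Const(true), making the Float64 arm of k dead.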

@andyferris
Member

@yuyichao This would be very useful for me. I really want an @inline function to work more or less like a macro (with the obvious hygiene rules: having the inputs escaped unless/until they have been re-bound).

@JeffBezanson
Member

@yuyichao Yes, that would definitely give sharper type info. We could try it, but it seems like it would be very expensive: want to re-run inference for both [1,2] and [3,4]? What we might want is a feature for requesting specialization on every value of a certain argument, similar to f{T}(::Val{T}).

I'm sure it's possible to have a combined inference+inlining pass that works fine, but I think it would be incredibly difficult and complex. It tends to be hard to analyze something while it is changing. For example, the algorithm keeps track of sets of program locations. If the size of the program changes during the algorithm, you'd have to update everything. Possible, but annoying.

See https://en.wikipedia.org/wiki/Data-flow_analysis for more about the algorithm.
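As a rough picture of the location-tracking problem (a toy forward pass of my own, loosely following the linked article, and handling straight-line code only): both the worklist and the state table are keyed by statement position, so inserting statements mid-analysis, as inlining does, would invalidate every recorded location:

function analyze(stmts::Vector, transfer::Function, bottom)
    state = fill(bottom, length(stmts) + 1)  # state[i]: facts holding before statement i
    work = [1]                               # worklist of program locations
    while !isempty(work)
        pc = pop!(work)
        out = transfer(stmts[pc], state[pc])
        if out != state[pc + 1]              # facts changed: revisit the successor
            state[pc + 1] = out
            pc + 1 <= length(stmts) && push!(work, pc + 1)
        end
    end
    return state
end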

@TotalVerb
Contributor Author

It is not immediately clear to me why value dispatch would help. Wouldn't inference still be unable to eliminate the branch?

@yuyichao
Contributor

yuyichao commented Aug 13, 2016

It can eliminate the branch, since the inferred type of g(x) would be Const(true) instead of Bool. It can also help answer the question of whether it is useful to inline / do another round of type inference and constant propagation.
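A sketch of that idiom (mine, written with the modern `where` syntax; the comment above uses the 0.5-era spelling f{T}(::Val{T})): moving the condition into a type parameter gives each value its own method specialization, inside which inference sees the condition as a Const:

pick(x, ::Val{T}) where {T} = T ? 1 : 1.0

# In the Val{true} specialization the condition is the constant true, so
# the Float64 arm is dead and the inferred result is Int64, not a Union:
Base.return_types(pick, (Int, Val{true}))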

@JeffBezanson
Member

This specific issue is now fixed, since we track constant return values from functions:

julia> @code_warntype f(1)
Variables:
  #self#::#f
  x::Int64

Body:
  begin 
      return 1
  end::Int64
