Dead branch elimination after inlining #17880
Comments
Ref #5560
Indeed. At JuliaCon I was suggesting that we have something along the lines of "…". Though to be honest, I never understood the argument for why inlining doesn't occur during inference, but after it. It shouldn't affect the overall amount of inlining (well, unless a dynamic dispatch is replaced with an …).
Because static method lookup (the immediate prerequisite to inlining) is not valid until type inference has converged. If we inlined during inference, we would have to be prepared to undo it on a future inference step. I think we'd also have the same need to iterate inference and inlining: even if you inlined a call site right after inferring it, you would have the option of re-inferring the inlined call site again right away to get even better type information, rinse and repeat. All this is just to say the only solution seems to be iterating full inference and inlining passes, and implementing it exactly that way seems simplest.
@JeffBezanson are you saying that during (in the middle of) inference, the "inferred" type of a variable might be too narrow? So you might think you have a concrete method specialization, but that might change during further inference? (I assumed inference simply accumulated knowledge over time, apart from explicit widening.) I see that the whole thing (inference/inlining) would need to be iterated until convergence (or some other criterion), but hoped it could just be a part of the natural iteration procedure of inference...
Does this mean this is something that you want to (or plan to) do?
Yes; inference of a single function and type signature is an iterative process that needs to converge. I believe the combination of inference and inlining does not necessarily converge, but in practice a very small number of repetitions would probably catch all the useful cases. I do want to do this, the only problem being that the compiler is too slow already, so we can hardly afford to infer each function twice. We'll need to (1) speed up the compiler a bit, and (2) develop some heuristics to detect when repeating inference after inlining is useful. I think this is quite doable and may even be possible for 0.6.
OK, awesome. I'd be really excited about this one. :)
OK, that's interesting. So when inference comes to a line of code, does it have some way of telling if that line has converged? Or just the entire function? I had guessed it was a top-to-bottom procedure (with some complications for branching, loops, etc, and jumping from function-to-function as necessary)... in which case you would be able to inline as soon as that function call had converged and the signature is concrete (rather than waiting for the entire function)... But, it goes without saying that you know how it works much, much better than me!! ;)
That's what widening is for, no? I'm trying to think of an example here (out of academic curiosity). Very interesting.
Would it help to stop widening …?
@yuyichao This would be very useful for me. I really want an …
@yuyichao Yes, that would definitely give sharper type info. We could try it, but it seems like it would be very expensive. Want to re-run inference for both …?

I'm sure it's possible to have a combined inference+inlining pass that works fine, but I think it would be incredibly difficult and complex. It tends to be hard to analyze something while it is changing. For example, the algorithm keeps track of sets of program locations; if the size of the program changes during the algorithm, you'd have to update everything. Possible, but annoying. See https://en.wikipedia.org/wiki/Data-flow_analysis for more about the algorithm.
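To make the "sets of program locations" point concrete, here is a textbook worklist formulation of a forward data-flow analysis, a sketch of the algorithm family linked above, not Julia's actual inference code. Note how statement indices are baked into `succs` and `state`; if inlining grew the program mid-analysis, all of these location-indexed structures would need updating.

```julia
# Textbook worklist data-flow sketch (illustrative, not Julia's inference).
# `succs[i]` lists the successors of statement i; `transfer` maps a statement
# and its incoming fact set to an outgoing fact set.
function dataflow(nstmts, succs, transfer)
    state = [Set{Int}() for _ in 1:nstmts]   # facts known at entry of each stmt
    work = collect(1:nstmts)                 # worklist of statements to (re)visit
    while !isempty(work)
        i = popfirst!(work)
        out = transfer(i, state[i])
        for j in succs[i]
            merged = union(state[j], out)
            if merged != state[j]            # only requeue on a real change
                state[j] = merged
                push!(work, j)
            end
        end
    end
    return state
end
```

On a straight-line program 1 → 2 → 3 where each statement "generates" its own index, the facts accumulate down the chain until a fixed point is reached.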
It is not immediately clear to me why value dispatch would help. Wouldn't inference still be unable to eliminate the branch? |
It can eliminate the branch, since the inferred type of …
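A minimal sketch of the value-dispatch idea being discussed, assuming a `Val`-based formulation (`pick` and `use` are illustrative names, not from the thread):

```julia
# Each Val{x} is its own type, so dispatch selects the method statically and
# inference sees a single concrete return type per signature -- no runtime
# branch, and no Union{Float64, Int} to box.
pick(::Val{true})  = 1.0
pick(::Val{false}) = 1

use(v::Val) = pick(v)
```

The trade-off is that the value must be encoded in the type at the call site (e.g. `use(Val(true))`), which pushes the dispatch cost to the caller if the value is not a compile-time constant there.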
This specific issue is now fixed, since we track constant return values from functions:
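The snippet that accompanied this comment was not preserved; the following is an assumed illustration of the behavior described. Because `cond` returns a compile-time constant, inference can prune the dead branch and infer a concrete return type for `h` (names here are hypothetical):

```julia
# `cond()` is inferred to the constant `true`, so the `else` branch of `h`
# is dead and can be eliminated during inference itself.
cond() = true

function h()
    if cond()        # inference tracks the constant `true` return value
        return 1.0
    else
        return 1     # dead branch: pruned before codegen
    end
end
```

On recent Julia versions, `Base.return_types(h, Tuple{})` reports just `Float64` rather than `Union{Float64, Int}`, so the result need not be boxed.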
In the min-repro below, Julia successfully inlines `g(x)`; nevertheless, inference does not eliminate the dead branch, and so the result is boxed:
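The original code block was not preserved in this copy of the thread; the following is a hypothetical reconstruction of the kind of min-repro described:

```julia
# g returns either a Float64 or an Int, so its return type is inferred as
# Union{Float64, Int} even though the argument is a constant at the call site.
@inline g(x) = x ? 1.0 : 1

f() = g(true)   # g is inlined, but (at the time of this issue) the branch
                # survived inference, so f's return type stayed a Union and
                # the returned value was boxed
```

(As noted later in the thread, tracking constant return values has since fixed this specific case.)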
Although LLVM successfully removes the branch, it is too late to unbox the return value.
Would it be feasible to run dead branch elimination after inlining?
(Related to this SO question: http://stackoverflow.com/questions/38811881/possible-to-constant-propagate-if-statement-without-dispatch)