feature[next]: Use global type information in CollapseTuple pass #1260
Conversation
Some questions
@@ -80,7 +86,9 @@ def visit_FunCall(self, node: ir.FunCall, **kwargs) -> ir.Node:
             # tuple argument differs, just continue with the rest of the tree
             return self.generic_visit(node)

-        if self.ignore_tuple_size or _get_tuple_size(first_expr) == len(node.args):
+        if self.ignore_tuple_size or _get_tuple_size(self._node_types[id(first_expr)]) == len(
+            node.args
+        ):
I don't think that I want `self._node_types[id(first_expr)]` to become a pattern. Any thoughts?
I don't have a clear opinion on this. On the one hand, we want to avoid using the annex, since it is easy to screw things up with it and the pattern is not great either. On the other hand, using a dictionary keyed by the id of a node, and re-running type inference for every pass, is also terrible. I am currently leaning towards the annex, because I am worried that the performance of type inference will become a problem sooner than we are able to find a proper solution. What do you think?
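To make the trade-off concrete, here is a minimal, self-contained sketch of the two options: a side table keyed by id(node), as in this PR, versus storing the type on the node's annex. The toy Node class below is illustrative only, not the gt4py IR.

```python
from dataclasses import dataclass, field
from types import SimpleNamespace


@dataclass
class Node:
    """Toy IR node; `annex` mimics eve's per-node scratch space."""
    name: str
    annex: SimpleNamespace = field(default_factory=SimpleNamespace)


expr = Node("first_expr")

# Option 1 (this PR): a side table keyed by object identity.
# A pass that rebuilds a node produces a fresh id(), so the rebuilt node
# silently has no entry and later lookups fail with a KeyError.
node_types: dict[int, str] = {id(expr): "tuple[float64, float64]"}
rebuilt = Node(expr.name)  # e.g. the result of a transformation
assert id(rebuilt) not in node_types  # type information was lost

# Option 2: store the inferred type on the node's annex.
# Consumers just read `node.annex.type`, but every transformation is now
# responsible for keeping it up to date, or stale types go unnoticed.
expr.annex.type = "tuple[float64, float64]"
assert expr.annex.type == "tuple[float64, float64]"
```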
When you say annex in this context, you mean we guarantee that the type is always updated/preserved in all passes, right? Otherwise we would also have to run type inference before each pass.
Independent of the performance problem, is there any disadvantage to using a pass-local annex here?
With respect to the annex being "dangerous": yes. If it's only pass-local you can still screw things up, but the problem is easier to find (e.g. you transform the IR and some of the newly created nodes don't have a type, or have a changed one), since it is local to the pass.
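As a sketch of what "pass-local" could look like in practice (a hypothetical helper on toy nodes, assuming eve-style nodes expose an annex namespace): the types exist only while the pass runs, so a node with a missing or stale annex.type is immediately attributable to that pass.

```python
from contextlib import contextmanager
from types import SimpleNamespace


class Node:
    def __init__(self, name: str):
        self.name = name
        self.annex = SimpleNamespace()  # eve-style per-node scratch space


@contextmanager
def pass_local_types(nodes, infer):
    """Hypothetical helper: attach types for the duration of one pass only."""
    for n in nodes:
        n.annex.type = infer(n)
    try:
        yield
    finally:
        for n in nodes:
            if hasattr(n.annex, "type"):
                del n.annex.type  # nothing leaks into later passes


nodes = [Node("a"), Node("b")]
with pass_local_types(nodes, lambda n: f"type_of_{n.name}"):
    assert nodes[0].annex.type == "type_of_a"
assert not hasattr(nodes[0].annex, "type")  # cleaned up after the pass
```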
See also this commit for what the code looks like if we use the annex: a09f18f
@@ -56,8 +57,13 @@ def apply(
         If `ignore_tuple_size`, apply the transformation even if length of the inner tuple
         is greater than the length of the outer tuple.
         """
+        node_types = it_type_inference.infer_all(node)
Now we will be running type inference on the full program in every pass that needs types? What does that do to pass performance?
I don't know, but probably it is terrible :-(
As discussed, we leave it as is for now.
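For reference, a toy sketch of the cost pattern being accepted here: with P type-dependent passes over N nodes, whole-program inference runs P times, i.e. O(P·N) inference work instead of O(N). All names below are placeholders modeled on the diff above, not the gt4py API.

```python
def infer_all(program: list[str]) -> dict[int, str]:
    # Stand-in for whole-program type inference: visits every node once.
    return {id(node): "some_type" for node in program}


def run_pipeline(program: list[str], passes) -> list[str]:
    for pass_fn in passes:
        node_types = infer_all(program)  # full re-run before every pass
        program = pass_fn(program, node_types)
    return program


identity_pass = lambda prog, _types: prog  # trivial placeholder pass
run_pipeline(["node_a", "node_b"], [identity_pass, identity_pass])
```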
Previously, the CollapseTuple pass ran type inference only on the local expression it was currently considering. This PR uses global type information instead, which works for more cases.

Blocked by #1259, #1255
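To illustrate why the purely local view is not enough, here is a toy mini-IR (hypothetical names, loosely modeled on the iterator IR). CollapseTuple wants to rewrite make_tuple(tuple_get(0, t), tuple_get(1, t)) back to t, which is only valid if t has exactly as many elements as the rebuilt tuple; the size of t is fixed by its definition elsewhere in the program, which only whole-program inference sees.

```python
# Hypothetical mini-IR; names modeled on the iterator IR, not gt4py code.
program = {
    # t is defined elsewhere in the program and has 3 elements ...
    "t": ("make_tuple", ["a", "b", "c"]),
    # ... while this expression rebuilds only the first 2 of them.
    "expr": ("make_tuple", [("tuple_get", 0, "t"), ("tuple_get", 1, "t")]),
}


def tuple_size_global(symbol: str, prog: dict) -> int:
    # Global view: follow the symbol to its defining expression.
    op, args = prog[symbol]
    assert op == "make_tuple"
    return len(args)


# Locally, the size of `t` cannot be derived from `expr` at all; with the
# global view, the pass can see that collapsing expr -> t would be wrong
# here, because t has 3 elements but only 2 are rebuilt.
assert tuple_size_global("t", program) == 3
```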