
feature[next]: Use global type information in CollapseTuple pass #1260

Merged
23 commits merged on Jul 3, 2023

Conversation

@tehrengruber (Contributor) commented May 30, 2023

Previously, the CollapseTuple pass ran type inference only on the local expression it was currently considering. This PR uses globally inferred type information instead, which works for more cases.

Blocked by #1259, #1255
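As a rough illustration of the idea — infer types once for the whole tree, then let a pass look them up instead of re-inferring locally — here is a toy sketch. `Node`, `infer_all`, and the type strings are invented for illustration and are not gt4py's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass(eq=False)  # eq=False: nodes are compared and hashed by object identity
class Node:
    op: str
    args: list = field(default_factory=list)

def infer_all(root: Node) -> dict[int, str]:
    """Toy 'global' type inference: compute a type for every node in one walk."""
    types: dict[int, str] = {}
    def visit(n: Node) -> str:
        arg_types = [visit(a) for a in n.args]
        t = f"{n.op}({', '.join(arg_types)})" if arg_types else n.op
        types[id(n)] = t
        return t
    visit(root)
    return types

leaf = Node("int")
root = Node("make_tuple", [leaf, Node("float")])
node_types = infer_all(root)
# A pass can now look up the type of any subexpression without re-running inference:
assert node_types[id(leaf)] == "int"
assert node_types[id(root)] == "make_tuple(int, float)"
```

The trade-off this sketch exposes — the external mapping keyed by `id(node)` — is exactly what the review discussion below is about.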

@tehrengruber changed the title from "feature[next]: Improve CollapseTuple pass" to "feature[next]: Use type information from annex in CollapseTuple pass" on May 30, 2023
@tehrengruber changed the title from "feature[next]: Use type information from annex in CollapseTuple pass" to "feature[next]: Use global type information in CollapseTuple pass" on Jun 1, 2023
@havogt self-requested a review on June 2, 2023
@havogt (Contributor) left a comment:

Some questions

@@ -80,7 +86,9 @@ def visit_FunCall(self, node: ir.FunCall, **kwargs) -> ir.Node:
 # tuple argument differs, just continue with the rest of the tree
 return self.generic_visit(node)

-if self.ignore_tuple_size or _get_tuple_size(first_expr) == len(node.args):
+if self.ignore_tuple_size or _get_tuple_size(self._node_types[id(first_expr)]) == len(
@havogt (Contributor) commented:

I don't think that I want self._node_types[id(first_expr)] to become a pattern. Any thoughts?

@tehrengruber (Contributor, Author) commented:

I don't have a clear opinion on this. On the one hand, we want to avoid using the annex, since it is easy to screw things up with it, and the pattern is not great either. On the other hand, keeping a dictionary keyed by node id and re-running type inference for every pass is also terrible. I am currently leaning towards the annex, because I worry that the performance of type inference will become a problem sooner than we can find a proper solution. What do you think?
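The two alternatives under discussion — a per-node "annex" attribute versus an external dictionary keyed by id(node) — can be contrasted with a toy sketch (illustrative names only, not gt4py's API):

```python
class Node:
    def __init__(self, op: str):
        self.op = op
        self.annex = type("Annex", (), {})()  # simple per-node attribute bag

n = Node("deref")

# Pattern A: annex -- the type travels with the node itself, so later passes
# can read it, but transformations that rebuild nodes must copy or re-infer it.
n.annex.type = "Iterator[int]"
assert n.annex.type == "Iterator[int]"

# Pattern B: external mapping keyed by id(node) -- leaves the node untouched,
# but lookups like node_types[id(node)] fail for rebuilt nodes, whose new
# object identities were never recorded.
node_types = {id(n): "Iterator[int]"}
rebuilt = Node("deref")  # structurally identical, but a new object identity
assert id(rebuilt) not in node_types
```

Both patterns go stale when a pass rebuilds nodes; they differ mainly in where the staleness surfaces (a missing attribute versus a failing dictionary lookup).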

@havogt (Contributor) commented:

When you say "annex" in this context, you mean we guarantee that the type is always updated/preserved in all passes, right? Otherwise we would also have to run it before each pass.
Independent of the performance problem, is there any disadvantage to using a pass-local annex here?

@tehrengruber (Contributor, Author) commented:

With respect to the annex being "dangerous": yes. If it is only pass-local you can still screw things up, but the problem is easier to find (i.e. you do a transformation of the IR and some of the newly created nodes don't have a type, or the type changed), since it is local to the pass.
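A toy sketch of why a pass-local annotation makes such bugs easier to localize (all names hypothetical, not gt4py's implementation): a debug check at the end of the pass can flag freshly built nodes that never received a type.

```python
class Node:
    def __init__(self, op: str, args=()):
        self.op = op
        self.args = list(args)

def annotate_all(n: Node, t: str = "some_type") -> Node:
    """Stand-in for the pass setting a type annotation on every node on entry."""
    n.type = t
    for a in n.args:
        annotate_all(a, t)
    return n

def untyped_nodes(n: Node) -> list:
    """Debug check at the end of the pass: collect nodes missing a type."""
    missing = [] if hasattr(n, "type") else [n]
    for a in n.args:
        missing.extend(untyped_nodes(a))
    return missing

tree = annotate_all(Node("plus", [Node("x"), Node("y")]))
# A buggy transformation swaps in a freshly built node without annotating it:
tree.args[0] = Node("x_shifted")
bad = untyped_nodes(tree)
assert [b.op for b in bad] == ["x_shifted"]
```

Because the annotation lives and dies within the pass, the failing check points directly at the transformation that forgot it, rather than at some earlier pass in the pipeline.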

@tehrengruber (Contributor, Author) commented:

See also this commit for how the code looks if we use the annex: a09f18f

@@ -56,8 +57,13 @@ def apply(
 If `ignore_tuple_size`, apply the transformation even if length of the inner tuple
 is greater than the length of the outer tuple.
 """
+node_types = it_type_inference.infer_all(node)
@havogt (Contributor) commented:

Now we will be running type inference on the full program in every pass that needs types? What does that do to pass performance?

@tehrengruber (Contributor, Author) commented:

I don't know, but probably it is terrible :-(

(A further review thread on src/gt4py/next/iterator/type_inference.py was marked outdated and resolved.)
@havogt (Contributor) left a comment:

As discussed, we leave it as is for now.

@tehrengruber merged commit 76c2f06 into GridTools:main on Jul 3, 2023