
transformations: (convert-stencil-to-csl-stencil) Make pass runnable before tensorization #3220

Open
wants to merge 2 commits into base: main
Conversation

@n-io (Collaborator) commented Sep 26, 2024

Making convert-stencil-to-csl-stencil runnable before tensorisation should address a variety of issues that we have with the tensorisation pass, and provide us with the opportunity to store important bits and blobs in our csl_stencil dialect.

Currently, the pass is runnable both before and after tensorisation, which results in some degree of code duplication.

@n-io n-io added the transformations Changes or adds a transformation label Sep 26, 2024
@n-io n-io self-assigned this Sep 26, 2024

codecov bot commented Sep 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.98%. Comparing base (5349938) to head (9baa99a).
Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3220      +/-   ##
==========================================
- Coverage   90.03%   89.98%   -0.06%     
==========================================
  Files         433      433              
  Lines       54568    54669     +101     
  Branches     8465     8499      +34     
==========================================
+ Hits        49133    49196      +63     
- Misses       4061     4099      +38     
  Partials     1374     1374              


@dk949 (Collaborator) left a comment

Just a couple of comments. Mostly due to me not really understanding what's happening here 😅

Comment on lines +370 to 378
if isa(self.swaps, ArrayAttr[dmp.ExchangeDeclarationAttr])
else type(self.accumulator.type)(
    self.accumulator.type.get_element_type(),
    (
        len(self.swaps),
        self.accumulator.type.get_shape()[0]
        // self.num_chunks.value.data,
    ),
),
Maybe this could be constructed outside of the list? It's quite hard to follow where one element of the list begins and another ends.
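To illustrate the suggestion in general terms (a hypothetical sketch with made-up names, not this PR's actual code): hoisting the conditional expression out of the list literal and giving it a descriptive name keeps each list element on its own readable line.

```python
def build_operand_shapes(swaps, accum_shape, num_chunks):
    """Hypothetical example of hoisting a conditional element out of a list."""
    # Compute the hard-to-read element once, under a descriptive name,
    # instead of embedding the whole if/else expression in the literal.
    chunk_shape = (
        accum_shape
        if isinstance(swaps, tuple)
        else (len(swaps), accum_shape[0] // num_chunks)
    )
    # The list literal itself now stays flat and easy to scan.
    return [chunk_shape, accum_shape]
```

With this shape, it is immediately clear where one element of the list ends and the next begins.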

for use in uses:
    if not isinstance(use.operation, stencil.ApplyOp):
        continue
    apply_op = use.operation

Will there always be only one apply in this situation? If so, we could assert apply_op is None (or something similar) before assigning.

Or alternatively:

for use in uses:
    if isinstance(use.operation, stencil.ApplyOp):
        apply_op = use.operation
        break
else:
    raise RuntimeError("Expected ApplyOp")
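For anyone unfamiliar with the pattern suggested above: Python's for/else runs the else block only when the loop completes without hitting break, which is what makes it a clean "find or fail" idiom. A standalone sketch with made-up data:

```python
def find_apply(operations):
    """Return the first "apply" entry, or fail loudly if none exists."""
    for op in operations:
        if op == "apply":
            found = op
            break
    else:
        # Reached only when the loop finished without `break`,
        # i.e. no "apply" entry was present.
        raise RuntimeError("Expected ApplyOp")
    return found
```

The break both stops the scan at the first match and skips the else clause, so the error path cannot fire spuriously.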

Comment on lines +187 to +191
# output_size = temp.bounds
# if isinstance(output_size, StencilBoundsAttr):
#     return
#
# update_result_size(op.res, temp.bounds | op.field.type.bounds, rewriter)

commented code

Comment on lines +250 to +253
for use in uses:
    if not isinstance(use.operation, stencil.ApplyOp):
        continue
    apply_op = use.operation

Is this not guaranteed to operate on the apply_op declared above?

assert all(
    swap.size[2] == uniform_size for swap in op.swaps
), "all swaps need to be of uniform size"
# uses have to be retrieved *before* the loop because of the rewriting happening inside the loop

This comment is somewhat misleading, since there is no rewriting happening in this loop
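Whether or not rewriting happens in this particular loop, the hazard the comment alludes to is real in general: iterating a live use-list while a rewrite mutates it can silently skip entries. A hypothetical sketch (plain lists standing in for use-lists, not this PR's code):

```python
# Mutating a list while iterating it skips elements.
uses = [1, 2, 3, 4]
visited_live = []
for u in uses:       # iterating the live list ...
    uses.remove(u)   # ... while a "rewrite" removes entries
    visited_live.append(u)
# Only every other entry was visited: the iterator's index advances
# past elements that shifted down after each removal.

# Taking a snapshot first visits every entry safely.
uses = [1, 2, 3, 4]
for u in list(uses):  # snapshot via list(...)
    uses.remove(u)
# All entries were processed; the original list is now empty.
```

This is why retrieving uses before a mutating loop, as the code comment says, is a sound habit even when the current loop body happens not to rewrite.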

@@ -41,6 +42,13 @@ def infer_core_size(op: LoadOp) -> tuple[IndexAttr, IndexAttr]:
    from a LoadOp by walking the def-use chain down to the `apply`
    """
    applies: list[ApplyOp] = list(all_matching_uses([op.res], ApplyOp))
    if len(applies) == 0:
        dmp_swaps: list[dmp.SwapOp] = list(all_matching_uses([op.res], dmp.SwapOp))

nit: do we need to materialise this dmp_swaps?
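As a general illustration of the nit (a sketch with made-up names, not the PR's actual helpers): when only the first match or emptiness matters, `next()` over a generator expression avoids materialising an intermediate list.

```python
def first_matching(items, predicate):
    # Lazily scan `items` and stop at the first hit; no list is built.
    # The second argument to next() is returned when nothing matches.
    return next((x for x in items if predicate(x)), None)

# Usage with made-up data:
ops = ["load", "swap", "apply"]
first_swap = first_matching(ops, lambda o: o == "swap")
```

If the code later needs the full collection (length, repeated iteration), the explicit `list(...)` is of course the right call; the generator form only wins when a single element is consumed.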

Labels
transformations Changes or adds a transformation
2 participants