
transformations: (memref-streamify) don't streamify 0D memrefs #3677

Merged: 2 commits from sasha/snitch-tensor/streamify-empty into main on Dec 26, 2024

Conversation

superlopuh (Member)

There's no point in streaming 0D memrefs: setting up the streaming registers costs more than the single access saves. In the future we might want to canonicalize away the 0D memrefs on memref_stream operations to loads/stores and a generic on scalars, which will then be lowered to registers.

Note: this is a stacked PR.
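
For illustration (a sketch using plain memref ops, not this PR's memref_stream syntax), here is what a rank-0 access looks like next to a rank-1 access; the 0D case touches exactly one element, so there is no loop for a stream to accelerate and the fixed setup cost can never amortize:

```mlir
// Illustrative only: a 0D memref holds exactly one element, so its load
// takes an empty index list and there is nothing to iterate over.
%x = memref.load %scalar[] : memref<f64>      // 0D: single element, skip streaming
%y = memref.load %vec[%i] : memref<8xf64>     // 1D: streaming can pay off
```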

@superlopuh superlopuh added the transformations (Changes or adds a transformation) label on Dec 25, 2024
@superlopuh superlopuh self-assigned this Dec 25, 2024

codecov bot commented Dec 25, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.29%. Comparing base (6f59234) to head (28833de).
Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #3677   +/-   ##
=======================================
  Coverage   91.29%   91.29%           
=======================================
  Files         466      466           
  Lines       58357    58357           
  Branches     5624     5624           
=======================================
  Hits        53278    53278           
  Misses       3629     3629           
  Partials     1450     1450           


mamanain (Collaborator)

Don't you want to deal with this during the linalg -> memref_stream lowering? Just replace it with a load/op/store sequence there.

Base automatically changed from sasha/snitch-tensor/bottom-up-split to main December 25, 2024 19:30
superlopuh (Member, Author)

What's the advantage of doing it when transforming linalg to memref_stream? I would rather keep that pass as simple as possible, and do all the transformations/legalizations in downstream passes. We can transform the memref<f64> inputs to scalars in a separate canonicalization pass.
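
As a hedged sketch of that future canonicalization (illustrative IR, not something this PR produces), a memref<f64> operand would be lowered to a load before the scalar computation and a store after it:

```mlir
// Hypothetical result of the envisioned canonicalization (not this PR):
// the rank-0 operand is read once, the computation runs on the scalar
// value (which register allocation keeps in a register), and the result
// is written back once.
%x = memref.load %arg0[] : memref<f64>
%y = arith.mulf %x, %x : f64
memref.store %y, %arg1[] : memref<f64>
```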

mamanain (Collaborator) left a comment

LGTM

@superlopuh superlopuh merged commit d7f6068 into main Dec 26, 2024
16 checks passed
@superlopuh superlopuh deleted the sasha/snitch-tensor/streamify-empty branch December 26, 2024 11:46