
set warp synchronous true for reducing RankedTensorType without layout #4663

Closed

Conversation

quintinwang5

  • I am not making a trivial change, such as fixing a typo in a comment.

  • I have written a PR description following these rules.

  • I have run pre-commit run --from-ref origin/main --to-ref HEAD.

  • Select one of the following.

    • I have added tests.
      • /test for lit tests
      • /unittest for C++ tests
      • /python/test for end-to-end tests
    • This PR does not need a test because it's just a change to ReduceOpHelper, not to a lowering pipeline.


getSrcLayout may return null for a RankedTensorType without a layout. For these reductions, we should process them within one warp.
@@ -168,6 +168,8 @@ unsigned ReduceOpHelper::getThreadsReductionAxis() {

 bool ReduceOpHelper::isWarpSynchronous() {
   auto srcLayout = getSrcLayout();
+  if (!srcLayout)
+    return true;
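For context, a minimal sketch of why this check is needed, assuming the standard MLIR C++ API (not code from this PR; the helper name `hasLayout` is hypothetical): a RankedTensorType's encoding attribute is optional, so querying it can yield a null Attribute.

```cpp
// Sketch only: `hasLayout` is a hypothetical helper, not part of the PR.
#include "mlir/IR/BuiltinTypes.h"

// RankedTensorType's encoding is an optional Attribute; getEncoding()
// returns a null Attribute when no layout has been attached, which is
// exactly the case getSrcLayout() can run into.
static bool hasLayout(mlir::RankedTensorType ty) {
  return static_cast<bool>(ty.getEncoding());
}
```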
Collaborator:
Why would there be no layout on the source? This helper is meant to be used for lowering ttgir to LLVM, so the tensor should have a layout.

Author:

An encoding is not mandatory for a RankedTensorType in MLIR. In vector-to-scalar reduction cases, I encounter tensors without a layout. Do you mean every tensor in ttgir must have a layout?
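To illustrate the point (a sketch under assumptions, not code from this PR): MLIR's `RankedTensorType::get` takes the encoding as an optional trailing argument, so tensor types without a layout are legal at the MLIR level. The attribute used below is a hypothetical placeholder, not a real Triton layout.

```cpp
// Sketch under assumptions; `someLayout` is a hypothetical placeholder
// for a real layout attribute.
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/MLIRContext.h"

void buildTensorTypes() {
  mlir::MLIRContext ctx;
  mlir::Builder b(&ctx);
  mlir::Type f32 = b.getF32Type();

  // No encoding attached: getEncoding() on this type returns a null
  // Attribute, which is what getSrcLayout() observes.
  auto plain = mlir::RankedTensorType::get({128}, f32);

  // Encoding attached via the optional third argument.
  mlir::Attribute someLayout = b.getStringAttr("layout"); // placeholder
  auto withLayout = mlir::RankedTensorType::get({128}, f32, someLayout);

  (void)plain;
  (void)withLayout;
}
```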

Collaborator:

Yes, in ttgir every tensor must have a layout.
