
[MetaSchedule] Fuse loops around shared to global store block in MultiLevelTilingTensorCore #13357

Merged 2 commits into apache:main on Nov 11, 2022

Conversation

masahi (Member) commented Nov 11, 2022

Currently, vectorization of the shared to global store in tensor core auto-tensorization is not done properly, since most candidate blocks have a T.where predicate, which disables vectorization.

The predicate is introduced after Split in the cooperative-fetch postproc: https://github.com/apache/tvm/blob/main/src/meta_schedule/postproc/rewrite_cooperative_fetch.cc#L159-L162
As the code says, this split is supposed to be applied to a fused loop. That is the case for cache read blocks, where AddReadReuse explicitly fuses the loops around them. But AddWriteReuseTensorCore doesn't fuse loops after the cache write: https://github.com/apache/tvm/blob/main/src/meta_schedule/schedule_rule/multi_level_tiling_tensor_core.cc#L260-L262.

So for the cache write block, we always try to split a single axis by large factors like [None, 4, 32, 2]. Unless the sampled factor for that axis happens to be large, we always get T.where in the shared to global copy block.
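
To illustrate the issue outside of MetaSchedule, here is a minimal TIR scheduling sketch. The copy block and its 16x64 extents are hypothetical, chosen only to mimic a store whose single innermost axis is smaller than the product of the sampled split factors:

```python
import tvm
from tvm.script import tir as T

# Hypothetical stand-in for a shared to global copy block.
@T.prim_func
def copy(A: T.Buffer((16, 64), "float32"), B: T.Buffer((16, 64), "float32")):
    for i, j in T.grid(16, 64):
        with T.block("store"):
            vi, vj = T.axis.remap("SS", [i, j])
            B[vi, vj] = A[vi, vj]

sch = tvm.tir.Schedule(copy)
i, j = sch.get_loops(sch.get_block("store"))

# Splitting only the innermost axis (extent 64) by factors whose product
# is 4 * 32 * 2 = 256 cannot divide it evenly, so the split introduces a
# T.where predicate on the block, which blocks vectorization.
sch.split(j, factors=[None, 4, 32, 2])
print(sch.mod.script())  # the "store" block now carries T.where(...)
```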

This PR adds the missing fusion. Now, all candidate samples have the shared to global copy block properly vectorized. But unfortunately, there was no perf improvement from this change after e2e tuning.
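
With the fusion in place, the same split divides the iteration space evenly and the innermost loop can be vectorized. A sketch under the same hypothetical shapes as above:

```python
import tvm
from tvm.script import tir as T

# Same hypothetical copy block as in the previous sketch.
@T.prim_func
def copy(A: T.Buffer((16, 64), "float32"), B: T.Buffer((16, 64), "float32")):
    for i, j in T.grid(16, 64):
        with T.block("store"):
            vi, vj = T.axis.remap("SS", [i, j])
            B[vi, vj] = A[vi, vj]

sch = tvm.tir.Schedule(copy)
i, j = sch.get_loops(sch.get_block("store"))

# Fusing first gives a single loop of extent 16 * 64 = 1024, which
# [None, 4, 32, 2] divides evenly, so no T.where is generated and the
# innermost loop of extent 2 can be vectorized.
fused = sch.fuse(i, j)
*_, vec = sch.split(fused, factors=[None, 4, 32, 2])
sch.vectorize(vec)
print(sch.mod.script())
```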

For quantized workloads, vectorization of the shared to global copy is disabled, since we would also end up vectorizing requantization-related math involving 64-bit arithmetic, and the generated code currently fails to compile.

@vinx13 @junrushao

tvm-bot (Collaborator) commented Nov 11, 2022

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

masahi merged commit 5364e5a into apache:main on Nov 11, 2022
xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 25, 2022
…tiLevelTilingTensorCore` (apache#13357)

* Fuse shared to global store loops in MultiLevelTilingTensorCore

* update test