
[MetaSchedule] Fix autoinline for single const consumer block #12668

Merged · 1 commit · Sep 1, 2022

Conversation

shingjan (Contributor) commented Sep 1, 2022

This PR fixes the CUDA auto-inline schedule rule when it processes a TIR block whose single consumer is a constant block.
Before this fix, the TIR added in the test would be auto-inlined when it shouldn't be, throwing the following error:

E           ScheduleError: An error occurred in the schedule primitive 'compute-inline'.
E           The IR with diagnostic is:
E           # from tvm.script import tir as T
E           @tvm.script.ir_module
E           class Module:
E               @T.prim_func
E               def main(T_full: T.Buffer[(1, 12, 4096), "int64"]) -> None:
E                   # function attr dict
E                   T.func_attr({"global_symbol": "main", "tir.noalias": True})
E                   # body
E                   # with T.block("root")
E                   for i0, i1, i2 in T.grid(1, 12, 4096):
E                       # tir.Block#0
E                       with T.block("T_full"):
E                       ^^^^^^^^^^^^^^^^^^^^^^^
E                           ax0, ax1, ax2 = T.axis.remap("SSS", [i0, i1, i2])
E                           T.reads()
E                           T.writes(T_full[ax0, ax1, ax2])
E                           T_full[ax0, ax1, ax2] = T.int64(0)
E               
E           Error message: The block tir.Block#0 is an output block
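The failure above comes from trying to `compute_inline` a block that writes a constant into an output buffer, which `compute_inline` rejects. As a rough illustration only (a pure-Python sketch, not TVM's actual implementation; the `Block` class and `can_auto_inline` helper here are hypothetical), the extra guard the fix introduces can be thought of as:

```python
# Hypothetical sketch of the guard this PR adds to the auto-inline rule:
# a block that reads no buffers (it only writes a constant, like T_full
# above) and whose write target is a function output must not be
# inlined, because compute_inline rejects output blocks.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Block:
    name: str
    reads: List[str] = field(default_factory=list)   # buffers read
    writes: List[str] = field(default_factory=list)  # buffers written
    is_output: bool = False  # writes to a buffer in the function signature

def can_auto_inline(block: Block) -> bool:
    """Return True only when inlining `block` would be legal."""
    if block.is_output:   # output blocks can never be compute-inlined
        return False
    if not block.reads:   # pure-constant producer: nothing to fold inward
        return False
    return True

# The failing case from the error above: T_full writes T.int64(0) into
# an output buffer and reads nothing, so it must be left alone.
t_full = Block("T_full", reads=[], writes=["T_full"], is_output=True)
elementwise = Block("T_add", reads=["A", "B"], writes=["C"])

print(can_auto_inline(t_full))       # False: skipped by the fixed rule
print(can_auto_inline(elementwise))  # True: a regular inline candidate
```

In the real schedule rule the corresponding information comes from the block's read/write regions in TIR; the sketch only mirrors the shape of the decision.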

Models impacted by this fix: vision_maskrcnn & hf_Reformer

cc: @zxybazh @junrushao @vinx13

cc @Hzfengsy @junrushao1994

zxybazh (Member) left a comment

LGTM. Thanks for sending in the fix.

@Hzfengsy Hzfengsy merged commit 32f9a5f into apache:main Sep 1, 2022
shingjan (Contributor, Author) commented Sep 1, 2022

Thanks @zxybazh @Hzfengsy !

xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 25, 2022
3 participants