
[OPENCL] Always use convert_T for type conversion #14972

Merged: 2 commits into apache:main on Jun 1, 2023

Conversation

@tqchen (Member) commented May 29, 2023

This PR changes the Cast in OpenCL to always rely on convert_T, to get closer to the spec and be more reliable.
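
To make the effect concrete, here is a minimal sketch (not taken from the PR; it assumes a TVM build with the OpenCL codegen enabled and uses the te schedule API current at the time) for inspecting how a cast is emitted in the generated OpenCL source. With this change, a float32-to-int8 cast is expected to show up as a convert_char(...) call rather than a C-style cast.

import tvm
from tvm import te

# Hypothetical inspection script, not part of the PR: build a tiny cast kernel
# and print the generated OpenCL source.
n = te.var("n")
A = te.placeholder((n,), dtype="float32", name="A")
B = te.compute((n,), lambda i: A[i].astype("int8"), name="B")

s = te.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

mod = tvm.build(s, [A, B], target="opencl")
# After this PR the cast should appear as convert_char(...) in the kernel body.
print(mod.imported_modules[0].get_source())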

@tvm-bot (Collaborator) commented May 29, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@@ -150,50 +158,38 @@ def check_type_casting(ctx, n, dtype):
tvm.tir.all(
    *[
        i // block_size == tvm.tir.const(3, "int32"),
        i % block_size == tvm.tir.const(3, "int32"),
Contributor:

This test covered a bug in TVM that has since been fixed. Why did you change the compute function? Can we keep it as it was?

Member (Author):

I also migrated the original test to the latest TensorIR so the test is future-proof.

Interestingly, TensorIR has become smarter :), so the original condition gets simplified and becomes i == 15 here. So I modified the test to ensure we still cover the behavior we intend to cover around type casting.

Member (Author):

Unfortunately, relying on a string pattern was not very reliable given the set of possible transformations here, so we need to adapt the test case as we go. If there are new test cases in TVMScript, I think we can also put them here.

@echuraev (Contributor) commented May 29, 2023

"so the original condition get simplified and becomes i == 15 here"

Do you mean that this condition:

tvm.tir.all(
    *[
        i // block_size == tvm.tir.const(3, "int32"),
        i % block_size == tvm.tir.const(3, "int32"),
    ]
),

Was transformed to i == 15?

Contributor:

I took a look at the git history, and the original problem that this test should cover was that the left and right parts of a condition should have the same data type. You can see a detailed description in #11021; this test was then added in #11038.

By decreasing the number of conditions in tvm.tir.all, you prevent the compiler from generating code such as lcond && rcond. Am I right? In that case, the original problem won't be tested. Sorry if I missed something.
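
As an aside, an illustrative sketch (not code from this PR): tvm.tir.all folds its arguments into a nested logical-and expression, which is what eventually lowers to the lcond && rcond form in the generated kernel, so a single-condition predicate would indeed not exercise that path.

import tvm
from tvm import tir

# Illustrative only: tvm.tir.all chains its arguments into And nodes,
# which codegen emits as `cond1 && cond2 && ...`.
x = tir.Var("x", "int32")
y = tir.Var("y", "int32")
cond = tvm.tir.all(x > 0, y > 0, x < y)
print(cond)  # nested logical-and of the three comparisons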

Member (Author):

Sorry, I wasn't being clear. Right:

tvm.tir.all(
    *[
        i // block_size == tvm.tir.const(3, "int32"),
        i % block_size == tvm.tir.const(3, "int32"),
    ]
)

In the TensorIR schedule, the condition above gets transformed to i == 15, which indeed makes sense.

So we might need a different condition to test the chains.
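
For readers who want to poke at this outside the schedule, a rough sketch (the use of tvm.arith.Analyzer and the bound on i are my own illustration, assuming block_size == 4 and 0 <= i < 16 as in the test; the standalone analyzer is not guaranteed to reproduce exactly the rewrite the TensorIR schedule applied):

import tvm
from tvm import tir

i = tir.Var("i", "int32")
block_size = 4
cond = tvm.tir.all(
    i // block_size == tvm.tir.const(3, "int32"),
    i % block_size == tvm.tir.const(3, "int32"),
)

analyzer = tvm.arith.Analyzer()
analyzer.bind(i, tvm.ir.Range(tvm.tir.const(0, "int32"), tvm.tir.const(16, "int32")))
# For 0 <= i < 16, i // 4 == 3 and i % 4 == 3 hold exactly when i == 4 * 3 + 3 == 15,
# which is why a single equality can replace the conjunction.
print(analyzer.simplify(cond))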

Member (Author):

OK, updated to a case where the simplification won't happen.

@echuraev (Contributor) left a comment

LGTM! Thank you :)

@tqchen (Member, Author) commented May 31, 2023

@tvm-bot rerun

1 similar comment
@echuraev (Contributor) commented Jun 1, 2023

@tvm-bot rerun

@MasterJH5574 merged commit 7f02606 into apache:main on Jun 1, 2023.
@tqchen deleted the opencl branch on February 15, 2025 at 14:19.