
[TorchToLinalg] address a dtype mismatch in aten.multinomial lowering #3630

Merged · 6 commits · Aug 20, 2024

Conversation

@zjgarvey (Collaborator) commented Aug 13, 2024

Resolves #3628
Unblocks a compile failure for one of the MIGraphX models (AgentModel).
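As an illustration of the issue class being fixed (a sketch using NumPy in place of the actual torch-mlir lowering code, which is not shown here): eager frameworks silently promote when f32 and f64 operands are mixed, whereas a linalg-on-tensors lowering must emit arithmetic ops on matching element types, so an f32 cumulative-probability buffer combined with an f64 random sample either fails to verify or changes precision.

```python
import numpy as np

# Hypothetical multinomial-style computation: a cumulative distribution
# built in float32, compared against uniform samples built in float64.
cdf = np.cumsum(np.full(4, 0.25, dtype=np.float32))   # f32 CDF: [0.25, 0.5, 0.75, 1.0]
samples = np.full(4, 0.6, dtype=np.float64)           # f64 uniform samples

# Mixing the two promotes the result to float64 behind the scenes --
# exactly the kind of implicit dtype mismatch the lowering must avoid
# by keeping both operands at one agreed element type.
diff = cdf - samples
print(diff.dtype)  # -> float64
```

The fix in a lowering is to cast one side explicitly so both operands share a dtype before the arithmetic op is emitted.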

@zjgarvey (Collaborator, Author) commented:

Huh, I don't even know what to say about a torchscript numerics mismatch...

double causes onnx numerics failure, float32 causes linalg numerics failure
@zjgarvey (Collaborator, Author) commented Aug 15, 2024

This is frustrating. Perhaps we should be using the TestUtils-generated sample args for the torchscript ONNX export? Neither zeros nor ones works uniformly for all tests.

@zjgarvey force-pushed the multinomial_patch branch 2 times, most recently from 7fc400a to 102a01c on August 15, 2024 at 20:17.
@pashu123 (Member) left a comment:

LGTM!

@zjgarvey zjgarvey merged commit f66908f into llvm:main Aug 20, 2024
3 checks passed
Successfully merging this pull request may close these issues.

aten.Multinomial tries to add f32 and f64 together