[Relay][ONNX] Batch_matmul to dense optimization #8440
Conversation
Otherwise LGTM. Do we need a test case for this?
Force-pushed from b5e4207 to e83d9b9 (compare)
@comaniac Thanks for the valuable comments. I found that the PyTorch implementation (https://github.com/apache/tvm/blob/main/python/tvm/relay/frontend/pytorch.py#L1624) cannot handle dynamic shapes, so I added this optimization based on the ONNX implementation. For some reason, the weight matrix of nn.dense must be static for TVM codegen to work correctly, though using libraries like MKL works just fine. I created issue #8441 for this.
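For illustration, here is a minimal sketch of the shape situation described above (variable names and shapes are my own assumptions, not the PR's frontend code): the activation side can carry a dynamic batch dimension and still be flattened into a 2-D dense input, while the nn.dense weight stays fully static, which is the condition issue #8441 is about.

```python
from tvm import relay

# Hedged sketch only; shapes are illustrative assumptions.
# The activation has a dynamic (Any) leading dimension ...
a = relay.var("a", shape=(relay.Any(), 128, 768), dtype="float32")
# ... while the dense weight is fully static.
w = relay.var("w", shape=(768, 768), dtype="float32")

# Collapse all leading dims so one dense call covers the whole batch;
# -1 lets Relay infer the (dynamic) row count.
a2d = relay.reshape(a, newshape=(-1, 768))
y2d = relay.nn.dense(a2d, w)   # nn.dense computes a2d @ w.T
y = relay.reshape(y2d, newshape=(-1, 128, 768))
```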
LGTM. Just a nit.
LGTM
Thanks @ymwangg
* [ONNX] Add batch_matmul to dense optimization
* Add extra check to avoid unnecessary reshape

Co-authored-by: Ubuntu <ubuntu@ip-172-31-14-16.us-west-2.compute.internal>
This PR ports the PyTorch frontend's matmul implementation, which applies the batch_matmul-to-dense optimization. This significantly improves the performance of some matmul ops in BERT models, such as [4, 128, 768] x [768, 768], when using cuBLAS or TensorRT.
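As a hedged sketch of the rewrite itself (using the BERT-style shapes above; names are illustrative and this is not the PR's actual frontend code), the naive lowering broadcasts the shared 2-D weight into a batched operand for nn.batch_matmul, whereas the optimized form collapses the batch dimensions into the rows of a single nn.dense, which cuBLAS/TensorRT can execute as one large GEMM:

```python
from tvm import relay

a = relay.var("a", shape=(4, 128, 768), dtype="float32")
b = relay.var("b", shape=(768, 768), dtype="float32")

# Naive lowering: nn.batch_matmul(x, y) computes x[i] @ y[i].T, so the
# shared weight is transposed and broadcast to a per-batch tensor first.
b_t = relay.transpose(b, axes=(1, 0))
naive = relay.nn.batch_matmul(
    a, relay.broadcast_to(relay.expand_dims(b_t, axis=0), (4, 768, 768)))

# Optimized lowering: flatten the batch dims, run one dense
# (a2d @ b_t.T == a2d @ b), then restore the original shape.
a2d = relay.reshape(a, newshape=(-1, 768))
opt = relay.reshape(relay.nn.dense(a2d, b_t), newshape=(4, 128, 768))
```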