[Relay/TOPI][Op] Add batch_matmul in relay and TOPI #2561
Conversation
LGTM
topi/python/topi/x86/nn.py (outdated)

@generic.schedule_batch_dot.register(["cpu"])
def schedule_batch_dot(outs):
    """Schedule for softmax
Typo: the docstring says "softmax".
Given the description of the computation, batch_matmul might be a more general name?

@yinghai Yes, I agree. I'll update it in the new commit.
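For context, a minimal sketch of what the renamed batch_matmul compute could look like in TOPI. This is illustrative only and written against the later tvm.te API; the actual implementation and naming are in the PR diff. It follows the dense convention of storing the second operand transposed, i.e. x has shape (batch, M, K), y has shape (batch, N, K), and the output has shape (batch, M, N).

import tvm
from tvm import te

def batch_matmul_sketch(x, y):
    """Illustrative batched matmul: (batch, M, K) x (batch, N, K) -> (batch, M, N)."""
    batch, M, K = x.shape
    _, N, _ = y.shape
    k = te.reduce_axis((0, K), name="k")
    return te.compute(
        (batch, M, N),
        lambda b, i, j: te.sum(x[b, i, k] * y[b, j, k], axis=k),
        name="batch_matmul",
    )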
LGTM
def test_batch_matmul():
    verify_batch_matmul(1, 16, 16, 32)
    verify_batch_matmul(5, 16, 16, 32)
    verify_batch_matmul(5, 16, 20, 32)
Maybe more test cases with batch size > 16?
Added a test case with a larger batch size.
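For illustration, the added larger-batch case might look like the following; the exact parameters used are in the PR diff, so treat these numbers as an assumption.

    verify_batch_matmul(30, 16, 20, 32)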
@ZihengJiang test_quantize_pass failed. Could you set rtol to a higher threshold?

@icemelon9 can you rebase and try to get it in?

Sure. But I'm still waiting on @ZihengJiang to fix the quantize pass test case.

Thanks @icemelon9 @yinghai @ZihengJiang, this is now merged.
* Add batch_dot and cpu schedule
* Add relay support for batch_dot
* Rename batch_dot to batch_matmul
* nits
* Add missing file
* Put batch_matmul and dense x86 schedule in separate files
* Fix pylint
* Remove unused import
* Add cuda schedule for batch_matmul
* Add test case with larger batch size
* Add batch_matmul in api doc
* Fix quantize pass rounding error
* Fix pylint and minor change
* bug fix
In some cases relay.build hangs. More info: https://discuss.tvm.ai/t/relay-build-hangs-for-5d-max-pool3d-reshape-matmul/5398
This commit aims to add the batch_dot op to both TOPI and Relay.
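A hedged usage sketch of the op introduced here, written against a later Relay/graph_executor API; the shapes, target, and runtime plumbing below are assumptions for illustration, not taken from the PR.

import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# batch_matmul multiplies x (b, m, k) with y (b, n, k) to produce (b, m, n).
x = relay.var("x", shape=(5, 16, 32), dtype="float32")
y = relay.var("y", shape=(5, 20, 32), dtype="float32")
out = relay.nn.batch_matmul(x, y)          # result shape: (5, 16, 20)
mod = tvm.IRModule.from_expr(relay.Function([x, y], out))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

dev = tvm.cpu()
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("x", np.random.rand(5, 16, 32).astype("float32"))
rt.set_input("y", np.random.rand(5, 20, 32).astype("float32"))
rt.run()
print(rt.get_output(0).shape)              # (5, 16, 20)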