
[Relay/TOPI][Op] Add batch_matmul in relay and TOPI #2561

Merged 17 commits into apache:master on Mar 1, 2019

Conversation

icemelon
Member

@icemelon icemelon commented Feb 4, 2019

This PR aims to add the batch_dot op in both TOPI and Relay.

  • Add batch_dot op in Relay
  • Add x86 schedule in TOPI
  • Add GPU schedule in TOPI
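
The op added here follows TVM's batch_matmul convention, in which the second operand is stored transposed: for x of shape (batch, M, K) and y of shape (batch, N, K), the output has shape (batch, M, N). A minimal NumPy sketch of that semantics (batch_matmul_ref is a hypothetical reference helper for illustration, not part of the PR):

```python
import numpy as np

def batch_matmul_ref(x, y):
    """NumPy reference for a batched matmul with the second operand
    stored transposed: x is (batch, M, K), y is (batch, N, K),
    and the result is (batch, M, N)."""
    # out[b, i, j] = sum_k x[b, i, k] * y[b, j, k]
    return np.einsum("bik,bjk->bij", x, y)

x = np.random.rand(5, 16, 32).astype("float32")
y = np.random.rand(5, 20, 32).astype("float32")
out = batch_matmul_ref(x, y)
assert out.shape == (5, 16, 20)
```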

Contributor

@yinghai yinghai left a comment


LGTM


@generic.schedule_batch_dot.register(["cpu"])
def schedule_batch_dot(outs):
    """Schedule for softmax
Contributor

typo: docstring says softmax

@yinghai
Contributor

yinghai commented Feb 5, 2019

Given the description of the computation, batch_matmul might be a more general name?

@icemelon
Member Author

icemelon commented Feb 5, 2019

@yinghai Yes, I agree. I'll update it in the new commit.

@icemelon icemelon changed the title [WIP][Relay/TOPI][Op] Add batch_dot in relay and TOPI [Relay/TOPI][Op] Add batch_matmul in relay and TOPI Feb 13, 2019
@icemelon
Member Author

@yinghai @Laurawly @Huyuwei Could you help review this PR? Thanks.

Contributor

@Laurawly Laurawly left a comment


LGTM

def test_batch_matmul():
    verify_batch_matmul(1, 16, 16, 32)
    verify_batch_matmul(5, 16, 16, 32)
    verify_batch_matmul(5, 16, 20, 32)
Contributor

Maybe more test cases with batch size > 16?

Member Author

Added test case for larger batch size.

@Laurawly
Contributor

@yzhliu @Huyuwei Could you add your reviews or approve this PR?

@icemelon
Member Author

@ZihengJiang test_quantize_pass failed. Could you set rtol to a higher threshold?
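
The tolerance issue raised here is typical of quantization tests: round-tripping values through int8 perturbs each element by up to half a quantization step, so a comparison with a very tight rtol can fail even when the pass is correct. A hedged sketch with made-up values (not the actual test data or scale from the quantize pass):

```python
import numpy as np

# Illustrative float32 values and an assumed quantization scale.
expected = np.array([0.101, 0.257, 0.503], dtype="float32")
scale = 0.004

# Quantize to int8 and dequantize back: each element picks up
# a rounding error of up to half a step (scale / 2).
quantized = np.round(expected / scale).astype("int8")
dequantized = quantized.astype("float32") * scale

# A very tight rtol rejects the rounding error, while a looser
# rtol on the order of the relative step size accepts it.
tight_ok = np.allclose(expected, dequantized, rtol=1e-7)
loose_ok = np.allclose(expected, dequantized, rtol=2e-2)
assert not tight_ok and loose_ok
```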

@Laurawly Laurawly added the status: need update need update based on feedbacks label Feb 25, 2019
@tqchen
Member

tqchen commented Feb 27, 2019

@icemelon9 can you try rebase and try to get it in?

@icemelon
Member Author

Sure. But I'm still waiting on @ZihengJiang to fix the quantize pass test case.

@Laurawly Laurawly merged commit 8459006 into apache:master Mar 1, 2019
@Laurawly
Contributor

Laurawly commented Mar 1, 2019

Thanks @icemelon9 @yinghai @ZihengJiang , this is now merged.

@yzhliu yzhliu mentioned this pull request Mar 2, 2019
28 tasks
@icemelon icemelon deleted the batchdot branch March 4, 2019 05:13
bwasti pushed a commit to facebookexperimental/tvm that referenced this pull request Mar 6, 2019
* Add batch_dot and cpu schedule

* Add relay support for batch_dot

* Rename batch_dot to batch_matmul

* nits

* Add missing file

* Put batch_matmul and dense x86 schedule in separate files

* Fix pylint

* Remove unused import

* Add cuda schedule for batch_matmul

* Add test case with larger batch size

* Add batch_matmul in api doc

* Fix quantize pass rounding error

* Fix pylint and minor change

* bug fix
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 9, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 12, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 12, 2019
@apivovarov
Contributor

apivovarov commented Jan 16, 2020

In some cases relay.build hangs. The issue is caused by line 194 of topi/python/topi/x86/dense.py.

More info https://discuss.tvm.ai/t/relay-build-hangs-for-5d-max-pool3d-reshape-matmul/5398
