
[ONNX] Add Einsum converter #8985

Merged: 7 commits into apache:main on Sep 15, 2021
Conversation

anwang2009 (Contributor)

As title

cc @mbrookhart @AndrewZhaoLuo

Thanks @AndrewZhaoLuo for maintaining https://tvm.apache.org/docs/dev/relay_add_op.html; this doc was super helpful!
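
For context on what such a converter involves: an ONNX Einsum node carries the einsum equation as a string attribute, and the converter's job is to hand that equation plus the node's inputs to the Relay einsum op. A minimal sketch in the style of TVM's ONNX frontend (OnnxOpConverter is the frontend's existing base class; the body here is an illustration, not necessarily the exact code merged in this PR):

```python
# Illustrative sketch only; the converter merged in this PR may differ in detail.
from tvm.relay import op as _op
from tvm.relay.frontend.onnx import OnnxOpConverter


class Einsum(OnnxOpConverter):
    """Operator converter for ONNX Einsum."""

    @classmethod
    def _impl_v12(cls, inputs, attr, params):
        # ONNX stores the equation (e.g. "ij,jk->ik") as a bytes attribute.
        equation = attr["equation"].decode("utf-8")
        # Relay's einsum takes the list of input tensors plus the equation string.
        return _op.einsum(inputs, equation)
```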

@junrushao (Member)

This is really nice! CC @ZihengJiang @hzfan

@AndrewZhaoLuo (Contributor) left a comment:

Thanks for the PR! I'm glad there was an already-written einsum TOPI! Some initial comments:

Review threads (outdated, resolved):
include/tvm/relay/attrs/transform.h
python/tvm/relay/op/_transform.py
@AndrewZhaoLuo (Contributor) left a comment:

LGTM, just move the files like masa says

"""einsum cuda strategy"""
strategy = _op.OpStrategy()
# TODO: Add cuda-specific op implementation for einsum
strategy.add_implementation(
@masahi (Member) commented on Sep 15, 2021:

This would lead to an error if you try to run it on CUDA, right? It is better to remove this strategy until we have a CUDA schedule ready. The error message would be clearer than the one from incorrect scheduling.

anwang2009 (Contributor, Author) replied:

The test_onnx_nodes tests run on CUDA and have passed. I will remove it if you think removal is necessary despite that?

@masahi (Member) replied:

Oh interesting. I didn't know that topi.generic.schedule_extern somehow generates a valid schedule.
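
To make the exchange concrete: the reviewed lines are the CUDA registration of the einsum strategy, which reuses the generic einsum compute and falls back to topi.generic.schedule_extern instead of a CUDA-specific schedule. Filled out along the lines of TVM's other strategy registrations, it would look roughly like the sketch below (einsum_strategy, wrap_compute_einsum, and wrap_topi_schedule are assumed helper names following that pattern, not quoted from this diff):

```python
# Sketch of the CUDA einsum strategy discussed above; helper names are
# assumptions that follow TVM's usual strategy-registration pattern.
from tvm import topi
from tvm.relay.op import op as _op
from tvm.relay.op.strategy.generic import (
    einsum_strategy,      # generic strategy being overridden for cuda/gpu targets
    wrap_compute_einsum,  # adapts topi.einsum so it receives attrs.equation
    wrap_topi_schedule,   # adapts a TOPI schedule to the strategy interface
)


@einsum_strategy.register(["cuda", "gpu"])
def einsum_strategy_cuda(attrs, inputs, out_type, target):
    """einsum cuda strategy"""
    strategy = _op.OpStrategy()
    # TODO: Add cuda-specific op implementation for einsum
    strategy.add_implementation(
        wrap_compute_einsum(topi.einsum),
        # Per the exchange above, the generic extern schedule happens to
        # produce a valid (if unoptimized) schedule on CUDA.
        wrap_topi_schedule(topi.generic.schedule_extern),
        name="einsum.cuda",
    )
    return strategy
```

masahi's point above is that removing this registration until a CUDA-specific schedule exists would surface a clearer error than one caused by an incorrect schedule.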

@masahi merged commit e44f6c0 into apache:main on Sep 15, 2021

AndrewZhaoLuo added a commit to AndrewZhaoLuo/tvm that referenced this pull request Sep 16, 2021
* main: (102 commits)
  Implementation of relay_to_tir target hook (apache#8423)
  [Onnx] Fix NLL Loss tests (apache#8971)
  [Bugfix] Fix other div zero errors also in rewrite_simplify (apache#8983)
  [ONNX] enable the onnx tests after PR apache#8274 merged (apache#9019)
  [Hexagon] Disable `thread_local` on Hexagon (apache#9025)
  [Hexagon] Allow undefined symbols in libtvm_runtime.so on Hexagon (apache#9024)
  [Onnx] Add momentum (apache#9000)
  fix (apache#9021)
  [Community] @AndrewZhaoLuo -> Reviewer (apache#9020)
  [Hexagon] Implement model launcher (apache#8986)
  [Relay][Pass] Add ExtractOperators pass (apache#8996)
  [BYOC][TensorRT] Add TensorRT own int8 calibration support to TensorRT BYOC integration (apache#8808)
  [ONNX] Add Einsum converter (apache#8985)
  Add standalone_crt/ to be part of the wheel package, when available. (apache#9005)
  [Relay] Remove memory planing from LowerTEPass  (apache#8974)
  [Hexagon] Treat floats as float32 when passing args to offloaded kernels (apache#9010)
  [Runtime] Pipeline Executor Initial patch. (apache#8702)
  [Hexagon] `llvm-options` attribute is an array of strings (apache#9011)
  disable cuda int8 schedule for non-cuda gpu target (apache#9014)
  [Torch] Add an option to make imported models compatible with the Relay text parser (apache#9015)
  ...
ylc pushed a commit to ylc/tvm that referenced this pull request Sep 29, 2021
* einsum

* address review

* move files around

* use generic topi op

* TODO comment

* jostle ci

* jostle ci
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 13, 2022 (same commit message as above)