[RELAY][OP] Relay Op Sprint (Part 2) #2051
Comments
dense, softmax, log_softmax, concatenate, dropout
argmax, argmin, take
I noticed that the layout_transform op is missing from the list above and also from #1799. This is important for CPU inference. Is anybody working on it? @yzhliu @kevinthesun
@kevinthesun @masahi Yes, I'm working on it. ref https://discuss.tvm.ai/t/datalayout-structure/80 BTW, sorry for messing up the edit history; my edit has been reverted. @jroesch
Thanks to everyone's efforts, most of the ops are now supported. Let us open new threads to catch the remaining outliers when necessary.
This is follow-up work to #1799. Now that we have merged an initial version of the Relay evaluator and runtime system in #1954, it is possible to use Relay for end-to-end inference and optimization.
To do so, we need to add attributes to the existing operators so that they can be correctly lowered to TVM.
General Steps of Porting
To enable lowering of an operator, we need to add both a compute primitive and a schedule for it (see the sketch below).
#2050 shows an example for all of the ops in tensor.py.
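For reference, the registration pattern looks roughly like the following. This is a minimal sketch, assuming the `register_compute` and `register_schedule` helpers exposed by `tvm.relay.op` in the current codebase; the exact function signatures are illustrative and may differ from what #2050 lands.

```python
# Minimal sketch: registering a compute primitive and a schedule for an
# elementwise Relay op. Assumes tvm.relay.op registration helpers; the
# hook signatures shown here are illustrative, not authoritative.
import tvm
import topi
from tvm.relay.op import register_compute, register_schedule

def add_compute(attrs, inputs, output_type, target):
    """Lower Relay's add to TOPI's broadcasting add."""
    assert len(inputs) == 2
    return [topi.add(inputs[0], inputs[1])]

def add_schedule(attrs, outputs, target):
    """Produce a default schedule for the computed output."""
    assert len(outputs) == 1
    return tvm.create_schedule(outputs[0].op)

register_compute("add", add_compute)
register_schedule("add", add_schedule)
```

Once both hooks are registered, the compiler can look them up when lowering a call to that operator into TVM.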
The List
Level 1: Common Basic Ops
Level 2: Convolutions
Level 3: Additional Math And Transform Operators
Level 4: All broadcast and reduction functions that are not in previous levels
Level 5: Vision Operators