[RELAY][OP] Relay Op Sprint (Part 2) #2051

Closed · 60 of 66 tasks · 6 comments

jroesch (Member) commented Nov 1, 2018

This is follow-up work to #1799. Now that we have merged an initial version of the Relay evaluator and runtime system in #1954, it is possible to use Relay for end-to-end inference and optimization.

To do so, we need to add attributes to the existing operators so that they can be correctly lowered to TVM.
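As a quick illustration of what the evaluator enables, here is a minimal sketch of end-to-end evaluation of a Relay program. It uses `relay.create_executor`, the executor entry point as it later stabilized; the exact API at the time of this issue may have differed.

```python
import numpy as np
from tvm import relay

# A tiny Relay program: f(x, y) = x + y.
x = relay.var("x", shape=(2, 2), dtype="float32")
y = relay.var("y", shape=(2, 2), dtype="float32")
func = relay.Function([x, y], relay.add(x, y))

# Run it through the interpreter-backed executor.
a = np.ones((2, 2), dtype="float32")
b = np.full((2, 2), 2.0, dtype="float32")
out = relay.create_executor().evaluate(func)(a, b)
print(out)  # [[3. 3.] [3. 3.]]
```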

General Steps of Porting

To enable lowering of an operator, we need to register both a compute function and a schedule for it.

#2050 shows an example for all of the ops in tensor.py
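For a concrete picture, the registration for a simple elementwise op might look roughly like the following. The helper names `register_compute`, `register_schedule`, and `schedule_injective` follow the Relay op API as it stood around #2050; treat this as a sketch and see that PR for the actual merged code.

```python
import topi
from tvm.relay.op import register_compute, register_schedule, schedule_injective

# Compute: map the Relay call onto the corresponding TOPI expression.
def exp_compute(attrs, inputs, out_type, target):
    return [topi.exp(inputs[0])]

register_compute("exp", exp_compute)

# Schedule: simple elementwise ops can reuse the generic injective schedule.
register_schedule("exp", schedule_injective)
```

Ops with nontrivial layouts (e.g. nn.conv2d) additionally need op-specific TOPI schedules rather than the injective fallback.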

The List

Level 1: Common Basic Ops

  • nn.dense
  • nn.relu
  • tanh
  • sigmoid
  • exp
  • log
  • sqrt
  • add
  • subtract
  • multiply
  • divide
  • mod
  • nn.batch_flatten
  • concatenate
  • nn.softmax
  • nn.log_softmax
  • nn.batch_norm
  • nn.dropout
  • expand_dims

Level 2: Convolutions

  • nn.conv2d
  • nn.conv2d_transpose
  • nn.max_pool2d
  • nn.avg_pool2d
  • nn.global_max_pool2d
  • nn.global_avg_pool2d
  • nn.pad
  • nn.lrn

Level 3: Additional Math And Transform Operators

  • reshape
  • copy
  • negative
  • floor
  • ceil
  • round
  • trunc
  • clip
  • abs
  • leaky_relu
  • split
  • squeeze
  • take
  • full
  • zeros
  • ones
  • transpose
  • zeros_like
  • ones_like

Level 4: All broadcast and reduction functions not covered in previous levels

  • pow
  • less
  • greater
  • less_equal
  • greater_equal
  • right_shift
  • left_shift
  • maximum
  • minimum
  • sum
  • max
  • prod
  • argmax
  • argmin
  • strided_slice
  • broadcast_to
  • where

Level 5: Vision Operators

  • image.resize
  • vision.multibox_prior
  • vision.nms
MarisaKirisame (Contributor) commented:

dense, softmax, log_softmax, concatenate, dropout

MarisaKirisame (Contributor) commented:

argmax, argmin, take

masahi (Member) commented Nov 14, 2018

I noticed that the layout_transform op is missing from the list above and also from #1799. This is important for CPU inference. Is anybody working on this? @yzhliu @kevinthesun

kevinthesun (Contributor) commented:

@masahi layout_transform needs to be refactored, since the current implementation lives in nnvm rather than topi. It is also important for the graph tuner. @yzhliu do we have any plan?

yzhliu (Member) commented Nov 17, 2018

@kevinthesun @masahi Yes, I'm working on it. Ref: https://discuss.tvm.ai/t/datalayout-structure/80

BTW, sorry for messing up the edit history; my edit has been reverted. @jroesch

tqchen (Member) commented Dec 14, 2018

Thanks to everyone's efforts, most of the ops are now supported. Let us open new threads to catch the remaining outliers as necessary.
