[RELAY][OP] Relay Operator Sprint #1799
Comments
I have 'implemented' some elementwise functions which I need for AD (negative, multiplication, division).
I got through the resize operator to start with (will PR once #1798 is merged) and am proceeding with the transforms.
I strongly agree with the NumPy consistency. A good example could be that TensorFlow's API uses …
I can work on some shape-related APIs.
One note about int32 vs int64 when constructing constants was raised by @junrushao1994 and @srkreddy1238. This is an issue we should think about now. int32 will likely cause regressions on large arrays, which would need to be fixed. I think we should prefer int64 for constants when possible, and let the compiler automatically detect and downgrade to int32. A temporary workaround is to always keep the inferred shape type consistent with the input shape type, so we can make the switch in one place later.
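A minimal sketch of that workaround, assuming the `tvm.const` constructor of this era; the `shape_const` helper is hypothetical, not actual code from the repo:

```python
import tvm

# Hypothetical helper: keep the dtype policy for shape constants in one
# place, so the eventual int32 -> int64 switch touches only this function.
def shape_const(value, dtype="int64"):
    # Preferring int64 is the policy suggested above; a caller could pass
    # the input shape's dtype instead to stay consistent with it.
    return tvm.const(value, dtype)

c = shape_const(1024)
print(c.dtype)  # int64
```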
Another thing I am concerned about is user friendliness. First, examples provided in the Python API docs should be at least runnable by copy-pasting, like PyTorch's (https://pytorch.org/docs/stable/tensors.html) or NumPy's (https://docs.scipy.org/doc/numpy/reference/generated/numpy.expand_dims.html). Second, the Python API docs should be self-contained, at least those designed for DL practitioners who may not take a good look at the C++ code. It does not seem to be a big deal for now, but we should put more effort into it in the future.
+1 for API docs friendliness; I would recommend we do it now rather than later. Maybe I set a bad leading example in the conv2d docs, as it was pretty minimal; I will send a PR to update that. Let us make sure the new ops are well documented with examples, especially the non-trivial ones.
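For illustration, here is the kind of copy-paste-runnable example being asked for, using `expand_dims` (one of the ops in this sprint); the shape shown is only an assumption for the example:

```python
from tvm import relay

# The whole snippet should run as-is when pasted from the docs.
x = relay.var("x", shape=(2, 2), dtype="float32")
y = relay.expand_dims(x, axis=0)
# After type inference, y has shape (1, 2, 2).
```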
An Expr like the one below is not getting simplified!
Any idea?
The eager CSE is done among integer expressions only so far. For floating point, we still need to call simplification explicitly, or use as_const_int to get the value out and simplify explicitly.
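A small sketch of that distinction, assuming the `tvm.ir_pass.Simplify` entry point from this period of the codebase; the expressions are illustrative:

```python
import tvm

n = tvm.var("n")                   # int32 by default
int_expr = n * 1 + 0               # integer exprs are eagerly simplified

f = tvm.var("f", dtype="float32")
float_expr = f * 1.0               # float exprs are not
print(tvm.ir_pass.Simplify(float_expr))  # explicit simplification
```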
I am going to grab …
@junrushao1994 transpose should be in the list; sorry, the list was not complete.
I am taking/had taken multiply/divide/mod/relu/tanh/sigmoid/negative.
To keep all of us on the same page, #1813 covers …
OK, I am going to take …
@srkreddy1238 you forgot to mention that you have done …
take …
take …
I need squeeze for AD with broadcast (right now it assumes no broadcast). I will take it.
I had done zeros_like and ones_like. I think I will take zeros and ones too, for symmetry.
attempting maximum
@tqchen Sounds cool!
You can do this with mypy types; see #1781 for an example. We could add type annotations to the operators, which would provide the sanity checks you want. Mypy is a static analyzer, so in order to get its benefits you need to run it separately or have it integrated into your IDE.
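A hedged sketch of what such annotations could look like; the `Expr` stand-in and the signature below are placeholders for illustration, not the actual relay wrappers:

```python
from typing import Tuple

class Expr:
    """Stand-in for tvm.relay.Expr (illustration only)."""

def conv2d(data: Expr, weight: Expr,
           strides: Tuple[int, int] = (1, 1),
           padding: Tuple[int, int] = (0, 0)) -> Expr:
    """mypy, run separately or in an IDE, flags calls with wrong arg types."""
    ...
```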
taking conv2d_transpose
@tqchen how do we handle multiple outputs? I was trying …
Multiple outputs are needed for split too.
@siju-samuel @srkreddy1238 multiple outputs are possible only by wrapping all of them in a tuple type; see the sketch below.
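A minimal sketch of that tuple-wrapping pattern, using `relay.Tuple` and `relay.TupleGetItem`; the variables and shapes are illustrative:

```python
from tvm import relay

x = relay.var("x", shape=(4, 8), dtype="float32")
y = relay.var("y", shape=(4, 8), dtype="float32")

tup = relay.Tuple([x, y])           # one tuple-typed expression for both outputs
first = relay.TupleGetItem(tup, 0)  # project out a single output
func = relay.Function([x, y], tup)  # a function "returning" multiple outputs
```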
edit: Sorry, @siju-samuel, I didn't see your comment and didn't mean to snipe you with those! Tell me if you're still trying those, or else I could finish my own attempts. I don't mind either way.
I started with the reduce ops; you can go ahead with batch_norm & dropout.
broadcast_to, collapse_sum, broadcast_to_like, collapse_sum_like
attempting where
strided_slice
Attempting split to conclude Level 3
Will attempt prod
Thanks to everyone for the hard work on getting 99% of the way there. I'm now making a push to add the compute and scheduling behavior for all of these operators, which should enable users to use Relay for end-to-end inference tasks, enable new frontends, and more. If you would be interested in helping, read more here: #2051.
@tqchen I believe all listed operators have been implemented; could you double-check?
Thanks to everyone for the hard work. This issue is closed as most ops are in; we will follow up in #2051.
Now that the Relay RFC is being merged and we are stabilizing the type inference interface, we should sprint to add new operators to Relay to bring it to parity with NNVM.
#1798 shows an example of how to do so for the conv2d operator; a short usage sketch follows.
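For context, here is how the op that #1798 ports is exercised from Python, assuming the `relay.ir_pass.infer_type` entry point of that era; the shapes and attributes are illustrative:

```python
from tvm import relay

data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(64, 3, 7, 7))
out = relay.nn.conv2d(data, weight, strides=(2, 2), padding=(3, 3))
func = relay.Function([data, weight], out)
print(relay.ir_pass.infer_type(func))  # the op's type relation checks shapes
```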
General Steps of Porting
General Principles
List of Operators to be covered
Generally, we need to cover everything we have so far: https://docs.tvm.ai/nnvm_top.html
Please use this issue to coordinate what you will be working on. As we expect things to move quickly, try to do "fine-grained locking": only claim things that you are working on right now, and aim to get them in within a few days.
The List
Level 1: Common Basic Ops
Enough to get an MLP
Level 2: Convolutions
Enough to get a convnet
Level 3: Additional Math And Transform Operators
Level 4: All broadcast and reduction functions not in the previous levels
Level 5: Vision Operators
Level 10: Backend Operators
Operators necessary as an intermediate stage of optimizations, or for gradients; these can be in flux.