[Relax][PyTorch] Add ReLU6 Op Support for Exported Program and FX graph #17918
Conversation
@Hzfengsy Could you please review this PR?
# previous mapping
nn.ReLU6: lambda node: self.block_builder.emit(
    relax.op.clip(self.env[node.args[0]], 0, 6)
),
# mapping introduced in this PR
nn.ReLU6: self._unary_op(relax.op.nn.relu6),
I wonder why we need a relu6 op defined in Relax? In other words, why do we need to change the current implementation?
@Hzfengsy Since the current clip mapping only handled the FX translator’s nn.ReLU6, it left other mappings (relu6.default, relu6_.default, and functional.relu6 in FX) unaddressed. Therefore, I added ReLU6 as a Relax op to unify the implementation.
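For illustration, a minimal sketch of how a single clip-based handler could cover all of these variants without a dedicated Relax op; the handler and dictionary names below are hypothetical, while `self.block_builder` and `self.env` follow the translator pattern shown in the diff above:

```python
from tvm import relax


# Sketch only: assumes a translator object exposing `block_builder` and `env`,
# as in the existing FX translator; names here are illustrative, not the PR's code.
def _relu6(self, node):
    # relu6(x) == clip(x, 0, 6), so every torch-level spelling can share one handler
    return self.block_builder.emit(relax.op.clip(self.env[node.args[0]], 0, 6))


# One entry per torch-level spelling of ReLU6
RELU6_VARIANTS = {
    "relu6.default": _relu6,     # aten op seen by the ExportedProgram translator
    "relu6_.default": _relu6,    # in-place aten variant
    "functional.relu6": _relu6,  # torch.nn.functional.relu6 in the FX graph
}
```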
Thanks for explaining it. My concern is that relu6 should not be a Relax operator, as it can be represented by other ops (i.e. clip).
We can keep the relax.op.nn.relu6 interface as sugar, but not register a new Relax op.
To be clear, it's better to remove the new definition in:
- src/relax/op/nn/nn.cc
- src/relax/op/nn/nn.h
- src/contrib/msc/framework/tvm/relax_opcode.cc
- python/tvm/topi/nn/elemwise.py
- python/tvm/relax/transform/legalize_ops/
And change the code in python/tvm/relax/op/nn/nn.py to:
return relax.op.clip(data, 0, 6)
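A minimal sketch of what that sugar in python/tvm/relax/op/nn/nn.py could look like (standalone imports added for illustration; not the merged code):

```python
from tvm.relax import Expr
from tvm.relax.op import clip


def relu6(data: Expr) -> Expr:
    """ReLU6 as sugar over clip: relu6(x) = min(max(x, 0), 6).

    No new Relax operator is registered; the emitted IR contains only clip.
    """
    return clip(data, 0, 6)
```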
Agree with @Hzfengsy, we should simplify the Relax core op set and not define relu6 as a Relax op.
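The equivalence this relies on is easy to check numerically; a small sketch using PyTorch (shape and inputs are arbitrary):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 8) * 10  # values well outside [0, 6]
# ReLU6(x) == min(max(x, 0), 6), i.e. exactly clip(x, 0, 6)
assert torch.allclose(F.relu6(x), torch.clamp(x, min=0.0, max=6.0))
```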
…ph (apache#17918)
* add relu6 op support into relax frontend
* fix lint issues
* fix unity issue in test script
* fix issues in msc test script
* fix relu6 layout value in msc test script
* define relu6 op in relax_opcode file
* define relu6 op in torch codegen file
* update relu6 op implementation using clip op
* update test script
---------
Co-authored-by: deivanayakisankaralingam <deiva@Deivanayaki>
Co-authored-by: deivanayakisankaralingam <deiva@Deivanayaki.>
This PR adds support for the relu6 operation to the exported program and FX graph translators of the PyTorch frontend. With this support, the following torchvision models now run successfully: