[TUZ-6] Add a direct Onnx to Relax Importer (apache#14)
* Initial importer and testing scaffolding.
* Implement matmul operator and tests.
* Add a bunch of new operators.
* Add new ops.
* [Relax][ONNX] Implement Div, Sigmoid, Softmax, Transpose and Unsqueeze ops.
* Skip test_reshape.
* [Relax][ONNX] Implement BiasGelu and Gelu ops.
* [Relax][ONNX] Implement Where op.
* [Relax][ONNX] Add multiple ONNX frontend support for Clip / Equal / Shape / Not / Tanh (#3).
* Rebase with Equal, Not, Tanh, Sqrt, Relu, Clip, Conv, Pow and Erf.
* Fix CumSum, though it still needs work.
* Fix initializer for CumSum (#9).
* Add Constant, Squeeze & Sub (#10):
  * Add Squeeze.
  * Add Constant.
  * Add Sub.
* Support reusing Relay ONNX operator converters in the Relax ONNX frontend (#8):
  * [WIP] Support using Relay ops in the Relax ONNX frontend.
  * [WIP] Small fixes.
  * [WIP] Support dynamic matmul and reshape.
  * Address PR comments.
  Co-authored-by: Matthew Barrett <mbarrett@octoml.ai>
  Co-authored-by: Michalis Papadimitriou <mpapadimitriou@octoml.ai>
* Add more ops (including all Reduce ops) using the Relay frontend (apache#11):
  * [WIP] Add more ops; some fail at the moment.
  * Skip some tests.
  * Remove duplicate tests for Squeeze.
* Add Split op in the Relax ONNX frontend (apache#12):
  * [Relax][ONNX] Add Split op.
  * Remove tmp.
* Fix layer normalizations and the Shape operator.
* Replace main loop with tvm testing.
* Simplify Slice for opset 13.
* [Relax][ONNX] Implement Pad op.
* Incorporate Pad op; add static ConstantOfShape op.
* Change Shape to temporarily enable ConstantOfShape in our models.
* Add initial tensor_to_shape implementation.
* Implement dynamic broadcast_to to support Expand and ConstantOfShape.
* Changes sufficient for a vortex end-to-end run.
* Formatting.
* Format tests.
* Re-add broadcast_to shape checking.
* Fix formatting.
* Remove overly strict manipulate check.
* Fix typing.
* [Relax][ONNX] Implement Tile operator.
* Switch to native Relax attention importer.
* Address some of the PR comments.
* Check the imported model's IR version.
* Switch from torch to numpy due to an incompatibility.
* Fix make format.
* Clean up typing issues.
* Clarify variable name.
* Remove unneeded comprehension.
* Remove circular dependency.
* Add name sanitization for inputs.
* Disable reshape rewrite pass until fixed.
* Fix long comment.
* Update CPU image.

Co-authored-by: Florin Blanaru <fblanaru@octoml.ai>
Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
Co-authored-by: Matthew Barrett <mbarrett@octoml.ai>
Co-authored-by: Michalis Papadimitriou <mpapadimitriou@octoml.ai>
Co-authored-by: Florin Blanaru <florin.blanaru96@gmail.com>
Co-authored-by: sung <sunggg@umich.edu>
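For context, the importer added here converts an ONNX graph directly into a Relax IRModule. Below is a minimal usage sketch; the module path `tvm.relax.frontend.onnx.from_onnx`, the `shape_dict` argument, and the model filename are assumptions based on the Relax ONNX frontend's eventual layout and may not match this exact commit.

```python
# Minimal sketch: importing an ONNX model directly into Relax.
# NOTE: module path and keyword arguments are assumptions and may differ
# at this commit; "model.onnx" is a hypothetical model file.
import onnx
from tvm.relax.frontend.onnx import from_onnx

onnx_model = onnx.load("model.onnx")

# Convert the ONNX graph into a Relax IRModule, fixing the input shape so
# shape-dependent operators (e.g. Reshape, ConstantOfShape) can be resolved.
mod = from_onnx(onnx_model, shape_dict={"input": [1, 3, 224, 224]})
print(mod)
```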