This document serves as an index for onnx-mlir documents.
- Installation is covered by README.md.
- Workflow.md describes how to contribute in the GitHub environment.
- This guideline is used to keep documentation and code consistent.
- ONNX operations are represented with the ONNX dialect in onnx-mlir.
- This document tells you how to generate an ONNX operation into the ONNX dialect.
- After an ONNX model is imported into onnx-mlir, several graph-level transformations will be applied. These transformations include operation decomposition, constant propagation, shape inference, and canonicalization.
- Then the ONNX dialect is lowered to the Krnl dialect. To help debugging and performance tuning, onnx-mlir supports instrumentation at the ONNX operation level.
- All the passes may be controlled with options; a hedged compilation sketch using a few common options is shown after this list.
- Guidance on how to handle errors can be found here.
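
As a concrete illustration of driving these passes from the command line, the sketch below compiles a model into a shared library at a chosen optimization level. It is a minimal sketch, assuming an `onnx-mlir` binary on `PATH`, a model file named `model.onnx`, and that the `-O3`, `--EmitLib`, and `-o` options behave as in recent releases; consult the options document for the authoritative list of flags.

```python
# Minimal sketch: invoke the onnx-mlir compiler driver from Python.
# Assumptions: `onnx-mlir` is on PATH and `model.onnx` exists in the
# current directory; the -O3, --EmitLib, and -o flags are taken from
# recent onnx-mlir releases and may differ in your version.
import subprocess

def compile_model(model_path: str, output_base: str) -> None:
    """Compile an ONNX model into a shared library (output_base.so)."""
    cmd = [
        "onnx-mlir",
        "-O3",              # optimization level applied during compilation
        "--EmitLib",        # lower all the way to a dynamic library
        "-o", output_base,  # base name of the emitted .so file
        model_path,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    compile_model("model.onnx", "model")
```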
The compiled ONNX model can be executed with either a [C/C++ driver](document missing) or a Python driver. The routine testing for the onnx-mlir build is described in this document.
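
For the Python driver, the following is a minimal sketch based on the PyRuntime module built by onnx-mlir. The class name `OMExecutionSession`, its `run` signature, and the single float32 input are assumptions drawn from recent releases (older releases used a class named `ExecutionSession`), so adjust to match your build.

```python
# Minimal sketch: run a compiled model (model.so) through the Python driver.
# Assumptions: the PyRuntime module built by onnx-mlir is importable, the
# session class is named OMExecutionSession (older releases used
# ExecutionSession), and the model takes a single float32 tensor input.
import numpy as np
from PyRuntime import OMExecutionSession

session = OMExecutionSession("model.so")
# Inputs are passed as a list of numpy arrays, one per model input.
inputs = [np.random.rand(1, 3, 224, 224).astype(np.float32)]
outputs = session.run(inputs)
print([o.shape for o in outputs])
```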