Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime. For a list of available dockerfiles and published images to help with getting started, see this page.
Inference only
- Basic Model Inferencing (single node Sigmoid) on CPU (see the minimal sketch after this list)
- Model Inferencing (Resnet50) on CPU
- Model Inferencing on CPU using ONNX-Ecosystem Docker image
- Model Inferencing on CPU using ONNX Runtime Server (SSD Single Shot MultiBox Detector)
- Model Inferencing using NUPHAR Execution Provider
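Before diving into the individual samples, it may help to see how small the core inferencing flow is in the ONNX Runtime Python API. The following is a minimal sketch, assuming a placeholder `model.onnx` with a single float32 input; the file name and input shape are illustrative, not taken from any specific sample above:

```python
# Minimal ONNX Runtime inferencing sketch (Python API).
# Assumes a hypothetical model.onnx with one float32 input.
import numpy as np
import onnxruntime as ort

# Create a session; the CPU execution provider is the default backend.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's input metadata rather than hard-coding names/shapes.
input_meta = session.get_inputs()[0]
print("input:", input_meta.name, input_meta.shape, input_meta.type)

# Build dummy input data; replace with real preprocessing for your model.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # e.g. a Resnet50-style input

# Run inference; passing None as the output list returns all model outputs.
outputs = session.run(None, {input_meta.name: x})
print("output shape:", outputs[0].shape)
```

Other execution providers, such as the NUPHAR provider used in the last sample above, are selected the same way: by passing a different providers list when creating the session.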
Inference with model conversion
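No single converter is prescribed by these samples, but as one hedged illustration of the conversion flow, the sketch below uses the skl2onnx converter to turn a scikit-learn model into ONNX and then runs it with ONNX Runtime. The model choice, input name, and file name are all assumptions made for the example:

```python
# Sketch: convert a scikit-learn model to ONNX, then inference it.
# Requires the skl2onnx package; model and feature details are illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# Train a small model so there is something to convert.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X.astype(np.float32), y)

# Convert: declare the input name, type, and shape expected at inference time.
onnx_model = convert_sklearn(
    clf, initial_types=[("input", FloatTensorType([None, X.shape[1]]))]
)
with open("logreg_iris.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Run the converted model with ONNX Runtime.
session = ort.InferenceSession("logreg_iris.onnx", providers=["CPUExecutionProvider"])
preds = session.run(None, {"input": X[:3].astype(np.float32)})
print(preds[0])  # predicted labels for the first three rows
```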
Inference and deploy through AzureML
- Inferencing on CPU using ONNX Model Zoo models (a minimal scoring-script sketch follows this list)
- Inferencing on CPU with model conversion step for existing models
- Inferencing on CPU with PyTorch model training
  (For additional information on training in AzureML, please see the AzureML Training Notebooks.)
- Inferencing on GPU with TensorRT Execution Provider (AKS)
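The AzureML samples above all deploy a web service whose entry script follows the same init()/run() contract. Below is a minimal sketch of that scoring-script pattern serving an ONNX model; the model file name and JSON payload layout are assumptions for illustration:

```python
# score.py sketch: the AzureML scoring-script contract (init/run),
# serving an ONNX model with ONNX Runtime. Names are illustrative.
import json
import numpy as np
import onnxruntime as ort

session = None

def init():
    # Called once when the service starts; load the model here.
    # In a real deployment the path is usually resolved from the
    # AZUREML_MODEL_DIR environment variable rather than hard-coded.
    global session
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

def run(raw_data):
    # Called per request; raw_data is the JSON body sent to the service.
    try:
        data = np.array(json.loads(raw_data)["data"], dtype=np.float32)
        input_name = session.get_inputs()[0].name
        outputs = session.run(None, {input_name: data})
        return {"result": outputs[0].tolist()}
    except Exception as exc:
        return {"error": str(exc)}
```

The GPU/TensorRT variant of this flow differs mainly in which execution providers the session requests when it is created.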
Inference and Deploy with Azure IoT Edge
Other