This example shows:
- How to create an InferenceSession
- Load a simple ONNX model which has one Operator: Add
- Create two random Tensors of given shape
- Run the inference using the inputs
- Get output Tensor back
- Access raw data in the Tensor
- Match the results against the expected values
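The steps above can be sketched with the ONNX.js browser API. This is a minimal sketch, not the example's actual `index.js`: it assumes the ONNX.js script tag exposes the global `onnx` object, that `add.onnx` sits in the current folder, and that the input shape is `[3, 4, 5]` (the shape is an assumption).

```javascript
// Sketch of the inference flow, assuming a global `onnx` object from the
// ONNX.js script tag and `add.onnx` served from the same folder.
function randomFloat32(size) {
  // Fill a Float32Array with random values in [0, 1).
  const data = new Float32Array(size);
  for (let i = 0; i < size; i++) data[i] = Math.random();
  return data;
}

function elementwiseSum(a, b) {
  // Reference result to check the model output against.
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i++) out[i] = a[i] + b[i];
  return out;
}

async function runAddExample() {
  // 1. Create an InferenceSession and load the one-operator model.
  const session = new onnx.InferenceSession();
  await session.loadModel('./add.onnx');

  // 2. Create two random input Tensors (the shape here is an assumption).
  const dims = [3, 4, 5];
  const size = dims.reduce((total, d) => total * d, 1);
  const tensorA = new onnx.Tensor(randomFloat32(size), 'float32', dims);
  const tensorB = new onnx.Tensor(randomFloat32(size), 'float32', dims);

  // 3. Run inference; the result maps output names to Tensors.
  const outputMap = await session.run([tensorA, tensorB]);
  const outputTensor = outputMap.values().next().value;

  // 4. Access the raw data and match it against the expected values.
  const expected = elementwiseSum(tensorA.data, tensorB.data);
  const ok = expected.every((v, i) => Math.abs(v - outputTensor.data[i]) < 1e-5);
  console.log(ok ? 'results match' : 'results differ');
}
```

The helper `elementwiseSum` computes the expected values on the CPU so the model output can be verified element by element.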
To run the example:
- Download the model file `add.onnx` from examples/models and put it in the current folder.
- Start an http server in this folder. You can install `http-server` via `npm install http-server -g`, then start the server by running `http-server . -c-1 -p 3000`. This starts a local http server with caching disabled, listening on port 3000.
- Open the browser and access this URL: http://localhost:3000/
- Click the Run button to see the results of the inference run.
Files in this folder:
- `index.html` - The HTML file to render the UI in the browser.
- `index.js` - The main .js file that holds all ONNX.js logic to load and execute the model.
- `add.onnx` - A simple ONNX model file that contains only one `Add` operator.