It's here!
This targets wasm32-wasi for Fastly's Compute@Edge. It uses an external tool, wasm-opt, to squeeze, among other things, an ML inference engine into a roughly 35 MB wasm binary.
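The size-optimization step can be sketched like this. This is not the repo's actual build script; the input and output paths are illustrative, and wasm-opt ships with the Binaryen toolchain:

```shell
# Shrink the compiled wasm binary with wasm-opt (Binaryen); -Oz optimizes
# aggressively for size. Paths below are examples, not this repo's layout.
IN=target/wasm32-wasi/release/demo.wasm
OUT=pkg/demo.wasm
if command -v wasm-opt >/dev/null 2>&1 && [ -f "$IN" ]; then
  mkdir -p pkg
  wasm-opt -Oz "$IN" -o "$OUT"
else
  echo "wasm-opt or input binary not found; skipping size optimization"
fi
```

Running `-Oz` after the Rust release build typically recovers a noticeable chunk of binary size on top of what the compiler's own optimizations achieve.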
This demo showcases image classification using a pre-trained MobileNetV2 model. Thanks to the flexibility of tract under the hood, the deployed TensorFlow Lite model can be swapped for another, including models in open interchange formats (ONNX, NNEF).
This demo was created to push the boundaries of the platform and inspire new ideas.
Using the Fastly CLI, publish the root package and note the [funky-domain].edgecompute.app domain it reports:
fastly compute publish
Update line 54 of docs/script.js with the [funky-domain].edgecompute.app domain you just noted, then publish the static demo site separately:
cd static-host
fastly compute publish
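Once both packages are published, a quick smoke test can confirm the static site is reachable. The domain below is a placeholder; substitute the one the Fastly CLI printed for you:

```shell
# Placeholder domain: replace with the [funky-domain].edgecompute.app
# value reported by `fastly compute publish`.
DOMAIN="funky-domain.edgecompute.app"
if command -v curl >/dev/null 2>&1; then
  # Fetch response headers only; a 200 means the static site is live.
  curl -sS -I --max-time 10 "https://$DOMAIN" || echo "request failed"
fi
```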