| Platform | Status |
|---|---|
| Android | ✅ |
| iOS | ✅ |
| Linux | ✅ |
| macOS | ✅ |
| Web | ✅ |
| Windows | ✅ |
- CI builds for all platforms are now running on Codemagic.
- Google's Magika for file identification supported on all platforms.
- Example app includes a full voice assistant flow, with Whisper and Silero voice activity detection. Available at telosnex.github.io/fonnx/
- Whisper supported on all platforms, now including web. (An earlier release supported all platforms besides web.)
- Whisper models support timestamps. (Not yet exposed via the API.)
- Silero VAD added to all platforms besides web.
- Silero VAD enables detecting when the user is done speaking with a much higher success rate than relying on volume levels.
- Example contains `SttService`, which shows how to integrate the VAD and Whisper together behind an easy-to-use Stream interface (sketched below).
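A minimal sketch of what consuming such a service could look like. The type and member names below are assumptions for illustration, not FONNX's actual API; the real `SttService` lives in the example app:

```dart
import 'dart:async';

// Hypothetical result type; the example app's SttService defines the
// actual shape of what its Stream emits.
class SttResult {
  final String transcript;
  final bool isUserDoneSpeaking; // decided by Silero VAD, not volume levels
  SttResult(this.transcript, this.isUserDoneSpeaking);
}

void listenToStt(Stream<SttResult> results) {
  results.listen((r) {
    print('Transcript so far: ${r.transcript}');
    if (r.isUserDoneSpeaking) {
      print('VAD says the user finished speaking.');
    }
  });
}
```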
Run ML models natively on any platform. ONNX models can be run on iOS, Android, Web, Linux, Windows, and macOS.
FONNX is a Flutter library for running ONNX models. Flutter, and FONNX, run natively on iOS, Android, Web, Linux, Windows, and macOS. FONNX leverages ONNX to provide native acceleration capabilities, from CoreML on iOS, to Android Neural Networks API on Android, to WASM SIMD on Web. Most models can be easily converted to ONNX format, including models from PyTorch, TensorFlow, and more.
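As a flavor of the developer experience, here is a minimal sketch of generating an embedding; the class and method names are assumptions for illustration, not confirmed FONNX API (consult the package documentation for the real calls):

```dart
// Hypothetical sketch; MiniLmL6V2 / getEmbedding are assumed names.
import 'package:fonnx/fonnx.dart';

Future<void> embedExample() async {
  final miniLm = MiniLmL6V2.load('assets/models/miniLmL6V2.onnx');
  // MiniLM L6 V2 produces one 384-dimensional vector per input.
  final embedding = await miniLm.getEmbedding('FONNX runs ONNX models natively.');
  print('Embedding length: ${embedding.length}');
}
```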
🤗 Hugging Face has a large collection of models, including many that are in ONNX format. 90% of the models are PyTorch, which can be converted to ONNX.
Here is a search for ONNX models.
A command-line tool called `optimum-cli` from Hugging Face converts PyTorch and TensorFlow models. This covers the vast majority of models. `optimum-cli` can also quantize models, significantly reducing model size, usually with negligible impact on accuracy.
See official documentation or the quick start snippet on GitHub.
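For example, exporting a Hugging Face model to ONNX and then quantizing it might look like this (the model ID and output directories are placeholders):

```bash
# Export a PyTorch model from the Hugging Face Hub to ONNX format.
optimum-cli export onnx --model sentence-transformers/all-MiniLM-L6-v2 minilm_onnx/

# Quantize the exported model; --avx512 targets modern x86 CPUs, and other
# targets are documented in the Optimum docs.
optimum-cli onnxruntime quantize --onnx_model minilm_onnx -o minilm_quantized --avx512
```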
Another tool that automates conversion to ONNX is HFOnnx. It was used to export the text embedding models in this repo. Its advantages include a significantly smaller model size and incorporating post-processing (pooling) into the model itself.
- Brief intro to how the ONNX model format & runtime work: huggingface.com
- Netron allows you to view ONNX models, inspect their runtime graph, and export them to other formats.
These models generate embeddings for text. An embedding is a vector of floating-point numbers that represents the meaning of the text. Embeddings are the foundation of a vector database, as well as retrieval-augmented generation: deciding which text snippets to provide in the limited context window of an LLM like GPT.
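Once text chunks are embedded, retrieval reduces to comparing vectors. A minimal sketch of cosine-similarity ranking (the vectors stand in for embeddings produced by any of the models below):

```dart
import 'dart:math' as math;

/// Cosine similarity: ~1.0 means near-identical meaning, ~0.0 unrelated.
/// Assumes equal-length, non-zero vectors.
double cosineSimilarity(List<double> a, List<double> b) {
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (math.sqrt(normA) * math.sqrt(normB));
}

/// Returns indices of [chunkEmbeddings] sorted best-match-first for [query].
List<int> rankBySimilarity(
    List<double> query, List<List<double>> chunkEmbeddings) {
  final scores = [for (final c in chunkEmbeddings) cosineSimilarity(query, c)];
  return List<int>.generate(chunkEmbeddings.length, (i) => i)
    ..sort((a, b) => scores[b].compareTo(scores[a]));
}
```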
Running locally using FONNX provides significant privacy benefits, as well as latency benefits. For example, rather than having to store the embedding and text of each chunk of a document on a server, they can be stored on-device. Both MiniLM L6 V2 and MSMARCO MiniLM L6 V3 are products of the Sentence Transformers project. Their website has excellent documentation explaining, for instance, semantic search.
Trained on a billion sentence pairs from diverse sources, from Reddit to WikiAnswers to StackExchange.
MiniLM L6 V2 is well-suited for numerous tasks, from text classification to semantic search.
It is optimized for symmetric search, where text is roughly of the same length and meaning.
Input text is divided into chunks of roughly 200 words, and an embedding is generated for each chunk (sketched below).
🤗 Hugging Face
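A sketch of that chunking step, treating whitespace-separated words as the unit; real tokenizers count subword tokens, so the 200 figure is approximate:

```dart
import 'dart:math' as math;

/// Splits [text] into chunks of at most [maxWords] words; an embedding
/// would then be generated for each chunk.
List<String> chunkByWords(String text, {int maxWords = 200}) {
  final words =
      text.split(RegExp(r'\s+')).where((w) => w.isNotEmpty).toList();
  final chunks = <String>[];
  for (var i = 0; i < words.length; i += maxWords) {
    chunks.add(
        words.sublist(i, math.min(i + maxWords, words.length)).join(' '));
  }
  return chunks;
}
```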
Trained on pairs of Bing search queries to web pages that contained answers for the query.
It is optimized for asymmetric semantic search, matching a search query to an answer.
Additionally, it has 2x the input size of MiniLM L6 V2: it can accept up to 400 words as input for one embedding.
🤗 Hugging Face
| Platform | Avg. ms for 1 MiniLM L6 V2 embedding (200 words) |
|---|---|
| iPhone 14 | 67 |
| Pixel Fold | 33 |
| macOS | 13 |
| WASM SIMD | 41 |
- Run on Thurs Oct 12th 2023.
- macOS and WASM-SIMD runs on MacBook Pro M2 Max.
- Average of 100 embeddings, after a warmup of 10.
- Input is a mix of lorem ipsum text from 8 languages.
The ONNX C library is used for macOS, Windows, and Linux. Flutter can call into it via FFI. Nothing special is required to use FFI on these platforms.
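For the curious, "call into it via FFI" boils down to Dart opening the ONNX Runtime shared library. A simplified sketch; FONNX handles this for you, and the exact file names and paths depend on how the library is bundled:

```dart
import 'dart:ffi';
import 'dart:io';

// Open the ONNX Runtime shared library by its conventional per-platform name.
final DynamicLibrary onnxRuntime = Platform.isWindows
    ? DynamicLibrary.open('onnxruntime.dll')
    : Platform.isMacOS
        ? DynamicLibrary.open('libonnxruntime.dylib')
        : DynamicLibrary.open('libonnxruntime.so');
```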
iOS uses the official ONNX Objective-C library. No additional tasks besides adding FONNX to your Flutter project are required.
The iOS build fails when linked against the .dylibs provided with ONNX releases; they are explicitly marked as macOS-only.
Android uses the official ONNX Android dependencies from a Maven repository. Note that ProGuard rules are required to prevent the ONNX library from being stripped.
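A keep rule along these lines (in `android/app/proguard-rules.pro`) is the usual fix; this assumes ONNX Runtime's standard `ai.onnxruntime` Java package:

```
# Keep ONNX Runtime classes so R8/ProGuard doesn't strip them.
-keep class ai.onnxruntime.** { *; }
```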
Serving the web app with these headers gives a 10x speedup:
- `Cross-Origin-Embedder-Policy: require-corp`
- `Cross-Origin-Opener-Policy: same-origin`
See this GitHub issue for details. TL;DR: it allows ONNX's WASM implementation to use multiple threads via a SharedArrayBuffer.
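If you need to serve a local release build with those headers, here is a sketch using `package:shelf` and `package:shelf_static`; the `build/web` path and port are assumptions:

```dart
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_static/shelf_static.dart';

void main() async {
  // Attach the COOP/COEP headers to every response so the browser
  // allows SharedArrayBuffer (and thus multi-threaded ONNX WASM).
  Middleware crossOriginIsolation = (inner) => (request) async {
        final response = await inner(request);
        return response.change(headers: {
          'Cross-Origin-Embedder-Policy': 'require-corp',
          'Cross-Origin-Opener-Policy': 'same-origin',
        });
      };

  final handler = const Pipeline()
      .addMiddleware(crossOriginIsolation)
      .addHandler(
          createStaticHandler('build/web', defaultDocument: 'index.html'));

  await io.serve(handler, 'localhost', 8080);
}
```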
While developing, two issues prevent it from working on the web. Both have workarounds:
You may see errors in console logs about the MIME type of the .wasm files being incorrect and starting with the wrong bytes. That is due to local Flutter serving of the web app. To fix, download the WASM files from the same CDN folder that hosts ort.min.js (see __worker.js), and in __minilm_worker.js, remove the // in front of `ort.env.wasm.wasmPaths = ""`. Then, place the WASM files downloaded from the CDN next to index.html. In release mode and deployed, this is not an issue; you do not need to host the WASM files.
To safely use SharedArrayBuffer, the server must send the Cross-Origin-Embedder-Policy header with the value require-corp.
See here for how to work around it: nagadomi/nunif#34. Note that the extension became adware; set up its Chrome permissions so that it isn't run until you click it. Also note that you have to do that each time the Flutter web app's debug-mode port changes.
FONNX is licensed under a dual-license model.
The code as-is on GitHub is licensed under GPL v2. That requires distribution of the integrating app's source code, and this is unlikely to be desirable for commercial entities. See LICENSE.md.
Commercial licenses are also available. Contact info@telosnex.com. Expect very fair terms: our intent is to charge only entities that have a launched app making a lot of money with FONNX as a core dependency. The base agreement is here: https://github.com/lawndoc/dual-license-templates/blob/main/pdf/Basic-Yearly.pdf