- Introduction
- Which Repo do I need to fiddle with?
- Using this repository
- Productized Applications
- Release History
The Tiny ML Tensorlab repository is meant to be a starting point to install and explore TI's AI offering for MCUs. It helps install all the required repositories to get started. Currently, it can handle Time series Classification, Regression and Anomaly Detection tasks.
Once you clone this repository, you will find the following repositories present within the tinyml-tensorlab directory:
tinyml-tensorlab
: This repo serves as a blank wrapper for customers to clone all the tinyml repos in one shot.
The other repositories are here for a purpose:
tinyml-modelmaker
: Based on the user configuration (YAML files), it stitches the relevant scripts from tinyml-tinyverse into a flow of data loading, training and compilation. This is your home repo; most of your work will take place from this directory. A short sketch of loading such a config follows this list.
tinyml-tinyverse
: Individual scripts to load data, do preprocessing, AI training and compilation (using NNC/TVM).

tinyml-modeloptimization
: Model optimization toolkit that is necessary for quantizing weights to 2-bit/4-bit/8-bit in QAT (Quantization Aware Training)/PTQ (Post Training Quantization) flows for TI devices with or without an NPU. As a customer developing models/flows, it is highly likely that you will not have to edit files in this repo.
tinyml-mlbackend
(TI internal only): Serves as a wrapper around modelmaker to suit the needs of Edge AI Studio Model Composer only. A Docker image is generated using this repo.
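To get a feel for how the flow is driven, here is a minimal sketch that just loads one of the example configs and lists its top-level sections. It assumes you run it from a directory containing the example config_*.yaml files of tinyml-modelmaker; the exact sections depend on the task you pick.

```python
import yaml

# Load one of the example configs shipped with tinyml-modelmaker
# (run this from the directory containing the config_*.yaml files).
with open('config_timeseries_classification_dsk.yaml') as fp:
    config = yaml.safe_load(fp)

# These top-level sections are what modelmaker uses to stitch the
# data loading / training / compilation flow out of tinyml-tinyverse scripts.
print(list(config.keys()))
```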
| User Intent | Criteria 1 | Criteria 2 | tinyml-modelmaker | tinyml-tinyverse | tinyml-modeloptimization |
|---|---|---|---|---|---|
| BYOD | | | ✅ - edit the config_*.yaml files; refer this to understand the config file | ❌ | ✅ |
| BYOD | | | ✅ - Refer this doc | ❌ | ❌ |
| BYOM | | | ✅ - Refer this doc | ✅ | ❌ |
| BYOM | | | ✅ - Refer this doc to understand editing the config file | ❌ | ❌ |
| BYOM | | | ❌ | ❌ | ✅ - Refer this example |
- A lot more READMEs are present under the Tiny ML Modelmaker Repo
To begin with, you can use the repo as a developer or as a user.
- Python Environment
  - **Note**: Irrespective of whether you are a `Linux` or a `Windows` user, it is ideal to use a Python virtual environment rather than operating without one.
  - For `Linux` we use `pyenv` as the Python version management system.
  - For `Windows` we show below how to use `pyenv-win` and also Python's native `venv`.
  - Linux OS
    - Follow https://github.com/pyenv/pyenv?tab=readme-ov-file#a-getting-pyenv to install pyenv
    - Use Python 3.10.xx
    - `pyenv local <python_version>` is recommended. The given version will be used whenever python is called from within this folder.
  - Windows OS
    - Using pyenv-win: follow steps 1-5 from here using any Python 3.10.xx: https://github.com/pyenv-win/pyenv-win?tab=readme-ov-file#quick-start
    - Instead of step 6, `pyenv local <python_version>` is recommended. The given version will be used whenever python is called from within this folder.
    - Using Python's native venv: install Python 3.10 from https://www.python.org/downloads/, then:
      python -m venv py310
      .\py310\Scripts\activate
- NOTE: C2000 Customers:
  - Please download and install TI C2000 Codegen Tools (TI C2000 CGT)
  - Please set the installed path in your terminal:
    - Linux: `export CGT_PATH="/path/to/ti-cgt-c2000_22.6.1.LTS"`
    - Windows: `$env:CGT_PATH="C:\path\to\wherever\present\ti-cgt-c2000_22.6.1.LTS"`
  - Please download and install C2000Ware
  - Please set the installed path in your terminal:
    - Linux: `export C2000WARE_PATH="/path/to/C2000Ware_5_04_00_00"`
    - Windows: `$env:C2000WARE_PATH="C:\path\to\wherever\present\C2000Ware_5_04_00_00\"`
- NOTE: MSPM0 Customers:
  - Please download and install TI Arm Codegen Tools (TI Arm CGT Clang)
  - Please set the installed path in your terminal:
    - Linux: `export MSPM0_CGT_PATH="/path/to/ti-cgt-armllvm_4.0.3.LTS"`
    - Windows: `$env:MSPM0_CGT_PATH="C:\path\to\wherever\present\ti-cgt-armllvm_4.0.3.LTS"`
  - Please download and install MSPM0 SDK
  - Please set the installed path in your terminal:
    - Linux: `export M0SDK_PATH="/path/to/mspm0_sdk_2_05_00_05"`
    - Windows: `$env:M0SDK_PATH="C:\path\to\wherever\present\mspm0_sdk_2_05_00_05\"`
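Before kicking off a run, you may want to sanity-check that the relevant paths are visible in your environment. The sketch below is only an illustration (it is not part of Modelmaker); the variable names follow the exports above, and which group you need depends on your target device.

```python
import os

# Environment variables documented above, grouped by device family.
required = {
    "C2000": ["CGT_PATH", "C2000WARE_PATH"],
    "MSPM0": ["MSPM0_CGT_PATH", "M0SDK_PATH"],
}

for family, names in required.items():
    for name in names:
        value = os.environ.get(name)
        if value is None:
            print(f"[{family}] {name} is not set")
        elif not os.path.isdir(value):
            print(f"[{family}] {name} does not point to an existing directory: {value}")
        else:
            print(f"[{family}] {name} -> {value}")
```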
- As a user
  - The installation and usage is very simple: it is just a `pip install`. But beware that you will not be able to modify any of the features or customize AI models/transforms for your use case.
  - Install this repository as a Python package:
    `pip install git+https://github.com/TexasInstruments/tinyml-tensorlab.git@main#subdirectory=tinyml-modelmaker`
  - It is as simple as:
    import tinyml_modelmaker
    tinyml_modelmaker.get_set_go(config)
  - Several example configs are present (check the *.yaml files in the tinyml-modelmaker repository). You can load one like this:
    import yaml
    with open('config_timeseries_classification_dsk.yaml') as fp:
        config = yaml.safe_load(fp)
  - This method still expects C2000Ware / the MSPM0 SDK to be installed separately by the user; they are not installed automatically.
  - This method still expects TI-CGT / TI Arm-Clang to be installed separately by the user; they are not installed automatically.
  - Proceeding without installing these SDKs will still give you a trained model for the dataset, but will not compile the ONNX model into a compiled artifact.
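Putting the pieces above together, a minimal end-to-end run as a user could look like the sketch below. It reuses the example config name shown earlier; adjust it for your task, and install the device tools/SDKs above if you also want a compiled artifact.

```python
import yaml

import tinyml_modelmaker

# Load one of the example configs shipped with tinyml-modelmaker.
with open('config_timeseries_classification_dsk.yaml') as fp:
    config = yaml.safe_load(fp)

# Kick off the stitched flow: data loading, training and, if the
# device tools/SDKs above are installed, compilation.
tinyml_modelmaker.get_set_go(config)
```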
- As a developer
  - The installation will use your brain power (although only a tiny bit), but allows you to customize with unimaginable power!
  - Linux OS
    - Clone this repository
      cd tinyml-tensorlab/tinyml-modelmaker
    - Execute:
      ./setup_all.sh
    - Run the following (to install the local repositories in editable mode, ideal for developers):
      cd ../tinyml-tinyverse
      pip install -e .
      cd ../tinyml-modeloptimization/torchmodelopt
      pip install -e .
      cd ../../tinyml-modelmaker
    - Now you're ready to go!
      run_tinyml_modelmaker.sh F28P55 config_timeseries_classification_dsk.yaml
  - Windows OS
    - Although we use pyenv for Python version management on Linux, the equivalent offering for Windows isn't as stable, so the native venv is good enough.
    - It is highly recommended to use PowerShell instead of cmd.exe/Command Terminal.
    - If you prefer Windows Subsystem for Linux, a user guide for using this toolchain on Windows Subsystem for Linux has been provided.
    - Step 1.1: Clone this repository from GitHub
    - Step 1.2: Let us ready up the dependencies
      cd tinyml-tensorlab
      python -m ensurepip --upgrade
      python -m pip install --no-input --upgrade pip setuptools wheel
    - Step 1.3: Install Tiny ML Modelmaker
      cd tinyml-modelmaker
      python -m pip install --no-input -r requirements.txt
      python -m pip install --editable .  # --use-pep517
    - Tiny ML Modelmaker, by default, installs the Tiny ML Tinyverse and Tiny ML ModelOptimization repositories as Python packages.
    - If you intend to use this repository as is, then that is enough.
    - However, if you intend to create models and play with the quantization varieties, then it is better to install those two repositories separately in editable mode, as in steps 1.4 and 1.5 below.
    - Step 1.4: Install Tiny ML Tinyverse
      cd ..\tinyml-tinyverse
      python -m pip install --no-input -r requirements\requirements.txt
      python -m pip install --no-input -r requirements\requirements_ti_packages.txt
      python -m pip install --editable .
    - Step 1.5: Install the model optimization toolkit
      cd ..\tinyml-modeloptimization\torchmodelopt
      python -m pip install --no-input -r requirements\requirements.txt
      python -m pip install --editable .
    - We can run it now!
      cd ..\..\tinyml-modelmaker
      python .\scripts\run_tinyml_modelmaker.py .\config_timeseries_classification_dsk.yaml --target_device F28P55
Since these repositories are undergoing rapid feature additions, it is recommended to keep your code up to date by running the following command:
git_pull_all.sh
| Select sector (Industrial, automotive, personal electronics) | Technology | Application (Title) | Application Description | Features / advantages | Call to action |
|---|---|---|---|---|---|
| Industrial | Time series | Arc fault detection | An arc fault is an electrical discharge that occurs when an electrical current flows through an unintended path, often due to damaged, frayed, or improperly installed wiring. This can produce heat and sparks, which can ignite surrounding materials, leading to fires. Because of AI's ability to analyze complex patterns, continuously learn and improve from new data, and address a wide range of faults, it is advantageous to use AI. Using AI at the edge empowers the customer with reduced latency, enhanced privacy and scalability while saving bandwidth. TI provides UL-1699B tested AI models which have impeccable accuracy and ultra-low latency. | By utilising the benefits of AI, such as its ability to analyze patterns in signals and to handle large volumes of data, TI's solution allows for immediate detection of and response to arc faults. Coupled with an NPU that provides enhanced AI performance, TI's solution brings additional benefits in terms of speed, reliability, and scalability, making it a powerful approach for enhancing electrical safety. With TI's complete solution, AFD will never be a showstopper for you. | To empower your solution with TI’s AI, you can use the Model Composer GUI to quickly train an AI model or use the Tiny ML Modelmaker for an advanced set of capabilities. For customers who rely on their own AI training framework, TI’s Neural Network Compiler can help you get your AI model compatible with MCUs (P55x, P66x or any other F28 device). For a full-fledged reference solution, find the comprehensive project here. |
| Industrial | Time series | Motor Bearing Fault Detection | Motor bearing faults are often seen in HVAC systems with rotating parts. They occur due to the wear and tear of moving parts, lack of lubrication, and overloading of equipment. They adversely affect the motor lifespan, increase energy consumption, and can potentially even cause a failure of the system. By using AI, these faults can be detected early by monitoring signs such as subtle changes in vibration patterns. Processing such data locally at the HVAC system can provide real-time fault detection and immediate response, which is crucial for preventing damage and ensuring continuous operation. TI provides handcrafted AI models which have impeccable accuracy and ultra-low latency. | TI's AI solution addresses these faults by monitoring the vibration and temperature of the motor through sensors, and provides a reliable solution by combining the strengths of advanced analytics and real-time processing, leading to more reliable, efficient, and cost-effective maintenance and operation. Put together with an NPU for advanced AI performance capabilities, this prevents unexpected failures, as the algorithms can detect early signs of faults that might not be noticeable through manual inspections. | To empower your solution with TI’s AI, you can use the Model Composer GUI to quickly train an AI model or use the Tiny ML Modelmaker for an advanced set of capabilities. For customers who rely on their own AI training framework, TI’s Neural Network Compiler can help you get your AI model compatible with MCUs (P55x, P66x or any other F28 device). For a full-fledged reference solution, find the comprehensive project here. |
- To empower your solution with TI’s AI, you can use the Tiny ML Modelmaker for an advanced set of capabilities.
- Supports any Time series Classification task (including Arc Fault and Motor Bearing Fault Classification)
- You can also use the Edge AI Studio Model Composer GUI to quickly train an AI model (No Code Platform)
- This supports only Arc Fault and Motor Bearing Fault Classification applications currently.
- For customers who rely on their own AI training framework, TI’s Neural Network Compiler can help you get your AI model compatible with MCUs (P55x, P66x or any other F28 device).
- For a full-fledged reference solution on Arc Fault and Motor Bearing Fault, find the comprehensive project in Digital Power SDK and Motor Control SDK.
- [2025-Apr] Major feature updates (version 1.0.0) of the software
- General:
- Tiny ML Modelmaker is now a pip installable package!
- Existing models can be modified on the fly through a config file (check Tiny ML Modelmaker docs)
- MPS (Metal Performance Shaders) backend support for Mac host devices!
- Technology:
- PTQ and QAT flows supported in tinyml-modelmaker, tinyml-modeloptimization
- Ternary, 4 bit Quantization support in tinyml-modelmaker
- Flows:
- Regression ML tasks supported
- Autoencoder based Anomaly Detection task supported
- Feature Extraction:
- Feature Extraction transforms are now modular and compatible with C2000Ware 5.05 only
- Supports Haar and Hadamard Transform
- Golden test vectors file has one set uncommented by default to work OOB
- Data Visualisation:
- Multiclass ROC-AUC graphs are autogenerated for better explainability of reports and to help select thresholds based on false alarm/sensitivity preference
- PCA graphs are auto-plotted for feature-extracted data → helps in identifying whether the feature extraction actually helped
- A run now begins by displaying the inference time, SRAM usage and flash usage for all the devices for any model.
- Dataset:
- Goodness-of-Fit analysis of the dataset is now enabled.
- Dataset can be split into train-test-val on a file-by-file basis or within-a-file basis
- Extensive Documentation & Know-How Examples to use Modelmaker
- [2024-November] Update (version 0.9.0) of the software
- [2024-August] Release version 0.8.0 of the software
- [2024-July] Release version 0.7.0 of the software
- [2024-June] Release version 0.6.0 of the software
- [2024-May] First release (version 0.5.0) of the software