Utilities for generating small machine-learning assets (models and test data) that can be embedded in C/C++ inference demos or benchmarks (e.g., TFLite Micro on Zephyr). The current focus is a simple MNIST MLP exported to TFLite along with sample inputs.
- `models_gen/litert/train_mnist_model.py` — trains a Keras MLP on MNIST, exports a TensorFlow SavedModel, converts it to TFLite, and emits a C array (`mnist_mlp_model_data.cc`) via `xxd -i`.
- `models_gen/litert/gen_mnist_data.py` — pulls MNIST test samples with TensorFlow Datasets and writes normalized, flattened inputs and labels to `gen_data/mnist/mnist_data.cc` for direct inclusion in C/C++.
- `saved_models/` — generated exports (TensorFlow + TFLite); ignored by git.
- `gen_data/` — generated C/C++-friendly datasets.
- `oot_executorch/` — placeholder for out-of-tree ExecuTorch/Zephyr integration.
- `models_gen/cifar`, `models_gen/mnist` — placeholders for future generators.
- `requirements.txt` — pinned Python dependencies; use a venv (`venv/` is git-ignored).
```bash
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

The scripts download MNIST via TensorFlow Datasets on first run, so you need network access once.
```bash
python models_gen/litert/train_mnist_model.py
```

Outputs:

- `saved_models/tensorflow/mnist_mlp` — TensorFlow SavedModel checkpoint.
- `saved_models/tflite/mnist_mlp/mnist_mlp.tflite` — flatbuffer model.
- `saved_models/tflite/mnist_mlp/mnist_mlp_model_data.cc` — byte array produced by `xxd -i` for embedding; rename or wrap as needed for your build.
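For orientation, the conversion inside the script follows the standard Keras to SavedModel to TFLite flow. The sketch below is an approximation under that assumption; the layer sizes, training settings, and the `subprocess` call to `xxd` are illustrative, not copied from `train_mnist_model.py`:

```python
# Hedged sketch of the Keras -> SavedModel -> TFLite -> C array flow.
import os
import subprocess
import tensorflow as tf

# Load and flatten MNIST (illustrative; the real script may preprocess differently).
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# A small MLP; the layer sizes here are assumptions, not the script's exact architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)

# Export a SavedModel, then convert it to a TFLite flatbuffer.
tf.saved_model.save(model, "saved_models/tensorflow/mnist_mlp")
converter = tf.lite.TFLiteConverter.from_saved_model("saved_models/tensorflow/mnist_mlp")
tflite_model = converter.convert()

os.makedirs("saved_models/tflite/mnist_mlp", exist_ok=True)
with open("saved_models/tflite/mnist_mlp/mnist_mlp.tflite", "wb") as f:
    f.write(tflite_model)

# Emit a C byte array for embedding, analogous to the script's `xxd -i` step.
with open("saved_models/tflite/mnist_mlp/mnist_mlp_model_data.cc", "w") as f:
    subprocess.run(["xxd", "-i", "saved_models/tflite/mnist_mlp/mnist_mlp.tflite"],
                   stdout=f, check=True)
```

Note that `xxd -i` derives the array identifier from the input path, which is why the generated file usually needs renaming or a thin wrapper before it drops into an existing build.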
```bash
python models_gen/litert/gen_mnist_data.py
```

Outputs:

- `gen_data/mnist/mnist_data.cc` — `kMnistInputs[50][784]` of normalized floats and `kMnistLabels[50]` of `uint8` labels. Adjust `num_samples` in the script to change the number of embedded samples.
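The generation step amounts to loading the MNIST test split, normalizing and flattening each image, and printing the values as C initializers. A minimal sketch of that idea follows; the array names match the generated file, but the formatting, includes, and sample handling are assumptions rather than the script's actual code:

```python
# Hedged sketch: MNIST test samples -> C source with kMnistInputs / kMnistLabels.
import os
import tensorflow_datasets as tfds

num_samples = 50
ds = tfds.load("mnist", split="test", as_supervised=True).take(num_samples)

inputs, labels = [], []
for image, label in tfds.as_numpy(ds):
    # Normalize to [0, 1] and flatten 28x28x1 -> 784, matching the demo's float input.
    inputs.append((image.astype("float32") / 255.0).reshape(784))
    labels.append(int(label))

os.makedirs("gen_data/mnist", exist_ok=True)
with open("gen_data/mnist/mnist_data.cc", "w") as f:
    f.write("#include <cstdint>\n\n")
    f.write(f"const float kMnistInputs[{num_samples}][784] = {{\n")
    for sample in inputs:
        f.write("  {" + ", ".join(f"{v:.6f}f" for v in sample) + "},\n")
    f.write("};\n\n")
    f.write(f"const uint8_t kMnistLabels[{num_samples}] = {{"
            + ", ".join(str(l) for l in labels) + "};\n")
```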
To run the Zephyr example, see the README.md in the zephyr directory.
- Generated assets live under `saved_models/` and `gen_data/`; clear them if you want a clean re-run.
- If you need deterministic runs, set `TF_DETERMINISTIC_OPS=1` and seed TensorFlow/NumPy before training (see the sketch after this list).
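A minimal sketch of such a deterministic setup, assuming a recent TF 2.x (the exact helpers available vary by version):

```python
# Hedged sketch: make a training run reproducible before building the model.
import os
os.environ["TF_DETERMINISTIC_OPS"] = "1"  # must be set before TF kernels run

import random
import numpy as np
import tensorflow as tf

random.seed(0)
np.random.seed(0)
tf.random.set_seed(0)
# On newer TF versions you can also use tf.keras.utils.set_random_seed(0)
# and tf.config.experimental.enable_op_determinism().
```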
Benchmark snapshots (object sizes and invoke timings) from example runs follow.

Object sizes:

```
2.1M  model.cpp.obj
4.4M  app.dir/
```

VexiiRiscv FPGA:

```
RAW: cycles=82024722 ns=820247220 ms=820
Invoke ms: last=820.25 mean=837.14 median=836.93 std=9.42 count=10
tensor type=float32
tensor shape=[1, 10] = [[-5.217735, 15.625002, 1.797457, -7.100753, -2.638395, -7.158819, -4.639622, -1.895034, 1.064150, -12.718052]]
```

QEMU (timings are inaccurate!):

```
RAW: cycles=49150427 ns=4915042700 ms=4915
Invoke ms: last=4915.04 mean=5021.67 median=5017.37 std=50.62 count=50
Label=1
tensor type=float32
tensor shape=[1, 10] = [[-4.540731, 15.719636, 2.680330, -7.934160, -3.605640, -4.153226, 2.169691, 1.782985, 1.717447, -20.352581]]
```

With emlearn, the model is a header file, so it is compiled into the `main_functions` object file.
Object sizes (emlearn):

```
2.4M  main_functions.cpp.obj
2.7M  app.dir/
```

QEMU (timings are inaccurate!):

```
Invoke ms: last=4948.55 mean=5028.82 median=5017.46 std=59.55 count=50
```
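For context on the `RAW:` lines, the millisecond figures are derived from the cycle counter and the timer frequency. The quick check below assumes a 100 MHz timer on the FPGA (consistent with ns being 10x the cycle count in that log); the QEMU log implies a much lower effective rate, which is one reason its timings should not be trusted:

```python
# Sanity-check the RAW line: cycles -> milliseconds for an assumed timer frequency.
def cycles_to_ms(cycles: int, timer_hz: float) -> float:
    return cycles / timer_hz * 1e3

# FPGA log: 82024722 cycles reported as ~820 ms -> consistent with a 100 MHz timer.
print(cycles_to_ms(82_024_722, 100e6))  # ~820.2 ms
# QEMU log: 49150427 cycles reported as ~4915 ms -> implies only ~10 MHz effective rate.
print(cycles_to_ms(49_150_427, 10e6))   # ~4915.0 ms
```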