A lightweight, exportable addon that allows loading and running inference on TensorFlow SavedModel format models from GDScript. Note that this project is still in development and has only been tested on macOS.
The focus of the Godot TF Inference addon is not on creating a TensorFlow model but rather using a previously-created model from within Godot. To explore creating a ML model using Godot, please refer to the following projects.
Installation involves simply downloading and installing a zip file from Godot's UI. Recompilation of the engine is not required.
- Ensure you have a model in an uncompressed SavedModel format in your Godot project. This directory should contain .pb files and a variables subdirectory. The process of creating a TensorFlow model is beyond the scope of this project.
- Download the Godot TF Inference addon zip file relevant to your platform from the releases page.
- In Godot's Asset Library tab, click Import and select the addon zip file. Follow prompts to complete installation of the addon.
The following assumes you're using an autoload singleton named TFInferenceSingleton. The name is arbitrary, but it's recommended that it reflect the model(s) it loads.
- Choose how to expose the Godot TF Inference addon's `TFInference` class (autoload singleton or `load()`). The recommended approach, an autoload singleton, can be set up as follows.
- In Godot's Project Settings menu, select AutoLoad.
- Add a new singleton by selecting addons/godot-tf-inference/TFInference.gdns as the path.
- Make note of the Node Name e.g. TFInferenceSingleton and click Add.
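Registering the autoload writes an entry to your project's project.godot file. As an illustration (using the example singleton name from this guide; the leading `*` marks the autoload as enabled), the resulting section looks roughly like:

```ini
[autoload]

TFInferenceSingleton="*res://addons/godot-tf-inference/TFInference.gdns"
```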
- Load your model via `TFInferenceSingleton.load_model("res://MY_MODEL")`. Note that `MY_MODEL` should be a TensorFlow SavedModel directory (see Installation).
- Set TensorFlow model signature names via `TFInferenceSingleton.set_names('MY_INPUT', 'MY_OUTPUT')`. For more information on TensorFlow model signatures, see the TensorFlow documentation. Note that you may have to explore the structure of your model to find these signature names.
- Infer from your model via `TFInferenceSingleton.infer(['MY_INPUT_0', 'MY_INPUT_1'])`. The array argument of the `infer()` method is used as a one-dimensional input tensor to your model. A one-dimensional array representing the model's output tensor is returned.
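Putting the steps above together, a minimal end-to-end sketch from a Godot script might look like the following. The model path, signature names, and input values are placeholders; substitute those of your own SavedModel.

```gdscript
extends Node

func _ready():
	# Load a SavedModel directory from the project (placeholder path)
	TFInferenceSingleton.load_model("res://MY_MODEL")
	# Signature names depend on how your model was exported (placeholders)
	TFInferenceSingleton.set_names("MY_INPUT", "MY_OUTPUT")
	# The input array is treated as a one-dimensional tensor (placeholder values)
	var output = TFInferenceSingleton.infer([0.5, 1.0, -0.25])
	print(output) # One-dimensional array representing the output tensor
```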
Examples can be found in the examples directory of this repository. So as to not clutter up the repository, examples do not have the Godot TF Inference addon installed. See Installation.
Subject to change after improving dependencies, export settings, etc.
- Disable library validation in the export template settings.
- Under the Resources tab of the export template settings, add your model directory to the non-resource export filters e.g. MY_MODEL/*.
This addon has not been tested on Linux. See Issue #1.
This addon has not been tested on Windows. See Issue #2.
This section is targeted at folks looking to work on the Godot TF Inference addon itself. To develop a Godot game using this addon, simply installing the addon will suffice.
These instructions are tailored to UNIX machines.
- Clone the repo and submodules via `git clone --recurse-submodules https://github.com/ashtonmeuser/godot-tf-inference.git`.
- Ensure the correct Godot submodule commits are checked out. Refer to the relevant branch of the godot-cpp project e.g. `3.x` to verify submodule hashes. At the time of this writing, the hashes for the godot-cpp and godot-headers submodules were `836676193031b706a9151f74959de7ae2fc1279b` and `0f91de28a593670a9cbea3dd78163a31ddfcbff4`, respectively.
- Download the TensorFlow C libraries for your platform and extract them into a directory named libtensorflow2 at the root of this repository. There should be include and lib subdirectories within the libtensorflow2 directory.
- Install SCons via `pip install SCons`. SCons is the tool Godot uses to build the engine and generate C++ bindings. For convenience, we'll use the same tool to build the Godot TF Inference addon.
- Compile the Godot C++ bindings. From within the godot-cpp directory, run `scons platform=PLATFORM generate_bindings=yes -j4`, replacing `PLATFORM` with your relevant platform type e.g. `osx`, `linux`, `windows`, etc. To expedite this process, consider setting the `-j` argument to the number of CPUs your machine has.
- Compile the Godot TF Inference addon. From the repository root directory, run `scons platform=PLATFORM`, once again replacing `PLATFORM` with your platform. This creates the addons/godot-tf-inference/bin/PLATFORM directory, where `PLATFORM` is your platform. You should see a dynamic library (.dylib, .so, .dll, etc.) created within this directory.
- Copy the TensorFlow C dynamic libraries to the appropriate platform directory via `cp -RP libtensorflow2/lib/. addons/godot-tf-inference/bin/PLATFORM/`, replacing `PLATFORM` with your platform.
- Zip the addons directory via `zip -FSr addons.zip addons`. This allows the addon to be conveniently distributed and imported into Godot. The zip file can be imported directly into Godot (see Installation).
If frequently iterating on the addon using a Godot project, it may help to copy the compiled dynamic library directly into your Godot project after every build. This can only be done if the addon has previously been installed in your Godot project. From the repository root directory, run `scons platform=PLATFORM && cp addons/godot-tf-inference/bin/PLATFORM/libgodot-tf-inference.dylib MY_GODOT_PROJECT/addons/godot-tf-inference/bin/PLATFORM/`, replacing `PLATFORM` with your platform.
Please feel free to submit a PR or an issue.
- Load model from GDScript
- Specify signature names from GDScript
- Infer from model from GDScript
- Simple example Godot project
- Exportable
- Return errors
- Arbitrary tensor shape
- Decent documentation
- Starter GitHub issues
- Resolve Godot `res://` paths for model
- Windows support
- Linux support
- Cart-pole example
- Godot 4.x support (GDExtension)
- Type checking and handling type errors
- Extract model signature definitions
- Use extracted signature names by default
- Validate input tensor against signature defs
- ONNX support