tensorflow_cpp is a header-only library that provides helpful wrappers around the TensorFlow C++ API, allowing you to easily load, inspect, and run saved models and frozen graphs in C++. The library is easy to integrate into CMake projects, but is also available as a ROS and ROS2 package.
Important
This repository is open-sourced and maintained by the Institute for Automotive Engineering (ika) at RWTH Aachen University.
Deep Learning is one of many research topics within our Vehicle Intelligence & Automated Driving domain.
If you would like to learn more about how we can support your deep learning or automated driving efforts, feel free to reach out to us!
Timo Woopen - Manager Research Area Vehicle Intelligence & Automated Driving
+49 241 80 23549
timo.woopen@ika.rwth-aachen.de
If you are looking for an easy way to install the TensorFlow C++ API, we suggest that you also check out our repository libtensorflow_cc. There, we provide a pre-built library and a Docker image for easy installation and usage of the TensorFlow C++ API.
Loading and running a single-input/single-output model
#include <iostream>
#include <string>
#include <vector>
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow_cpp/model.h>
// load single-input/single-output model
std::string model_path = "/PATH/TO/MODEL";
tensorflow_cpp::Model model;
model.loadModel(model_path);
// log model info
std::cout << model.getInfoString() << std::endl;
// get input/output shape/type, if required
std::vector<int> input_shape = model.getInputShape();
tensorflow::DataType output_type = model.getOutputType();
// ... do something ...
// create and fill input tensor
tensorflow::Tensor input_tensor;
// ... fill input tensor ...
// run model
tensorflow::Tensor output_tensor = model(input_tensor);
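What "fill input tensor" looks like depends entirely on your model. As a rough sketch, assuming a float input with a dynamic batch dimension (-1) that we pin to 1, the tensor could be allocated from the queried shape and filled like this (standard TensorFlow C++ tensor handling, nothing tensorflow_cpp-specific):
// sketch: allocate a float input tensor from the queried shape,
// pinning a dynamic batch dimension (-1) to 1
tensorflow::TensorShape shape;
for (int dim : model.getInputShape()) shape.AddDim(dim == -1 ? 1 : dim);
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, shape);
input_tensor.flat<float>().setZero();  // replace with your actual input data
// after running the model, the output can be read element-wise
float first_value = output_tensor.flat<float>()(0);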
Loading and running a multi-input/multi-output model
#include <iostream>
#include <string>
#include <vector>
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow_cpp/model.h>
// load multi-input/multi-output model
std::string model_path = "/PATH/TO/MODEL";
tensorflow_cpp::Model model;
model.loadModel(model_path);
// log model info
std::cout << model.getInfoString() << std::endl;
// input/output layer names are determined automatically,
// but could potentially have different order than expected
// get input/output shapes/types, if required
std::vector<int> input_shape_1 = model.getNodeShapes()[0];
tensorflow::DataType output_type_2 = model.getNodeTypes()[1];
// ... do something ...
// create and fill input tensors
tensorflow::Tensor input_tensor_1;
tensorflow::Tensor input_tensor_2;
// ... fill input tensors ...
// run model
auto outputs = model({input_tensor_1, input_tensor_2});
tensorflow::Tensor& output_tensor_1 = outputs[0];
tensorflow::Tensor& output_tensor_2 = outputs[1];
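Since the automatically determined node order may not match your expectations, it can be worth logging the queried shapes before relying on positional indexing. A minimal sketch, assuming getNodeShapes() lists the nodes in the same order used for indexing above:
// sketch: print the queried node shapes to verify their order
const auto node_shapes = model.getNodeShapes();
for (std::size_t i = 0; i < node_shapes.size(); ++i) {
  std::cout << "node " << i << ": [";
  for (int dim : node_shapes[i]) std::cout << " " << dim;
  std::cout << " ]" << std::endl;
}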
Loading and running a multi-input/multi-output model with specific inputs/outputs
#include <iostream>
#include <string>
#include <vector>
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow_cpp/model.h>
// load multi-input/multi-output model
std::string model_path = "/PATH/TO/MODEL";
tensorflow_cpp::Model model;
model.loadModel(model_path);
// log model info
std::cout << model.getInfoString() << std::endl;
// set model input/output layer names (see `model.logInfo()`)
const std::string kModelInputName1 = "input1";
const std::string kModelInputName2 = "input2";
const std::string kModelOutputName1 = "output1";
const std::string kModelOutputName2 = "output2";
// get input/output shapes/types, if required
std::vector<int> input_shape_1 = model.getNodeShape(kModelInputName1);
tensorflow::DataType output_type_2 = model.getNodeType(kModelOutputName2);
// ... do something ...
// create and fill input tensors
tensorflow::Tensor input_tensor_1;
tensorflow::Tensor input_tensor_2;
// ... fill input tensors ...
// run model
auto outputs = model({{kModelInputName1, input_tensor_1}, {kModelInputName2, input_tensor_2}}, {kModelOutputName1, kModelOutputName2});
tensorflow::Tensor& output_tensor_1 = outputs[kModelOutputName1];
tensorflow::Tensor& output_tensor_2 = outputs[kModelOutputName2];
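To double-check that a returned tensor has the expected shape, its TensorShape can be printed directly; DebugString() is plain TensorFlow C++, not part of tensorflow_cpp:
// sketch: inspect a returned output tensor by name
std::cout << kModelOutputName1 << " shape: "
          << output_tensor_1.shape().DebugString() << std::endl;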
tensorflow_cpp is a wrapper around the official TensorFlow C++ API. The C++ API, including libtensorflow_cc.so, must be installed on the system.
Instead of having to build the C++ API from source yourself, we recommend checking out our repository libtensorflow_cc. There, we provide a pre-built library and a Docker image for easy installation and usage of the TensorFlow C++ API.
Installation is as easy as the following. Head over to libtensorflow_cc for more details.
ARCH=$(dpkg --print-architecture)
wget https://github.com/ika-rwth-aachen/libtensorflow_cc/releases/download/v2.9.2/libtensorflow-cc_2.9.2-gpu_${ARCH}.deb
sudo dpkg -i libtensorflow-cc_2.9.2-gpu_${ARCH}.deb
ldconfig
If you have already installed the C++ API another way, you can use the provided TensorFlowConfig.cmake to enable the find_package(TensorFlow REQUIRED) call in tensorflow_cpp's CMakeLists.txt.
- Clone this repository.
  git clone https://github.com/ika-rwth-aachen/tensorflow_cpp.git
  cd tensorflow_cpp
- Install tensorflow_cpp system-wide.
  # tensorflow_cpp$
  mkdir -p build
  cd build
  cmake ..
  sudo make install
- Use find_package() to locate and integrate tensorflow_cpp into your CMake project. See the CMake example project.
  # CMakeLists.txt
  find_package(tensorflow_cpp REQUIRED)
  # ...
  add_executable(foo ...) # / add_library(foo ...)
  # ...
  target_link_libraries(foo tensorflow_cpp)
- Clone this repository into your ROS/ROS2 workspace.
  git clone https://github.com/ika-rwth-aachen/tensorflow_cpp.git
  cd tensorflow_cpp
- In order to include tensorflow_cpp in a ROS/ROS2 package, specify the dependency in its package.xml and use find_package() in your package's CMakeLists.txt.
  <!-- package.xml -->
  <depend>tensorflow_cpp</depend>

  # CMakeLists.txt
  # ROS
  find_package(catkin REQUIRED COMPONENTS
    tensorflow_cpp
  )
  # ROS2
  find_package(tensorflow_cpp REQUIRED)
  ament_target_dependencies(<TARGET> tensorflow_cpp)
In order to build and run the test cases defined in tests/, execute the following.
# tensorflow_cpp$
mkdir -p build
cd build
cmake -DBUILD_TESTING=ON ..
make
ctest
Click here to be taken to the full API documentation.
The documentation can be generated by running Doxygen.
# tensorflow_cpp/doc$
doxygen
This work is accomplished within the projects 6GEM (FKZ 16KISK038) and UNICARagil (FKZ 16EMO0284K). We acknowledge the financial support for the projects by the Federal Ministry of Education and Research of Germany (BMBF).
This repository is not endorsed by or otherwise affiliated with TensorFlow or Google. TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc. TensorFlow is released under the Apache License 2.0.