The Open Model Zoo demo applications are console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models running inference simultaneously, such as detecting a person in a video stream while also detecting the person's physical attributes, such as age, gender, and emotional state.
For the Intel® Distribution of OpenVINO™ toolkit, the demos are available after installation in the following directory: `<INSTALL_DIR>/deployment_tools/open_model_zoo/demos`.
The demos can also be obtained from the Open Model Zoo GitHub repository. C++, C++ G-API, and Python versions are located in the `cpp`, `cpp_gapi`, and `python` subdirectories, respectively.
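If you prefer working from source, one way to get the demos is to clone the repository (the URL below is the public Open Model Zoo GitHub location):

```sh
git clone https://github.com/openvinotoolkit/open_model_zoo.git
cd open_model_zoo/demos
```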
The Open Model Zoo includes the following demos:
- 3D Human Pose Estimation Python* Demo - 3D human pose estimation demo.
- Action Recognition Python* Demo - Demo application for Action Recognition algorithm, which classifies actions that are being performed on input video.
- BERT Question Answering Python* Demo - The demo demonstrates how to run BERT-based models for the question answering task.
- BERT Question Answering Embedding Python* Demo - The demo demonstrates how to run BERT-based models for the question answering task using embeddings.
- Classification C++ Demo - Shows an example of using neural networks for image classification.
- Colorization Python* Demo - Colorization demo colorizes input frames.
- Crossroad Camera C++ Demo - Person Detection followed by the Person Attributes Recognition and Person Reidentification Retail, supports images/video and camera inputs.
- Deblurring Python* Demo - Demo for deblurring the input images.
- Face Detection MTCNN Python* Demo - The demo demonstrates how to run MTCNN face detection model to detect faces on images.
- Formula Recognition Python* Demo - The demo demonstrates how to run Im2latex formula recognition models and recognize latex formulas.
- Gaze Estimation C++ Demo - Face detection followed by gaze estimation, head pose estimation and facial landmarks regression.
- Gesture Recognition Python* Demo - Demo application for Gesture Recognition algorithm (e.g. American Sign Language gestures), which classifies gesture actions that are being performed on input video.
- Handwritten Text Recognition Python* Demo - The demo demonstrates how to run Handwritten Japanese Recognition models and Handwritten Simplified Chinese Recognition models.
- Human Pose Estimation C++ Demo - Human pose estimation demo.
- Human Pose Estimation Python* Demo - Human pose estimation demo.
- Image Inpainting Python* Demo - Demo application for GMCNN inpainting network.
- Image Retrieval Python* Demo - The demo demonstrates how to run Image Retrieval models using OpenVINO™.
- Image Segmentation C++ Demo - Inference of semantic segmentation networks (supports video and camera inputs).
- Image Segmentation Python* Demo - Inference of semantic segmentation networks (supports video and camera inputs).
- Image Translation Python* Demo - Demo application to synthesize a photo-realistic image based on exemplar image.
- Instance Segmentation Python* Demo - Inference of instance segmentation networks trained in Detectron or maskrcnn-benchmark.
- Interactive Face Detection C++ Demo - Face Detection coupled with Age/Gender, Head-Pose, Emotion, and Facial Landmarks detectors. Supports video and camera inputs.
- Interactive Face Detection G-API Demo - G-API based Face Detection coupled with Age/Gender, Head-Pose, Emotion, and Facial Landmarks detectors. Supports video and camera inputs.
- Machine Translation Python* Demo - The demo demonstrates how to run non-autoregressive machine translation models.
- Mask R-CNN C++ Demo for TensorFlow* Object Detection API - Inference of instance segmentation networks created with TensorFlow* Object Detection API.
- Monodepth Python* Demo - The demo demonstrates how to run monocular depth estimation models.
- Multi-Camera Multi-Target Tracking Python* Demo - Demo application for tracking multiple targets (persons or vehicles) across multiple cameras.
- Multi-Channel Face Detection C++ Demo - The demo demonstrates an inference pipeline for multi-channel face detection scenario.
- Multi-Channel Human Pose Estimation C++ Demo - The demo demonstrates an inference pipeline for multi-channel human pose estimation scenario.
- Multi-Channel Object Detection Yolov3 C++ Demo - The demo demonstrates an inference pipeline for multi-channel common object detection scenario.
- Object Detection Python* Demo - Demo application for several object detection model types (such as SSD, YOLO, etc.).
- Object Detection C++ Demo - Demo application for Object Detection networks (different models architectures are supported), async API showcase, simple OpenCV interoperability (supports video and camera inputs).
- Pedestrian Tracker C++ Demo - Demo application for pedestrian tracking scenario.
- Security Barrier Camera C++ Demo - Vehicle Detection followed by the Vehicle Attributes and License-Plate Recognition, supports images/video and camera inputs.
- Speech Recognition Python* Demo - Speech recognition demo: takes audio file with an English phrase on input, and converts it into text.
- Single Human Pose Estimation Python* Demo - 2D human pose estimation demo.
- Smart Classroom C++ Demo - Face recognition and action detection demo for classroom environment.
- Sound Classification Python* Demo - Demo application for sound classification algorithm.
- Super Resolution C++ Demo - Super Resolution demo (the demo supports only images as inputs). It enhances the resolution of the input image.
- Text Detection C++ Demo - Text Detection demo. It detects and recognizes multi-oriented scene text on an input image and puts a bounding box around the detected area.
- Text Spotting Python* Demo - The demo demonstrates how to run Text Spotting models.
- Text-to-speech Python* Demo - Shows an example of using Forward Tacotron and WaveRNN neural networks for text to speech task.
To run the demo applications, you can use images and videos from the media files collection available at https://github.com/intel-iot-devkit/sample-videos.
NOTE: Inference Engine HDDL and FPGA plugins are available in proprietary distribution only.
You can download the pre-trained models using the OpenVINO Model Downloader or from https://download.01.org/opencv/.
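For example, a typical Model Downloader invocation might look like the following, assuming the install layout described above (the model name and output directory are illustrative):

```sh
python3 <INSTALL_DIR>/deployment_tools/open_model_zoo/tools/downloader/downloader.py \
    --name face-detection-adas-0001 \
    --output_dir ~/models
```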
The table below shows the correlation between models, demos, and supported plugins. The plugin names are exactly as they are passed to the demos with the `-d` option. For the correlation between plugins and supported devices, see the Supported Devices section.
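For instance, a demo can be pointed at a particular plugin like this (a sketch; the model and input are placeholders, and each demo documents its own exact options):

```sh
./human_pose_estimation_demo -m human-pose-estimation-0001.xml -i input.mp4 -d GPU
```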
NOTE: MYRIAD below stands for Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ Vision Processing Units.
Model | Demos supported on the model | CPU | GPU | MYRIAD/HDDL | HETERO:FPGA,CPU |
---|---|---|---|---|---|
action-recognition-0001-decoder | Action Recognition Python* Demo | Supported | Supported | Supported | |
action-recognition-0001-encoder | Action Recognition Python* Demo | Supported | Supported | Supported | |
age-gender-recognition-retail-0013 | Interactive Face Detection Demo | Supported | Supported | Supported | Supported |
asl-recognition-0004 | Gesture Recognition Python* Demo | Supported | Supported | | |
driver-action-recognition-adas-0002-decoder | Action Recognition Python* Demo | Supported | Supported | Supported | |
driver-action-recognition-adas-0002-encoder | Action Recognition Python* Demo | Supported | Supported | Supported | Supported |
emotions-recognition-retail-0003 | Interactive Face Detection Demo | Supported | Supported | Supported | Supported |
face-detection-adas-0001 | Interactive Face Detection Demo | Supported | Supported | Supported | Supported |
face-detection-retail-0004 | Interactive Face Detection Demo | Supported | Supported | Supported | Supported |
facial-landmarks-35-adas-0002 | Interactive Face Detection Demo | Supported | Supported | Supported | Supported |
facial-landmarks-35-adas-0002 | Gaze Estimation Demo | Supported | Supported | Supported | Supported |
gaze-estimation-adas-0002 | Gaze Estimation Demo | Supported | Supported | Supported | Supported |
handwritten-japanese-recognition-0001 | Handwritten Text Recognition Python* Demo | Supported | Supported | Supported | |
handwritten-simplified-chinese-recognition-0001 | Handwritten Text Recognition Python* Demo | Supported | Supported | Supported | |
head-pose-estimation-adas-0001 | Interactive Face Detection Demo | Supported | Supported | Supported | Supported |
head-pose-estimation-adas-0001 | Gaze Estimation Demo | Supported | Supported | Supported | Supported |
human-pose-estimation-0001 | Human Pose Estimation C++ Demo, Human Pose Estimation Python* Demo | Supported | Supported | Supported | Supported |
human-pose-estimation-0005 | Human Pose Estimation Python* Demo | Supported | Supported | | |
human-pose-estimation-0006 | Human Pose Estimation Python* Demo | Supported | Supported | | |
human-pose-estimation-0007 | Human Pose Estimation Python* Demo | Supported | Supported | | |
human-pose-estimation-3d-0001 | 3D Human Pose Estimation Python* Demo | Supported | Supported | | |
image-retrieval-0001 | Image Retrieval Python* Demo | Supported | Supported | Supported | Supported |
instance-segmentation-security-0002 | Instance Segmentation Python* Demo | Supported | Supported | | |
instance-segmentation-security-0091 | Instance Segmentation Python* Demo | Supported | Supported | | |
instance-segmentation-security-0228 | Instance Segmentation Python* Demo | Supported | Supported | | |
instance-segmentation-security-1039 | Instance Segmentation Python* Demo | Supported | Supported | | |
instance-segmentation-security-1040 | Instance Segmentation Python* Demo | Supported | Supported | | |
landmarks-regression-retail-0009 | Smart Classroom Demo | Supported | Supported | Supported | Supported |
license-plate-recognition-barrier-0001 | Security Barrier Camera Demo | Supported | Supported | Supported | Supported |
pedestrian-and-vehicle-detector-adas-0001 | any demo that supports SSD*-based models | Supported | Supported | Supported | Supported |
pedestrian-detection-adas-0002 | any demo that supports SSD*-based models | Supported | Supported | Supported | Supported |
person-attributes-recognition-crossroad-0230 | Crossroad Camera Demo | Supported | Supported | Supported | Supported |
person-attributes-recognition-crossroad-0234 | Crossroad Camera Demo | Supported | Supported | Supported | |
person-attributes-recognition-crossroad-0238 | Crossroad Camera Demo | Supported | Supported | Supported | |
person-detection-retail-0002 | Pedestrian Tracker Demo | Supported | Supported | Supported | Supported |
person-detection-retail-0013 | Object Detection Demo | Supported | Supported | Supported | Supported |
person-reidentification-retail-0277 | Crossroad Camera Demo | Supported | Supported | | |
person-reidentification-retail-0286 | Crossroad Camera Demo, Multi-Camera Multi-Target Tracking Demo | Supported | Supported | | |
person-reidentification-retail-0287 | Crossroad Camera Demo, Multi-Camera Multi-Target Tracking Demo | Supported | Supported | | |
person-reidentification-retail-0288 | Crossroad Camera Demo, Multi-Camera Multi-Target Tracking Demo | Supported | Supported | | |
person-vehicle-bike-detection-crossroad-0078 | Crossroad Camera Demo | Supported | Supported | Supported | Supported |
person-vehicle-bike-detection-crossroad-1016 | Crossroad Camera Demo | Supported | Supported | Supported | |
person-vehicle-bike-detection-crossroad-yolov3-1020 | Object Detection Python* Demo | Supported | Supported | Supported | |
person-detection-action-recognition-0005 | Smart Classroom Demo | Supported | Supported | Supported | Supported |
person-detection-action-recognition-teacher-0002 | Smart Classroom Demo | Supported | Supported | Supported | Supported |
road-segmentation-adas-0001 | Segmentation Demo | Supported | Supported | Supported | Supported |
semantic-segmentation-adas-0001 | Image Segmentation Demo | Supported | Supported | Supported | Supported |
single-human-pose-estimation-0001 | Single Human Pose Estimation Python* Demo | Supported | Supported | Supported | |
single-image-super-resolution-1032 | Super Resolution Demo | Supported | Supported | Supported | Supported |
single-image-super-resolution-1033 | Super Resolution Demo | Supported | Supported | Supported | Supported |
text-detection-0003 | Text Detection Demo | Supported | Supported | Supported | Supported |
text-detection-0004 | Text Detection Demo | Supported | Supported | Supported | Supported |
text-recognition-0012 | Text Detection Demo | Supported | Supported | Supported | |
text-recognition-0013 | Text Detection Demo | Supported | Supported | Supported | |
vehicle-attributes-recognition-barrier-0039 | Security Barrier Camera Demo | Supported | Supported | Supported | Supported |
vehicle-attributes-recognition-barrier-0042 | Security Barrier Camera Demo | Supported | Supported | Supported | |
vehicle-license-plate-detection-barrier-0106 | Security Barrier Camera Demo | Supported | Supported | Supported | Supported |
vehicle-license-plate-detection-barrier-0123 | Security Barrier Camera Demo | Supported | Supported | Supported | Supported |
vehicle-detection-adas-0002 | any demo that supports SSD*-based models | Supported | Supported | Supported | Supported |
yolo-v2-tiny-vehicle-detection-0001 | Object Detection Python* Demo | Supported | Supported | Supported | |
Note that FPGA support comes through heterogeneous execution, for example, when the post-processing happens on the CPU.
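A heterogeneous device is passed with the same `HETERO:FPGA,CPU` string used in the table above, for example (a sketch with placeholder model and input paths):

```sh
./security_barrier_camera_demo -m vehicle-license-plate-detection-barrier-0106.xml \
    -i input.mp4 -d HETERO:FPGA,CPU
```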
To build the demos, you need to source the Inference Engine and OpenCV environment from a binary package, which is available as a proprietary distribution. Run the following command before building the demos (assuming that the binary package was installed to `<INSTALL_DIR>`):

```sh
source <INSTALL_DIR>/deployment_tools/bin/setupvars.sh
```
You can also build the demos manually using the Inference Engine built from the openvino repo. In this case, set the `InferenceEngine_DIR` environment variable to a folder containing `InferenceEngineConfig.cmake` and `ngraph_DIR` to a folder containing `ngraphConfig.cmake` in the build folder. Also set `OpenCV_DIR` to point to the OpenCV package to use; the same OpenCV version should be used for both the Inference Engine and the demos build. Alternatively, these values can be provided via the command line while running `cmake`. See CMake's search procedure.
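As an illustration, a manual configuration might look like the following (a sketch; all three directory paths are placeholders for your own build trees):

```sh
cmake -DCMAKE_BUILD_TYPE=Release \
      -DInferenceEngine_DIR=$HOME/openvino/build \
      -Dngraph_DIR=$HOME/openvino/build/ngraph \
      -DOpenCV_DIR=$HOME/opencv/build \
      <open_model_zoo>/demos
```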
Refer to the Inference Engine build instructions for details. Also add the path to the built Inference Engine libraries to the `LD_LIBRARY_PATH` (Linux*) or `PATH` (Windows*) variable before building the demos.
The officially supported Linux* build environment is the following:
- Ubuntu* 16.04 LTS 64-bit or CentOS* 7.4 64-bit
- GCC* 5.4.0 (for Ubuntu* 16.04) or GCC* 4.8.5 (for CentOS* 7.4)
- CMake* version 2.8 or higher
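You can verify that your toolchain meets these requirements with the standard version queries:

```sh
gcc --version
cmake --version
```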
To build the demo applications for Linux, go to the directory with the `build_demos.sh` script and run it:

```sh
build_demos.sh
```
You can also build the demo applications manually:

- Navigate to a directory that you have write access to and create a demos build directory. This example uses a directory named `build`:
```sh
mkdir build
```
- Go to the created directory:
```sh
cd build
```
- Run CMake to generate the Make files for release or debug configuration:
  - For release configuration:
  ```sh
  cmake -DCMAKE_BUILD_TYPE=Release <open_model_zoo>/demos
  ```
  - For debug configuration:
  ```sh
  cmake -DCMAKE_BUILD_TYPE=Debug <open_model_zoo>/demos
  ```
- Run `make` to build the demos:
```sh
make
```

For the release configuration, the demo application binaries are in `<path_to_build_directory>/intel64/Release/`; for the debug configuration, in `<path_to_build_directory>/intel64/Debug/`.
The recommended Windows* build environment is the following:
- Microsoft Windows* 10
- Microsoft Visual Studio* 2015, 2017, or 2019
- CMake* version 2.8 or higher
NOTE: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
To build the demo applications for Windows, go to the directory with the `build_demos_msvc.bat` batch file and run it:

```bat
build_demos_msvc.bat
```
By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build a solution for the demo code. Optionally, you can also specify the preferred Microsoft Visual Studio version to be used by the script. Supported versions are `VS2015`, `VS2017`, and `VS2019`. For example, to build the demos using Microsoft Visual Studio 2017, use the following command:

```bat
build_demos_msvc.bat VS2017
```
The demo application binaries are in the `C:\Users\<username>\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release` directory.
You can also build a generated solution yourself, for example, if you want to build binaries in the Debug configuration. Run the appropriate version of Microsoft Visual Studio and open the generated solution file, `C:\Users\<username>\Documents\Intel\OpenVINO\omz_demos_build\Demos.sln`.
Some of the Python demo applications require native Python extension modules to be built before they can be run. This requires you to have Python development files (headers and import libraries) installed. To build these modules, follow the instructions for building the demo applications above, but add `-DENABLE_PYTHON=ON` to either the `cmake` or the `build_demos*` command, depending on which you use. For example:

```sh
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON <open_model_zoo>/demos
```
Before running compiled binary files, make sure your application can find the Inference Engine and OpenCV libraries. If you use a proprietary distribution to build demos, run the `setupvars` script to set all necessary environment variables:

```sh
source <INSTALL_DIR>/bin/setupvars.sh
```
If you use your own Inference Engine and OpenCV binaries to build the demos, make sure you have added them to the `LD_LIBRARY_PATH` environment variable.
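For instance (a sketch; the library locations are placeholders for wherever your own builds live):

```sh
export LD_LIBRARY_PATH="$HOME/openvino/bin/intel64/Release/lib:$HOME/opencv/lib:$LD_LIBRARY_PATH"
```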
(Optional): The OpenVINO environment variables are removed when you close the shell. As an option, you can permanently set the environment variables as follows:

- Open the `.bashrc` file in `<user_home_directory>`:
```sh
vi <user_home_directory>/.bashrc
```
- Add this line to the end of the file:
```sh
source <INSTALL_DIR>/bin/setupvars.sh
```
- Save and close the file: press the Esc key, type `:wq`, and press the Enter key.
- To test your change, open a new terminal. You will see `[setupvars.sh] OpenVINO environment initialized`.
To run Python demo applications that require native Python extension modules, you must additionally set up the `PYTHONPATH` environment variable as follows, where `<bin_dir>` is the directory with the built demo applications:

```sh
export PYTHONPATH="$PYTHONPATH:<bin_dir>/lib"
```
You are ready to run the demo applications. To learn about how to run a particular demo, read the demo documentation by clicking the demo name in the demo list above.
Before running compiled binary files, make sure your application can find the Inference Engine and OpenCV libraries.
Optionally, download the OpenCV community FFmpeg plugin. There is a downloader script in the OpenVINO package: `<INSTALL_DIR>\opencv\ffmpeg-download.ps1`.
If you use a proprietary distribution to build demos, run the `setupvars` script to set all necessary environment variables:

```bat
<INSTALL_DIR>\bin\setupvars.bat
```
If you use your own Inference Engine and OpenCV binaries to build the demos, make sure you have added them to the `PATH` environment variable.
To run Python demo applications that require native Python extension modules, you must additionally set up the `PYTHONPATH` environment variable as follows, where `<bin_dir>` is the directory with the built demo applications:

```bat
set PYTHONPATH=%PYTHONPATH%;<bin_dir>
```
To debug or run the demos on Windows in Microsoft Visual Studio, make sure you have properly configured Debugging environment settings for the Debug and Release configurations. Set correct paths to the OpenCV libraries and to the debug and release versions of the Inference Engine libraries. For example, for the Debug configuration, go to the project's Configuration Properties, select the Debugging category, and set the PATH variable in the Environment field to the following:

```bat
PATH=<INSTALL_DIR>\deployment_tools\inference_engine\bin\intel64\Debug;<INSTALL_DIR>\opencv\bin;%PATH%
```
where `<INSTALL_DIR>` is the directory in which the OpenVINO toolkit is installed.
You are ready to run the demo applications. To learn about how to run a particular demo, read the demo documentation by clicking the demo name in the demo list above.