This is a follow-up to the OpenVINO inference tutorials:
Version 2019 R1.0
Version 2018 R5.0
Version 2018 R4.0
We will work on and extend this tutorial as a demo app for smart cities, specifically for near-miss detection.
Caution!
As of OpenVINO's Release 2019 R1, the models' binaries are not included in the toolkit, as they are part of the model zoo. You are supposed to download them manually as described in the tutorial. Be aware that if you choose to download them to a different path than the default, our scripts/setupenv.sh will not fully work and you will have to add the path to the models yourself when running the program.
This project consists of showcasing the advantages of Intel's OpenVINO toolkit. We will develop a near-miss case scenario, where we will detect vehicles and pedestrians and estimate a metric of a crossroad's dangerousness. For that, we will use the OpenVINO toolkit and OpenCV, all written in C++.
As mentioned previously, we will take step 4 as a starting point, as it provides us with the options to run and stack different models synchronously or asynchronously. Once the proper models for vehicle and pedestrian detection are defined, we proceed to develop a feature that lets the user define the different areas of interest (in this case, sidewalks and streets) that will help in identifying dangerous situations later on. Then, we will work on object tracking to recognize the same object across successive frames, giving us the ability to estimate trajectories, speeds and positions of the objects. Finally, we will define what is considered a dangerous situation and check that the system detects it correctly.
To run the application in this tutorial, the OpenVINO™ toolkit and its dependencies must already be installed and verified using the included demos. Installation instructions may be found at: https://software.intel.com/en-us/articles/OpenVINO-Install-Linux
When needed, the following optional hardware can be used:
- USB camera - Standard USB Video Class (UVC) camera.
- Intel® Core™ CPU with integrated graphics.
- VPU - USB Intel® Movidius™ Neural Compute Stick, referred to as "Myriad".
A summary of what is needed:
- Target and development platforms meeting the requirements described in the "System Requirements" section of the OpenVINO™ toolkit documentation, which may be found at: https://software.intel.com/en-us/openvino-toolkit
  Note: While writing this tutorial, an Intel® i7-8550U with Intel® HD Graphics 520 GPU was used as both the development and target platform.
- Optional:
  - Intel® Movidius™ Neural Compute Stick
  - USB UVC camera
  - Intel® Core™ CPU with integrated graphics
- OpenVINO™ toolkit supported Linux operating system. This tutorial was run on 64-bit Ubuntu 16.04.1 LTS, updated to kernel 4.15.0-43, following the OpenVINO™ toolkit installation instructions.
- The latest OpenVINO™ toolkit installed and verified. Supported versions: 2018 R4.0 and newer (latest version supported: 2019 R1.0.1).
- Git (git) for downloading from the GitHub repository.
- BOOST library. To install on Ubuntu, run:
  apt-get install libboost-dev
  apt-get install libboost-log-dev
By now you should have completed the Linux installation guide for the OpenVINO™ toolkit; however, before continuing, please ensure:
- That after installing the OpenVINO™ toolkit you have run the supplied demo samples.
- If you have and intend to use a GPU: you have installed and tested the GPU drivers.
- If you have and intend to use a USB camera: you have connected and tested the USB camera.
- If you have and intend to use a Myriad: you have connected and tested the USB Intel® Movidius™ Neural Compute Stick.
- That your development platform is connected to a network and has Internet access. To download all the files for this tutorial, you will need to access GitHub on the Internet.
1. Clone the repository at the desired location:
git clone https://github.com/incluit/OpenVino-For-SmartCity.git
2. The first step is to configure the build environment for the OpenVINO™ toolkit by sourcing the "setupvars.sh" script.
source /opt/intel/openvino/bin/setupvars.sh
For versions older than 2019 R1, OpenVINO was installed in a different directory; run this instead:
source /opt/intel/computer_vision_sdk/bin/setupvars.sh
3. Change to the top git repository:
cd OpenVino-For-SmartCity
4. Compatibility with OpenVINO™ Release 2020.1 or greater
If you are using OpenVINO™ Release 2020.1 or greater, you do not need to download the models.
If you are using an OpenVINO™ release >= 2019 R2 and < 2020.1, you will need to execute the following script:
bash scripts/download_models.sh
If you are using OpenVINO™ 2019 R1.0 and have not manually downloaded all the models before, it is necessary to download the person-vehicle-bike-detection-crossroad-0078 detection model before continuing.
cd /opt/intel/<openvino_path>/deployment_tools/tools/model_downloader/
sudo ./downloader.py --name person-vehicle-bike-detection-crossroad-0078
5. Create a directory to build the tutorial in and change to it.
mkdir build
cd build
6. Before running each of the following sections, be sure to source the helper script. That will make it easier to use environment variables instead of long paths to the models:
source ../scripts/setupenv.sh
7. Compile:
cmake -DCMAKE_BUILD_TYPE=Release ../
make
1. First, let us see how it works on a single image file using default synchronous mode.
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -i ../data/car_1.bmp
2. For video files:
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -i ../data/video1_640x320.mp4
3. You can also run the command in asynchronous mode using the option "-n_async 2":
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -i ../data/NewVideo2.mp4 -n_async 2
4. You can also load the models into the GPU or MYRIAD:
Note: In order to run this section, the GPU and/or MYRIAD are required to be present and correctly configured.
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -d_vp GPU -i ../data/NewVideo2.mp4
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -d_vp MYRIAD -i ../data/NewVideo2.mp4
If you want to execute a detection scenario using a web camera, we suggest one of the following commands:
1) If using the native camera: ./intel64/Release/smart_city_tutorial -m_vp $vehicle232 or ./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -i cam
2) If using a USB camera: ./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -i /dev/video1
You can also experiment with different detection models; the ones available so far are:
- person-vehicle-bike-detection-crossroad-0078:
  - -m_vp $vehicle2{16,32}
- vehicle-detection-adas-0002 together with person-detection-retail-0013 or pedestrian-detection-adas-0002:
  - -m $mVDR{16,32} and -m_p $person{1,2}{16,32}
- frozen_yolo_v3:
  - -m_y $yolo16
By default they will be loaded into the CPU, so remember to pass the corresponding device argument:
- -d_vp {CPU,GPU,MYRIAD}
- -d {CPU,GPU,MYRIAD} and -d_p {CPU,GPU,MYRIAD}
- -d_y {CPU,GPU,MYRIAD}
The first two are included with the OpenVINO toolkit, while the last one is the compiled version of the public YOLO general detection model. You can compile it yourself by following this Intel guide, or download our compiled binary and xml. You will need to move these files to the data directory inside your OpenVino-For-SmartCity path.
To enable tracking you should run the command with the -tracking argument:
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -d_vp GPU -i ../data/NewVideo2.mp4 -n_async 16 -tracking
To detect collisions you should run the command with the -tracking and -collision arguments:
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -d_vp GPU -i ../data/video82.mp4 -n_async 16 -tracking -collision
An in-depth explanation can be found in the wiki.
Once we have the tracking working, we can retrieve, for every object, its current and past positions. We average the last 5 frames to filter the noise of the detection models. Once we have the position, we can get the velocity as the difference of two consecutive positions. Again, we calculate the average of the last 5 velocities to avoid abrupt changes that do not match what actually happens in the scene. We also normalize this speed with a factor of 1/y (y being the vertical position of the object in the image). This has to be done because objects moving closer to the camera appear to move faster (in terms of pixels/frame), but this is not true in reality. Once we have the velocities of each tracked object, we can calculate the acceleration analogously.
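As an illustration, here is a minimal C++ sketch of this smoothing and 1/y normalization using OpenCV types; the struct and function names are hypothetical and only approximate what the project actually does.

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <opencv2/core.hpp>

struct TrackedObject {
    std::deque<cv::Point2f> positions;   // newest position at the back
    std::deque<cv::Point2f> velocities;  // newest smoothed velocity at the back
};

// Average the last `n` samples to filter detection noise.
static cv::Point2f averageLast(const std::deque<cv::Point2f>& samples, std::size_t n) {
    std::size_t count = std::min(n, samples.size());
    cv::Point2f acc(0.f, 0.f);
    for (std::size_t i = samples.size() - count; i < samples.size(); ++i) acc += samples[i];
    return count ? acc * (1.0 / count) : acc;
}

// Velocity = difference of two consecutive smoothed positions, scaled by 1/y:
// objects lower in the image (closer to the camera) move more pixels per frame.
cv::Point2f estimateVelocity(const TrackedObject& obj) {
    if (obj.positions.size() < 2) return cv::Point2f(0.f, 0.f);
    std::deque<cv::Point2f> previous(obj.positions.begin(), obj.positions.end() - 1);
    cv::Point2f now  = averageLast(obj.positions, 5);
    cv::Point2f past = averageLast(previous, 5);
    float y = std::max(now.y, 1.f);          // avoid division by zero at the top edge
    return (now - past) * (1.0 / y);
}

// Acceleration is computed analogously from the smoothed velocity history.
cv::Point2f estimateAcceleration(const TrackedObject& obj) {
    if (obj.velocities.size() < 2) return cv::Point2f(0.f, 0.f);
    std::deque<cv::Point2f> previous(obj.velocities.begin(), obj.velocities.end() - 1);
    return averageLast(obj.velocities, 5) - averageLast(previous, 5);
}
```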
In the image above, we can assume that the white van goes at constant speed before reaching the intersection; we can see this plotted as a red line in the chart. Then it tries to brake (we can see a small downward peak) and then the black car crashes into it, increasing its speed. The yellow line represents the acceleration of the van; we can clearly see that a threshold on the acceleration could be used to flag a hard crash like this one.
We define near misses as an "extension" of collisions, in the sense that we can identify thresholds in speed and acceleration and then search for other vehicles in the neighborhood of the offender. We can then detect near misses. These situations and other dangerous ones are further explored in the wiki.
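A minimal sketch of such a rule is shown below; the thresholds, field names and neighborhood radius are placeholders for illustration, not the project's tuned values.

```cpp
#include <cmath>
#include <vector>
#include <opencv2/core.hpp>

struct ObjectState {
    cv::Point2f position;   // smoothed position in the image
    float acceleration;     // magnitude of the normalized acceleration
};

// Hypothetical thresholds; in practice they would be tuned per scene.
const float kAbruptAccel    = 0.8f;   // sudden braking or impact
const float kNeighborRadius = 80.f;   // "neighborhood" of the offender, in pixels

bool isNearMiss(const ObjectState& offender, const std::vector<ObjectState>& others) {
    if (std::fabs(offender.acceleration) < kAbruptAccel)
        return false;                              // no abrupt speed change, nothing to flag
    for (const ObjectState& other : others) {
        float dx = offender.position.x - other.position.x;
        float dy = offender.position.y - other.position.y;
        float dist = std::hypot(dx, dy);
        if (dist > 0.f && dist < kNeighborRadius)  // another road user close to the offender
            return true;
    }
    return false;
}
```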
To draw areas of interest you should run the command with the -show_selection argument:
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -d_vp GPU -i ../data/video82.mp4 -n_async 16 -show_selection -tracking -collision
One of the main benefits of defining areas of interest is that it helps us focus the detection model on a specific region, reducing the margin of error (false positives, misdetections, etc.). With these regions defined we could also crop or mask the original frames to reduce the computing time of the inference and further image processing.
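For example, a frame could be masked with the selected regions using plain OpenCV, as in the hedged sketch below; the function name and region representation are assumptions, not the project's actual API.

```cpp
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Keep only the pixels inside the selected regions so inference and later
// processing only see the relevant parts of the scene.
cv::Mat maskAreasOfInterest(const cv::Mat& frame,
                            const std::vector<std::vector<cv::Point>>& regions) {
    cv::Mat mask = cv::Mat::zeros(frame.size(), CV_8UC1);
    cv::fillPoly(mask, regions, cv::Scalar(255));  // white inside streets/sidewalks/crosswalks
    cv::Mat masked;
    frame.copyTo(masked, mask);                    // everything outside the regions stays black
    return masked;
}
```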
The user can crop the image to a rectangle of interest and draw, on the first frame of the video, the areas involved, namely streets, sidewalks and crosswalks. In the particular case of a street, the user will have to type its orientation (east, west, north, south). Taking these criteria for defining AoIs into account, we are in a position to define rules for identifying dangerous scenarios, e.g. a car going from the street onto the sidewalk, or cars on a crosswalk while there are pedestrians on it. (In Progress)
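As an example of how such a rule could be checked once the areas are drawn, the sketch below tests whether a detected vehicle's ground point falls inside a sidewalk polygon; the names are placeholders and this is not the project's implemented logic, which is still in progress.

```cpp
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// A vehicle is usually anchored by the bottom-center of its bounding box,
// since that is the point touching the ground.
static cv::Point2f groundPoint(const cv::Rect& box) {
    return cv::Point2f(box.x + box.width * 0.5f, static_cast<float>(box.y + box.height));
}

bool vehicleOnSidewalk(const cv::Rect& vehicleBox,
                       const std::vector<std::vector<cv::Point>>& sidewalks) {
    cv::Point2f p = groundPoint(vehicleBox);
    for (const auto& area : sidewalks) {
        // pointPolygonTest returns > 0 inside, 0 on the edge, < 0 outside.
        if (cv::pointPolygonTest(area, p, false) >= 0)
            return true;
    }
    return false;
}
```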
Step by step instructions can be found on the wiki.
We integrated our program with the Intel® IoT DevCloud platform. This developer tool enabled us to run the inference process on different hardware targets. The following is the comparison graph, where higher is better:
We noticed that, in order to get a deeper understanding of near-miss identification, it was necessary to follow the progress of the variables mentioned above (speed, acceleration). In response, a real-time dashboard of collisions and other relevant events was developed and is available as a feature.
Step by step instructions to install and run the dashboard can be found on the wiki.
As we focused on delivering a POC that could be productized in the near future, we understood that a distributed application should make it possible to compare the dangerousness of two different intersections. To address this, we developed a metric that weights each type of near miss and takes into account the total number of detections on each street. This metric is evaluated once every 30 seconds and broadcast through MQTT to AWS IoT Core. To learn more about cloud integration using AWS, follow this link to our wiki.
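To make the idea concrete, here is a minimal sketch of what such a weighted, traffic-normalized score could look like; the event names, weights and struct are assumptions for illustration, and the MQTT/AWS IoT Core publishing step is omitted.

```cpp
#include <map>
#include <string>

struct WindowStats {
    std::map<std::string, int> nearMissCounts;  // hypothetical event types, e.g. "hard_brake"
    int totalDetections = 0;                    // vehicles/pedestrians seen in the 30-second window
};

// Weight each type of near miss and normalize by the traffic volume of the window,
// so a busy intersection is not penalized just for having more detections.
double dangerousnessScore(const WindowStats& stats,
                          const std::map<std::string, double>& weights) {
    if (stats.totalDetections == 0) return 0.0;
    double weighted = 0.0;
    for (const auto& entry : stats.nearMissCounts) {
        auto it = weights.find(entry.first);
        if (it != weights.end()) weighted += it->second * entry.second;
    }
    return weighted / stats.totalDetections;
}

// Every 30 seconds this score would be computed for the latest window and
// published over MQTT to AWS IoT Core (publishing code omitted here).
```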
Near Misses has been optimized for compatibility with OpenVINO's 2018 releases (R4, R5) and 2019 releases (latest version tested: 2019 R1.0.1). It is important for the user to be aware that some changes regarding detection models have been introduced between the 2018 and 2019 releases. First, the 2019 releases do not include the detection models' binaries within the toolkit; the user will have to follow the instructions described in the Open Model Zoo link suggested in the "Foreword" section of this installation guide. Be aware that if you choose to download them to a different path than the default, our "scripts/setupenv.sh" will not fully work and you will have to add the path to the models yourself when running the program. If you are using OpenVINO 2019 R1.0 or greater and have not manually downloaded all the models before, it is necessary to download the person-vehicle-bike-detection-crossroad-0078 detection model before continuing:
cd /opt/intel/<openvino_path>/deployment_tools/tools/model_downloader/
sudo ./downloader.py --name person-vehicle-bike-detection-crossroad-0078
Afterwards, the user will be able to initiate the building process and start using Near Misses.
First, in order to successfully execute the building process, please make sure that all the declared prerequisites, hardware and software, have been met. In particular, regarding software prerequisites, it is essential that the OpenVINO toolkit has been installed by following Intel's instructions described in the following links:
- Version 2019 R1.1 (latest): https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html
- Version 2019 R1.01: https://docs.openvinotoolkit.org/2019_R1.01/_docs_install_guides_installing_openvino_linux.html
- Version 2019 R1.0: https://docs.openvinotoolkit.org/2019_R1/_docs_install_guides_installing_openvino_linux.html
- Version 2018 R5.0: https://docs.openvinotoolkit.org/2018_R5/_docs_install_guides_installing_openvino_linux.html
Secondly, make sure that the BOOST library has been installed. If not, execute the following commands:
apt-get install libboost-dev
apt-get install libboost-log-dev
Third, it is essential for the building process to configure the build environment for the OpenVINO toolkit by executing the corresponding command:
2019 R1.X: source /opt/intel/openvino/bin/setupvars.sh
2018 R4-R5: source /opt/intel/computer_vision_sdk/bin/setupvars.sh
Finally, before executing the compilation process, be sure to source the helper script. That will make it easier to use environment variables instead of long paths to the models:
source ../scripts/setupenv.sh
To execute a near-miss detection scenario using a web camera, the user can run one of the following commands:
If using the native camera:
./intel64/Release/smart_city_tutorial -m_vp $vehicle232
or
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -i cam
If using a USB camera:
./intel64/Release/smart_city_tutorial -m_vp $vehicle232 -i /dev/video1
- ✓ Short README with usage examples
- ✓ Travis + Sonarcloud
- ✓ Include diagrams and images
- ✓ Elaborate on the wiki
- ✓ Try with different models
- ✓ Detect vehicles and pedestrians
- ✓ Draw Areas of Interest
- ✓ Object Tracking
- ✓ Object Trajectories
- ✓ Fix labels for the other models
- ✓ Calculate objects velocities
- ✓ Calculate objects accelerations
- ✓ Detect collisions
- ✓ Elaborate on dangerous situations to be detected (near misses)
- ✓ Detect these situations