Back | Next | Contents
Semantic Segmentation
Next we'll run realtime semantic segmentation on a live camera feed, available for C++ and Python:

- `segnet-camera.cpp` (C++)
- `segnet-camera.py` (Python)
Similar to the previous `segnet-console` example, these camera applications use segmentation networks, except that they process a live video feed instead. `segnet-camera` accepts various optional command-line parameters, including:
- `--network` flag changes the segmentation model being used (see available networks)
- `--alpha` flag sets the alpha blending value for the overlay (default is `120`)
- `--filter-mode` flag accepts `point` or `linear` sampling (default is `linear`)
- `--camera` flag sets the camera device to use
  - MIPI CSI cameras are used by specifying the sensor index (`0` or `1`, etc.)
  - V4L2 USB cameras are used by specifying their `/dev/video` node (`/dev/video0`, `/dev/video1`, etc.)
  - The default is to use MIPI CSI sensor 0 (`--camera=0`)
- `--width` and `--height` flags set the camera resolution (default is `1280x720`)
  - The resolution should be set to a format that the camera supports.
  - Query the available formats with the following commands:

```bash
$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext
```
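If you want to pick a supported resolution programmatically, the `Size: Discrete WIDTHxHEIGHT` lines in the `v4l2-ctl --list-formats-ext` output can be extracted with a short script. This is a minimal sketch; the sample text below is a hypothetical excerpt, as the actual output varies by camera driver:

```python
import re

def parse_formats(v4l2_output):
    # Collect unique (width, height) pairs from "Size: Discrete WxH" lines
    return sorted({(int(w), int(h))
                   for w, h in re.findall(r"Size: Discrete (\d+)x(\d+)", v4l2_output)})

# Hypothetical excerpt of what a USB webcam might report:
sample = """
    [0]: 'MJPG' (Motion-JPEG, compressed)
        Size: Discrete 1920x1080
        Size: Discrete 1280x720
        Size: Discrete 640x480
"""
print(parse_formats(sample))   # [(640, 480), (1280, 720), (1920, 1080)]
```

Any of the returned resolutions can then be passed to the `--width` and `--height` flags.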
You can combine the usage of these flags as needed, and there are additional command-line parameters available for loading custom models. Launch the application with the `--help` flag to receive more info, or see the Examples readme.
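To illustrate what the `--alpha` flag controls: the colorized class overlay is alpha-blended onto the camera frame, with the alpha value on a 0–255 scale. The sketch below shows that kind of blend with NumPy as an illustration of the math only, not the project's actual implementation (the function name is hypothetical):

```python
import numpy as np

def blend_overlay(image, mask, alpha=120):
    # Alpha-blend a colorized class mask onto an image.
    # alpha is 0-255, matching the scale of the --alpha flag (default 120).
    img = image.astype(np.int32)
    msk = mask.astype(np.int32)
    return ((msk * alpha + img * (255 - alpha)) // 255).astype(np.uint8)

# Toy 1x1 "images": a mid-gray pixel under a pure-red class color
image = np.array([[[128, 128, 128]]], dtype=np.uint8)
mask  = np.array([[[255,   0,   0]]], dtype=np.uint8)
print(blend_overlay(image, mask).tolist())   # [[[187, 67, 67]]]
```

Higher alpha values make the segmentation colors more opaque; `--alpha=255` would hide the camera image entirely under the mask.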
Below are some typical scenarios for launching the program - see this table for the models available to use.
```bash
# C++
$ ./segnet-camera --network=fcn-resnet18-mhp        # default MIPI CSI camera (1280x720)
$ ./segnet-camera --camera=/dev/video0              # V4L2 camera /dev/video0 (1280x720)
$ ./segnet-camera --width=640 --height=480          # default MIPI CSI camera (640x480)

# Python
$ ./segnet-camera.py --network=fcn-resnet18-mhp     # default MIPI CSI camera (1280x720)
$ ./segnet-camera.py --camera=/dev/video0           # V4L2 camera /dev/video0 (1280x720)
$ ./segnet-camera.py --width=640 --height=480       # default MIPI CSI camera (640x480)
```
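The `--camera` values in the scenarios above are interpreted in one of two ways: a `/dev/video*` path selects a V4L2 USB camera, while anything else is treated as a MIPI CSI sensor index. A hypothetical helper showing that distinction (the function name and return format are illustrative, not the project's API):

```python
def classify_camera(camera="0"):
    # /dev/video* paths select a V4L2 USB camera;
    # anything else is treated as a MIPI CSI sensor index.
    if camera.startswith("/dev/video"):
        return ("v4l2", camera)
    return ("csi", int(camera))

print(classify_camera("0"))             # ('csi', 0)
print(classify_camera("/dev/video1"))   # ('v4l2', '/dev/video1')
```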
note: for example cameras to use, see these sections of the Jetson Wiki:

- Nano: https://eLinux.org/Jetson_Nano#Cameras
- Xavier: https://eLinux.org/Jetson_AGX_Xavier#Ecosystem_Products_.26_Cameras
- TX1/TX2: developer kits include an onboard MIPI CSI sensor module (OV5693)
Displayed in the OpenGL window is the live camera stream overlaid with the segmentation output, alongside the solid segmentation mask for clarity. Here are some examples of running it with the different models that are available to try:
```bash
# C++
$ ./segnet-camera --network=fcn-resnet18-mhp
# Python
$ ./segnet-camera.py --network=fcn-resnet18-mhp
```

```bash
# C++
$ ./segnet-camera --network=fcn-resnet18-sun
# Python
$ ./segnet-camera.py --network=fcn-resnet18-sun
```

```bash
# C++
$ ./segnet-camera --network=fcn-resnet18-deepscene
# Python
$ ./segnet-camera.py --network=fcn-resnet18-deepscene
```
Feel free to experiment with the different models and resolutions for indoor and outdoor environments. Next, we're going to introduce the concepts of Transfer Learning and train some example DNN models on our Jetson using PyTorch.
Next | Transfer Learning with PyTorch
Back | Segmenting Images from the Command Line
© 2016-2019 NVIDIA | Table of Contents