
OpenDR Perception Package

This package contains ROS nodes related to the perception package of OpenDR.


Prerequisites

Before you can run any of the package's ROS nodes, some prerequisites need to be fulfilled:

  1. First of all, you need to set up the required packages, build and source your workspace.

  2. Start roscore by running roscore &, if you haven't already done so.

  3. (Optional for nodes with RGB input)

    For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package usb_cam, which is installed with the toolkit. You can run the webcam node in the terminal with the workspace sourced using:

    rosrun usb_cam usb_cam_node &

    By default, the USB cam node publishes images on /usb_cam/image_raw and the RGB input nodes subscribe to this topic if not provided with an input topic argument. As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, make sure to change the input topic accordingly.
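
    As a quick, optional sanity check, you can confirm that camera images are being published and inspect their rate with the standard ROS tool:

    rostopic hz /usb_cam/image_raw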


Notes

  • Display output images with rqt_image_view

    For any node that outputs images, rqt_image_view can be used to display them by running the following command:

    rosrun rqt_image_view rqt_image_view &

    A window will appear, where the topic that you want to view can be selected from the drop-down menu on the top-left area of the window. Refer to each node's documentation below to find out the default output image topic, where applicable, and select it on the drop-down menu of rqt_image_view.

  • Echo node output

    All OpenDR nodes publish some kind of detection message, which can be echoed by running the following command:

    rostopic echo /opendr/topic_name

    You can find out the default topic name for each node, in its documentation below.

  • Increase performance by disabling output

    Optionally, nodes can be modified via command line arguments, which are presented for each node separately below. Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc. When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by passing None as the corresponding output topic argument. This disables publishing on that topic and skips some operations in the node, which might increase its performance.

    An example would be to disable a node's annotated output image topic when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV drawing operations.

  • Logging the node performance in the console

    OpenDR provides a utility performance node that logs performance messages to the console for a running node. You can set the performance_topic argument of the node you are using and also run the performance node to get the time it takes the node to process a single input and its average speed expressed in frames per second.

  • An example diagram of OpenDR nodes running

    [Diagram: Pose Estimation ROS node running]

    • On the left, the usb_cam node can be seen, which is using a system camera to publish images on the /usb_cam/image_raw topic.
    • In the middle, OpenDR's pose estimation node is running, taking the published image as input. By default, the node's input topic is set to /usb_cam/image_raw.
    • To the right, the two output topics of the pose estimation node can be seen. The bottom topic, /opendr/image_pose_annotated, is the annotated image, which can be easily viewed with rqt_image_view as explained earlier. The other topic, /opendr/poses, is the detection message, which contains detailed information on the detected poses. This message can be easily viewed by running rostopic echo /opendr/poses in a terminal with the OpenDR ROS workspace sourced.
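
As a command-line reference for the diagram above, the whole pipeline can be reproduced by combining the commands already listed in this section:

    rosrun usb_cam usb_cam_node &
    rosrun opendr_perception pose_estimation_node.py &
    rosrun rqt_image_view rqt_image_view &
    rostopic echo /opendr/poses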

RGB input nodes

Pose Estimation ROS Node

You can find the pose estimation ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's pose estimation tool whose documentation can be found here. The node publishes the detected poses in OpenDR's 2D pose message format, which contains a list of OpenDR's keypoint messages.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the pose detection node:

    rosrun opendr_perception pose_estimation_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_pose_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/poses)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --accelerate: acceleration flag that causes pose estimation to run faster but with less accuracy
  3. Default output topics:

    • Output images: /opendr/image_pose_annotated
    • Detection messages: /opendr/poses

    For viewing the output, refer to the notes above.
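
As an illustrative example combining the arguments above (the input topic /my_camera/image_raw is only a placeholder), the following runs the node on CPU, subscribes to a custom image topic and disables the annotated image output, as described in the notes on disabling output:

    rosrun opendr_perception pose_estimation_node.py -i /my_camera/image_raw -o None --device cpu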

High Resolution Pose Estimation ROS Node

You can find the high resolution pose estimation ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's high resolution pose estimation tool whose documentation can be found here. The node publishes the detected poses in OpenDR's 2D pose message format, which contains a list of OpenDR's keypoint messages.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the high resolution pose detection node:

    rosrun opendr_perception hr_pose_estimation_node.py

    The following optional arguments are available:

    • -h, --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_pose_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/poses)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: Device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --accelerate: Acceleration flag that causes pose estimation to run faster but with less accuracy
  3. Default output topics:

    • Output images: /opendr/image_pose_annotated
    • Detection messages: /opendr/poses

    For viewing the output, refer to the notes above.

Fall Detection ROS Node

You can find the fall detection ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's fall detection tool whose documentation can be found here. Fall detection is rule-based and works on top of pose estimation.

This node normally runs in detection mode, where it subscribes to a topic of OpenDR poses and detects whether the poses correspond to fallen persons. By providing an image topic, the node runs in visualization mode: it also subscribes to images, performs pose estimation internally and visualizes the output on an output image topic. Note that when an image topic is provided, the node is significantly slower, due to running pose estimation internally.

  • Instructions for basic usage in detection mode:

  1. Start the node responsible for publishing poses. Refer to the pose estimation node above.

  2. You are then ready to start the fall detection node:

    rosrun opendr_perception fall_detection_node.py

    The following optional arguments are available and relevant for running fall detection on pose messages only:

    • -h or --help: show a help message and exit
    • -ip or --input_pose_topic INPUT_POSE_TOPIC: topic name for input pose, None to stop the node from running detections on pose messages (default=/opendr/poses)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages (default=/opendr/fallen)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages, note that performance will be published to PERFORMANCE_TOPIC/fallen (default=None, disabled)
  3. Detections are published on the detections_topic

  • Instructions for visualization mode:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the fall detection node in visualization mode, which needs an input image topic to be provided:

    rosrun opendr_perception fall_detection_node.py -ii /usb_cam/image_raw

    The following optional arguments are available and relevant for running fall detection on images. Note that the input_rgb_image_topic is required for running in visualization mode:

    • -h or --help: show a help message and exit
    • -ii or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=None)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image (default=/opendr/image_fallen_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages (default=/opendr/fallen)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages, note that performance will be published to PERFORMANCE_TOPIC/image (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --accelerate: acceleration flag that causes pose estimation that runs internally to run faster but with less accuracy
  • Default output topics:

    • Detection messages: /opendr/fallen
    • Output images: /opendr/image_fallen_annotated

    For viewing the output, refer to the notes above.

Notes

Note that when the node runs in the default detection mode it is significantly faster than when it is provided with an input image topic. However, in detection mode pose estimation needs to be performed externally by another node which publishes poses. When an input image topic is provided and the node runs in visualization mode, it runs pose estimation internally; consequently, it is recommended to use visualization mode only for testing purposes and not to run other pose estimation nodes in parallel. The node can run in both modes in parallel or in only one of the two. To run the node in visualization mode only, provide the argument -ip None to disable detection mode. Detection messages are published on the detections_topic in both modes.
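
For example, to run the fall detection node in visualization mode only, using the documented arguments:

    rosrun opendr_perception fall_detection_node.py -ip None -ii /usb_cam/image_raw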

Wave Detection ROS Node

You can find the wave detection ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node is based on a wave detection demo of the Lightweight OpenPose tool. Wave detection is rule-based and works on top of pose estimation.

This node normally runs in detection mode, where it subscribes to a topic of OpenDR poses and detects whether the poses are waving. By providing an image topic, the node runs in visualization mode: it also subscribes to images, performs pose estimation internally and visualizes the output on an output image topic. Note that when an image topic is provided, the node is significantly slower, due to running pose estimation internally.

  • Instructions for basic usage in detection mode:

  1. Start the node responsible for publishing poses. Refer to the pose estimation node above.

  2. You are then ready to start the wave detection node:

    rosrun opendr_perception wave_detection_node.py

    The following optional arguments are available and relevant for running wave detection on pose messages only:

    • -h or --help: show a help message and exit
    • -ip or --input_pose_topic INPUT_POSE_TOPIC: topic name for input pose, None to stop the node from running detections on pose messages (default=/opendr/poses)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages (default=/opendr/wave)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages, note that performance will be published to PERFORMANCE_TOPIC/wave (default=None, disabled)
  3. Detections are published on the detections_topic

  • Instructions for visualization mode:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the wave detection node in visualization mode, which needs an input image topic to be provided:

    rosrun opendr_perception wave_detection_node.py -ii /usb_cam/image_raw

    The following optional arguments are available and relevant for running wave detection on images. Note that the input_rgb_image_topic is required for running in visualization mode:

    • -h or --help: show a help message and exit
    • -ii or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=None)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image (default=/opendr/image_wave_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages (default=/opendr/wave)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages, note that performance will be published to PERFORMANCE_TOPIC/image (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --accelerate: acceleration flag that causes pose estimation that runs internally to run faster but with less accuracy
  • Default output topics:

    • Detection messages: /opendr/wave
    • Output images: /opendr/image_wave_annotated

    For viewing the output, refer to the notes above.

Notes

Note that when the node runs in the default detection mode it is significantly faster than when it is provided with an input image topic. However, in detection mode pose estimation needs to be performed externally by another node which publishes poses. When an input image topic is provided and the node runs in visualization mode, it runs pose estimation internally; consequently, it is recommended to use visualization mode only for testing purposes and not to run other pose estimation nodes in parallel. The node can run in both modes in parallel or in only one of the two. To run the node in visualization mode only, provide the argument -ip None to disable detection mode. Detection messages are published on the detections_topic in both modes.

Face Detection ROS Node

The face detection ROS node supports both the ResNet and MobileNet versions, the latter of which performs masked face detection as well.

You can find the face detection ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's face detection tool whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the face detection node

    rosrun opendr_perception face_detection_retinaface_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_faces_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/faces)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --backbone BACKBONE: retinaface backbone, options are either mnet or resnet, where mnet detects masked faces as well (default=resnet)
  3. Default output topics:

    • Output images: /opendr/image_faces_annotated
    • Detection messages: /opendr/faces

    For viewing the output, refer to the notes above.
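
For example, to also detect masked faces, the MobileNet backbone can be selected via the documented --backbone argument:

    rosrun opendr_perception face_detection_retinaface_node.py --backbone mnet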

Face Recognition ROS Node

You can find the face recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's face recognition tool whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the face recognition node:

    rosrun opendr_perception face_recognition_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_face_reco_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/face_recognition)
    • -id or --detections_id_topic DETECTIONS_ID_TOPIC: topic name for detection ID messages, None to stop the node from publishing on this topic (default=/opendr/face_recognition_id)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --backbone BACKBONE: backbone network (default=mobilefacenet)
    • --dataset_path DATASET_PATH: path of the directory where the images of the faces to be recognized are stored (default=./database)
  3. Default output topics:

    • Output images: /opendr/image_face_reco_annotated
    • Detection messages: /opendr/face_recognition and /opendr/face_recognition_id

    For viewing the output, refer to the notes above.

Notes

Reference images should be placed in a defined structure like:

  • imgs
    • ID1
      • image1
      • image2
    • ID2
    • ID3
    • ...

The default dataset path is ./database. Please use the --dataset_path ./your/path/ argument to define a custom one. The name of the sub-folder, e.g. ID1, will be published under /opendr/face_recognition_id.

The database entry and the returned confidence are published under the topic name /opendr/face_recognition, and the human-readable ID under /opendr/face_recognition_id.
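
For example, assuming your reference images are stored in a folder named ./my_faces (an illustrative path) with the structure shown above, the node can be pointed to it with:

    rosrun opendr_perception face_recognition_node.py --dataset_path ./my_faces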

Active Face Recognition ROS Node

You can find the active face recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's face recognition tool whose documentation can be found here. The node updates existing features in the database when the system's confidence is low, and adds new persons to the database when a face is not found.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the active face recognition node:

    rosrun opendr_perception active_face_recognition_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_face_reco_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/face_recognition)
    • -id or --detections_id_topic DETECTIONS_ID_TOPIC: topic name for detection ID messages, None to stop the node from publishing on this topic (default=/opendr/face_recognition_id)
    • -new_id or --new_id_publisher NEW_ID_PUBLISHER: topic name for input String messages with new IDs, None to add new IDs to the database as NewID_X (default=None)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --backbone BACKBONE: backbone network (default=mobilefacenet)
    • --dataset_path DATASET_PATH: path of the directory where the images of the faces to be recognized are stored (default=./database)
  3. Default output topics:

    • Output images: /opendr/image_face_reco_annotated
    • Detection messages: /opendr/face_recognition and /opendr/face_recognition_id

    For viewing the output, refer to the notes above.

Notes

Reference images should be placed in a defined structure like:

  • imgs
    • ID1
      • image1
      • image2
    • ID2
    • ID3
    • ...

The default dataset path is ./database. Please use the --dataset_path ./your/path/ argument to define a custom one. The name of the sub-folder, e.g. ID1, will be published under /opendr/face_recognition_id.

The database entry and the returned confidence are published under the topic name /opendr/face_recognition, and the human-readable ID under /opendr/face_recognition_id.

Face images of new IDs are saved in the provided dataset path like:

  • imgs
    • NEW_ID1
      • image1
      • image2
    • NEW_ID2
    • ...

2D Object Detection ROS Nodes

For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic object detectors are SSD, YOLOv3, YOLOv5, CenterNet, Nanodet and DETR.

You can find the 2D object detection ROS node python scripts here: SSD node, YOLOv3 node, YOLOv5 node, CenterNet node, Nanodet node and DETR node, where you can inspect the code and modify it as you wish to fit your needs. The nodes make use of the toolkit's various 2D object detection tools: SSD tool, YOLOv3 tool, YOLOv5 tool, CenterNet tool, Nanodet tool, DETR tool, whose documentation can be found here: SSD docs, YOLOv3 docs, YOLOv5 docs, CenterNet docs, Nanodet docs, DETR docs.

Note that the semantic segmentation YOLOv8 node can also perform 2D object detection.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start a 2D object detector node:

    1. SSD node

      rosrun opendr_perception object_detection_2d_ssd_node.py

      The following optional arguments are available for the SSD node:

      • --backbone BACKBONE: Backbone network (default=vgg16_atrous)
      • --nms_type NMS_TYPE: Non-Maximum Suppression type, options are default, seq2seq-nms, soft-nms, fast-nms, cluster-nms (default=default)
    2. YOLOv3 node

      rosrun opendr_perception object_detection_2d_yolov3_node.py

      The following optional argument is available for the YOLOv3 node:

      • --backbone BACKBONE: Backbone network (default=darknet53)
    3. YOLOv5 node

      rosrun opendr_perception object_detection_2d_yolov5_node.py

      The following optional argument is available for the YOLOv5 node:

      • --model_name MODEL_NAME: Network architecture, options are yolov5s, yolov5n, yolov5m, yolov5l, yolov5x, yolov5n6, yolov5s6, yolov5m6, yolov5l6, custom (default=yolov5s)
    4. CenterNet node

      rosrun opendr_perception object_detection_2d_centernet_node.py

      The following optional argument is available for the CenterNet node:

      • --backbone BACKBONE: Backbone network (default=resnet50_v1b)
    5. Nanodet node

      rosrun opendr_perception object_detection_2d_nanodet_node.py

      The following optional argument is available for the Nanodet node:

      • --model MODEL: model whose config file will be used (default=plus_m_1.5x_416)
    6. DETR node

      rosrun opendr_perception object_detection_2d_detr_node.py

    The following optional arguments are available for all nodes above:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_objects_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/objects)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: Device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
  3. Default output topics:

    • Output images: /opendr/image_objects_annotated
    • Detection messages: /opendr/objects

    For viewing the output, refer to the notes above.
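
As an example, the YOLOv5 node can be started with a different documented model variant, while the detection messages are echoed in a second terminal:

    rosrun opendr_perception object_detection_2d_yolov5_node.py --model_name yolov5m
    rostopic echo /opendr/objects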

2D Single Object Tracking ROS Node

You can find the single object tracking 2D ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's single object tracking 2D SiamRPN tool whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the single object tracking 2D node:

    rosrun opendr_perception object_tracking_2d_siamrpn_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC : listen to RGB images on this topic (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_tracking_annotated)
    • -t or --tracker_topic TRACKER_TOPIC: topic name for tracker messages, None to stop the node from publishing on this topic (default=/opendr/tracked_object)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: Device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
  3. Default output topics:

    • Output images: /opendr/image_tracking_annotated
    • Detection messages: /opendr/tracked_object

    For viewing the output, refer to the notes above.

Notes

To initialize this node it is required to provide a bounding box of an object to track. This is achieved by initializing one of the toolkit's 2D object detectors (YOLOv3) and running object detection once on the input. Afterwards, the detected bounding box that is closest to the center of the image is used to initialize the tracker. Feel free to modify the node to initialize it in a different way that matches your use case.

2D Object Tracking ROS Nodes

For 2D object tracking, two ROS nodes are provided: one using Deep Sort and one using FairMOT, each of which can use either pretrained or custom trained models. The predicted tracking annotations are split into two topics, one with detections and one with tracking IDs. Additionally, an annotated image is generated.

You can find the 2D object tracking ROS node python scripts here: Deep Sort node and FairMOT node, where you can inspect the code and modify it as you wish to fit your needs. The nodes make use of the toolkit's object tracking 2D - Deep Sort tool and object tracking 2D - FairMOT tool, whose documentation can be found here: Deep Sort docs, FairMOT docs.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start a 2D object tracking node:

    1. Deep Sort node
      rosrun opendr_perception object_tracking_2d_deep_sort_node.py
      The following optional argument is available for the Deep Sort node:
      • -n --model_name MODEL_NAME: name of the trained model (default=deep_sort)
    2. FairMOT node
      rosrun opendr_perception object_tracking_2d_fair_mot_node.py
      The following optional argument is available for the FairMOT node:
      • -n --model_name MODEL_NAME: name of the trained model (default=fairmot_dla34)

    The following optional arguments are available for both nodes:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_objects_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/objects)
    • -t or --tracking_id_topic TRACKING_ID_TOPIC: topic name for tracking ID messages, None to stop the node from publishing on this topic (default=/opendr/objects_tracking_id)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • -td --temp_dir TEMP_DIR: path to a temporary directory with models (default=temp)
  3. Default output topics:

    • Output images: /opendr/image_objects_annotated
    • Detection messages: /opendr/objects
    • Tracking ID messages: /opendr/objects_tracking_id

    For viewing the output, refer to the notes above.

Notes

An image dataset node is also provided to be used alongside these nodes. Make sure to change the default input topic of the tracking node if you are not using the USB cam node.
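
For example, to run the FairMOT node and inspect the tracking IDs alongside the detections on the default topics listed above:

    rosrun opendr_perception object_tracking_2d_fair_mot_node.py
    rostopic echo /opendr/objects_tracking_id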

Vision Based Panoptic Segmentation ROS Node

A ROS node for performing panoptic segmentation on a specified RGB image stream using the EfficientPS network.

You can find the vision based panoptic segmentation (EfficientPS) ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's panoptic segmentation tool whose documentation can be found here and additional information about EfficientPS here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the panoptic segmentation node:

    rosrun opendr_perception panoptic_segmentation_efficient_ps_node.py

    The following optional arguments are available:

    • -h, --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC : listen to RGB images on this topic (default=/usb_cam/image_raw)
    • --checkpoint CHECKPOINT : download pretrained models [cityscapes, kitti] or load from the provided path (default=cityscapes)
    • -oh or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC: publish the semantic and instance maps on this topic as OUTPUT_HEATMAP_TOPIC/semantic and OUTPUT_HEATMAP_TOPIC/instance (default=/opendr/panoptic)
    • -ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: publish the panoptic segmentation map as an RGB image on OUTPUT_RGB_IMAGE_TOPIC, or a more detailed overview if using the --detailed_visualization flag (default=/opendr/panoptic/rgb_visualization)
    • --detailed_visualization: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on OUTPUT_RGB_IMAGE_TOPIC (default=deactivated)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
  3. Default output topics:

    • Output images: /opendr/panoptic/semantic, /opendr/panoptic/instance, /opendr/panoptic/rgb_visualization
    • Detection messages: /opendr/panoptic/semantic, /opendr/panoptic/instance
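
For example, to use the KITTI pretrained checkpoint, run the node as follows and then select /opendr/panoptic/rgb_visualization in rqt_image_view, as described in the notes:

    rosrun opendr_perception panoptic_segmentation_efficient_ps_node.py --checkpoint kitti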

Semantic Segmentation BiSeNet ROS Node

You can find the semantic segmentation BiSeNet ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's semantic segmentation BiSeNet tool whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the semantic segmentation BiSeNet node:

    rosrun opendr_perception semantic_segmentation_bisenet_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC: topic to which we are publishing the heatmap in the form of a ROS image containing class IDs, None to stop the node from publishing on this topic (default=/opendr/heatmap)
    • -ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, None to stop the node from publishing on this topic (default=/opendr/heatmap_visualization)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
  3. Default output topics:

    • Output images: /opendr/heatmap, /opendr/heatmap_visualization
    • Detection messages: /opendr/heatmap

    For viewing the output, refer to the notes above.
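
For example, if only the blended visualization image is needed, the raw heatmap output can be disabled, as described in the notes on disabling output:

    rosrun opendr_perception semantic_segmentation_bisenet_node.py -o None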

Notes

In the table below you can find the detectable classes and their corresponding IDs:

Class | Bicyclist | Building | Car | Column Pole | Fence | Pedestrian | Road | Sidewalk | Sign Symbol | Sky | Tree | Unknown
ID | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11

Semantic Segmentation YOLOv8 ROS Node

You can find the semantic segmentation YOLOv8 ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's semantic segmentation YOLOv8 tool whose documentation can be found here.

This node can perform both object detection 2D and semantic segmentation of the objects detected within the bounding boxes.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the semantic segmentation YOLOv8 node:

    rosrun opendr_perception semantic_segmentation_yolov8_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC: topic to which we are publishing the heatmap in the form of a ROS image containing class IDs, None to stop the node from publishing on this topic (default=/opendr/heatmap)
    • -ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic to which we are publishing the heatmap image blended with the input image for visualization purposes, None to stop the node from publishing on this topic (default=/opendr/heatmap_visualization)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for object detection/bounding box messages, None to stop the node from publishing on this topic (default=/opendr/objects)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --model_name MODEL_NAME: Network architecture, can be one of yolov8n-seg, yolov8s-seg, yolov8m-seg, yolov8l-seg, yolov8x-seg, custom (default=yolov8s-seg)
  3. Default output topics:

    • Output images: /opendr/heatmap, /opendr/heatmap_visualization
    • Detection messages: /opendr/heatmap, /opendr/objects

    For viewing the output, refer to the notes above.

Notes

The detected classes can be found here.
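
For example, a larger documented model variant can be selected and the blended output viewed with rqt_image_view:

    rosrun opendr_perception semantic_segmentation_yolov8_node.py --model_name yolov8m-seg
    rosrun rqt_image_view rqt_image_view &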

Binary High Resolution ROS Node

You can find the binary high resolution ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's binary high resolution tool whose documentation can be found here.

Instructions for basic usage:

  1. Before running this node, it is required to first train a model for a specific binary classification task. Refer to the tool's documentation for more information. To test the node out, run train_eval_demo.py to download the provided test dataset and train a test model. You would then need to move the model folder into opendr_ws so the node can load it using the default model_path argument.

  2. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  3. You are then ready to start the binary high resolution node:

    rosrun opendr_perception binary_high_resolution_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC: topic to which we are publishing the heatmap in the form of a ROS image containing class IDs, None to stop the node from publishing on this topic (default=/opendr/binary_hr_heatmap)
    • -ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, None to stop the node from publishing on this topic (default=/opendr/binary_hr_heatmap_visualization)
    • -m or --model_path MODEL_PATH: path to the directory of the trained model (default=test_model)
    • -a or --architecture ARCHITECTURE: architecture used for the trained model, either VGG_720p or VGG_1080p (default=VGG_720p)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
  4. Default output topics:

    • Output images: /opendr/binary_hr_heatmap, /opendr/binary_hr_heatmap_visualization
    • Detection messages: /opendr/binary_hr_heatmap

    For viewing the output, refer to the notes above.
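
For example, assuming the trained model folder test_model has been moved into opendr_ws as described in step 1, and using the invocation from step 3, the node can be started with the higher-resolution architecture:

    rosrun opendr_perception binary_high_resolution_node.py -m test_model -a VGG_1080p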

Image-based Facial Emotion Estimation ROS Node

You can find the image-based facial emotion estimation ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's image-based facial emotion estimation tool, which can be found here and whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the image-based facial emotion estimation node:

    rosrun opendr_perception facial_emotion_estimation_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_emotion_estimation_annotated)
    • -e or --output_emotions_topic OUTPUT_EMOTIONS_TOPIC: topic to which we are publishing the facial emotion results, None to stop the node from publishing on this topic (default="/opendr/facial_emotion_estimation")
    • -m or --output_emotions_description_topic OUTPUT_EMOTIONS_DESCRIPTION_TOPIC: topic to which we are publishing the description of the estimated facial emotion, None to stop the node from publishing on this topic (default=/opendr/facial_emotion_estimation_description)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
  3. Default output topics:

    • Output images: /opendr/image_emotion_estimation_annotated
    • Detection messages: /opendr/facial_emotion_estimation, /opendr/facial_emotion_estimation_description

    For viewing the output, refer to the notes above.

Notes

This node requires the detection of a face first. This is achieved by including the toolkit's face detector and running face detection on the input. Afterwards, the detected bounding box of the face is cropped and fed into the facial emotion estimator. Feel free to modify the node to detect faces in a different way that matches your use case.

Landmark-based Facial Expression Recognition ROS Node

A ROS node for performing landmark-based facial expression recognition using a model trained on the AFEW, CK+ or Oulu-CASIA datasets. OpenDR does not include a pretrained model, so one should be provided by the user. An alternative would be to use the image-based facial emotion estimation node provided by the toolkit.

You can find the landmark-based facial expression recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's landmark-based facial expression recognition tool, which can be found here and whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the landmark-based facial expression recognition node:

    rosrun opendr_perception landmark_based_facial_expression_recognition_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_category_topic OUTPUT_CATEGORY_TOPIC: topic to which we are publishing the recognized facial expression category info, None to stop the node from publishing on this topic (default="/opendr/landmark_expression_recognition")
    • -d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC: topic to which we are publishing the description of the recognized facial expression, None to stop the node from publishing on this topic (default=/opendr/landmark_expression_recognition_description)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --model: architecture to use for facial expression recognition, options are pstbln_ck+, pstbln_casia, pstbln_afew (default=pstbln_afew)
    • -s or --shape_predictor SHAPE_PREDICTOR: shape predictor (landmark_extractor) to use (default=./predictor_path)
  3. Default output topics:

    • Detection messages: /opendr/landmark_expression_recognition, /opendr/landmark_expression_recognition_description

    For viewing the output, refer to the notes above.

Skeleton-based Human Action Recognition ROS Nodes

A ROS node for performing skeleton-based human action recognition is provided, using either ST-GCN or PST-GCN models pretrained on the NTU-RGBD-60 dataset. Another ROS node is provided for performing continual skeleton-based human action recognition, using the CoSTGCN method. The human body poses are first extracted from the image by the lightweight OpenPose method implemented in the toolkit, and are then passed to the skeleton-based action recognition methods to be categorized.

You can find the skeleton-based human action recognition ROS node python script here and the continual skeleton-based human action recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The former makes use of the toolkit's skeleton-based human action recognition tool, which can be found here for ST-GCN and here for PST-GCN, and the latter makes use of the toolkit's continual skeleton-based human action recognition tool, which can be found here. Their documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the skeleton-based human action recognition node:

    1. Skeleton-based action recognition node

      rosrun opendr_perception skeleton_based_action_recognition_node.py

      The following optional arguments are available for the skeleton-based action recognition node:

      • --model MODEL: model to use, options are stgcn or pstgcn (default=stgcn)
      • -c or --output_category_topic OUTPUT_CATEGORY_TOPIC: topic name for recognized action category, None to stop the node from publishing on this topic (default="/opendr/skeleton_recognized_action")
      • -d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC: topic name for description of the recognized action category, None to stop the node from publishing on this topic (default=/opendr/skeleton_recognized_action_description)
    2. Continual skeleton-based action recognition node

      rosrun opendr_perception continual_skeleton_based_action_recognition_node.py

      The following optional arguments are available for the continual skeleton-based action recognition node:

      • --model MODEL: model to use, the only available option is costgcn (default=costgcn)
      • -c or --output_category_topic OUTPUT_CATEGORY_TOPIC: topic name for recognized action category, None to stop the node from publishing on this topic (default="/opendr/continual_skeleton_recognized_action")
      • -d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC: topic name for description of the recognized action category, None to stop the node from publishing on this topic (default=/opendr/continual_skeleton_recognized_action_description)

    The following optional arguments are available for all nodes:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output pose-annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/image_pose_annotated)
    • -p or --pose_annotations_topic POSE_ANNOTATIONS_TOPIC: topic name for pose annotations, None to stop the node from publishing on this topic (default=/opendr/poses)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
  3. Default output topics:

    1. Skeleton-based action recognition node:

      • Detection messages: /opendr/skeleton_recognized_action, /opendr/skeleton_recognized_action_description, /opendr/poses
      • Output images: /opendr/image_pose_annotated
    2. Continual skeleton-based action recognition node:

      • Detection messages: /opendr/continual_skeleton_recognized_action, /opendr/continual_skeleton_recognized_action_description, /opendr/poses
      • Output images: /opendr/image_pose_annotated

      For viewing the output, refer to the notes above.
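
For example, to run the skeleton-based action recognition node with the PST-GCN model and echo the recognized action description on its default topic:

    rosrun opendr_perception skeleton_based_action_recognition_node.py --model pstgcn
    rostopic echo /opendr/skeleton_recognized_action_description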

Video Human Activity Recognition ROS Node

A ROS node for performing human activity recognition using either CoX3D or X3D models pretrained on Kinetics400.

You can find the video human activity recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's video human activity recognition tools, which can be found here for CoX3D and here for X3D, and whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. You are then ready to start the video human activity recognition node:

    rosrun opendr_perception video_activity_recognition_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_category_topic OUTPUT_CATEGORY_TOPIC: topic to which we are publishing the recognized activity, None to stop the node from publishing on this topic (default="/opendr/human_activity_recognition")
    • -od or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC: topic to which we are publishing the ID of the recognized action, None to stop the node from publishing on this topic (default=/opendr/human_activity_recognition_description)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --model: architecture to use for human activity recognition, options are cox3d-s, cox3d-m, cox3d-l, x3d-xs, x3d-s, x3d-m, or x3d-l (default=cox3d-m)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
  3. Default output topics:

    • Detection messages: /opendr/human_activity_recognition, /opendr/human_activity_recognition_description

    For viewing the output, refer to the notes above.

Notes

You can find the corresponding IDs regarding activity recognition here.
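
For example, to use the non-continual X3D model on CPU, using the documented arguments:

    rosrun opendr_perception video_activity_recognition_node.py --model x3d-m --device cpu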

RGB Gesture Recognition ROS Node

For gesture recognition, the ROS node is based on the gesture recognition learner defined here, and the documentation of the learner can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. Start the gesture recognition node:

    rosrun opendr_perception gesture_recognition_node.py

    The following arguments are available:

    • -i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/usb_cam/image_raw)
    • -o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image (default=/opendr/image_gesture_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages (default=/opendr/gestures)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: Device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --threshold THRESHOLD: Confidence threshold for predictions (default=0.5)
    • --model MODEL: Config file name of the model that will be used (default=plus_m_1.5x_416)
  3. Default output topics:

    • Output images: /opendr/image_gesture_annotated
    • Detection messages: /opendr/gestures

RGB + Infrared input

2D Object Detection GEM ROS Node

You can find the object detection 2D GEM ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's object detection 2D GEM tool whose documentation can be found here.

Instructions for basic usage:

  1. First, one needs to find corresponding points in the color and infrared images, in order to compute the homography matrix that corrects for the difference in perspective between the infrared and the RGB camera. These points can be selected using a utility tool that is provided in the toolkit.

  2. Pass the points you have found as pts_color and pts_infra arguments to the ROS GEM node.

  3. Start the node responsible for publishing images. If you have a RealSense camera, then you can use the corresponding node (assuming you have installed realsense2_camera):

    roslaunch realsense2_camera rs_camera.launch enable_color:=true enable_infra:=true enable_depth:=false enable_sync:=true infra_width:=640 infra_height:=480
  4. You are then ready to start the object detection 2d GEM node:

    rosrun opendr_perception object_detection_2d_gem_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -ic or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/camera/color/image_raw)
    • -ii or --input_infra_image_topic INPUT_INFRA_IMAGE_TOPIC: topic name for input infrared image (default=/camera/infra/image_raw)
    • -oc or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC: topic name for output annotated RGB image, None to stop the node from publishing on this topic (default=/opendr/rgb_image_objects_annotated)
    • -oi or --output_infra_image_topic OUTPUT_INFRA_IMAGE_TOPIC: topic name for output annotated infrared image, None to stop the node from publishing on this topic (default=/opendr/infra_image_objects_annotated)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/objects)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
  5. Default output topics:

    • Output RGB images: /opendr/rgb_image_objects_annotated
    • Output infrared images: /opendr/infra_image_objects_annotated
    • Detection messages: /opendr/objects

    For viewing the output, refer to the notes above.


RGBD input

RGBD Hand Gesture Recognition ROS Node

A ROS node for performing hand gesture recognition using a MobileNetv2 model trained on the HANDS dataset. The node has been tested with Kinect v2 for depth data acquisition, using the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2.

You can find the RGBD hand gesture recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's hand gesture recognition tool whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images from an RGBD camera. Remember to modify the input topics using the arguments in step 2 if needed.

  2. You are then ready to start the hand gesture recognition node:

    rosrun opendr_perception rgbd_hand_gesture_recognition_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -ic or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC: topic name for input RGB image (default=/kinect2/qhd/image_color_rect)
    • -id or --input_depth_image_topic INPUT_DEPTH_IMAGE_TOPIC: topic name for input depth image (default=/kinect2/qhd/image_depth_rect)
    • -o or --output_gestures_topic OUTPUT_GESTURES_TOPIC: topic name for predicted gesture class (default=/opendr/gestures)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
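
    For example, to run the node on CPU and also log its performance (the performance topic name here is just an example):

    rosrun opendr_perception rgbd_hand_gesture_recognition_node.py --device cpu --performance_topic /opendr/performance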
  3. Default output topics:

    • Detection messages: /opendr/gestures

    For viewing the output, refer to the notes above.


RGB + Audio input

Audiovisual Emotion Recognition ROS Node

You can find the audiovisual emotion recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's audiovisual emotion recognition tool, whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing images. If you have a USB camera, then you can use the usb_cam_node as explained in the prerequisites above.

  2. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 3 if needed.
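
    For example, if you have the audio_capture package from audio_common installed (an assumption about your setup), you can publish audio on the default /audio/audio topic with:

    roslaunch audio_capture capture_wave.launch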

  3. You are then ready to start the audiovisual emotion recognition node:

    rosrun opendr_perception audiovisual_emotion_recognition_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -iv or --input_video_topic INPUT_VIDEO_TOPIC: topic name for input video, expects detected face of size 224x224 (default=/usb_cam/image_raw)
    • -ia or --input_audio_topic INPUT_AUDIO_TOPIC: topic name for input audio (default=/audio/audio)
    • -o or --output_emotions_topic OUTPUT_EMOTIONS_TOPIC: topic to which we are publishing the predicted emotion (default=/opendr/audiovisual_emotion)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --buffer_size BUFFER_SIZE: length of audio and video in seconds (default=3.6)
    • --model_path MODEL_PATH: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server
  4. Default output topics:

    • Detection messages: /opendr/audiovisual_emotion

    For viewing the output, refer to the notes above.


RGB + IMU input

Continual SLAM ROS Nodes

ROS nodes for performing depth and position output mapping based on visual and IMU input. Continual SLAM involves two distinct ROS nodes: one dedicated to performing inference and the other exclusively focused on training. Both nodes are based on the learner class defined in ContinualSLAMLearner.

You can find the Continual SLAM ROS node python scripts here: learner, predictor. You can also find the RGB image + IMU publisher node here.

Instructions for basic usage:

  1. Download the KITTI Visual Odometry dataset as described here.

  2. Decide on the frame rate FPS, then start the dataset publisher node using the following line:

    rosrun opendr_perception continual_slam_dataset_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • --dataset_path: path to the dataset (default=./kitti)
    • --config_file_path: path to the config file for learner class (default=src/opendr/perception/continual_slam/configs/singlegpu_kitti.yaml)
    • --output_image_topic OUTPUT_IMAGE_TOPIC: topic to which we are publishing the RGB image (default=/cl_slam/image)
    • --output_distance_topic OUTPUT_DISTANCE_TOPIC: topic to publish distances (default=/cl_slam/distance)
    • --dataset_fps FPS: frame rate at which the dataset will be published (default=3)
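
    For example, to point the publisher at your copy of the dataset and set the frame rate explicitly (the values shown are simply the defaults):

    rosrun opendr_perception continual_slam_dataset_node.py --dataset_path ./kitti --dataset_fps 3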
  3. Start the Predictor Node

    rosrun opendr_perception continual_slam_predictor_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -c or --config_path: path to the config file for the learner class (default=src/opendr/perception/continual_slam/configs/singlegpu_kitti.yaml)
    • -it or --input_image_topic: input image topic, listened from Continual SLAM Dataset Node (default=/cl_slam/image)
    • -dt or --input_distance_topic: input distance topic, listened from Continual SLAM Dataset Node (default=/cl_slam/distance)
    • -odt or --output_depth_topic: output depth topic, published to visual output tools (default=/opendr/predicted/image)
    • -opt or --output_pose_topic: output pose topic, published to visual output tools (default=/opendr/predicted/pose)
    • -ppcl or --publish_pointcloud: boolean that decides whether the point cloud output is published (default=false)
    • -opct or --output_pointcloud_topic: output pointcloud topic, depending on --publish_pointcloud, published to visual output tools (default=/opendr/predicted/pointcloud)
    • -ut or --update_topic: update topic, listened from Continual SLAM Dataset Node (default=/cl_slam/update)
  4. Start the Learner Node (Optional)

    rosrun opendr_perception continual_slam_learner_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -c or --config_path: path to the config file for the learner class (default=src/opendr/perception/continual_slam/configs/singlegpu_kitti.yaml)
    • -it or --input_image_topic: input image topic, listened from Continual SLAM Dataset Node (default=/cl_slam/image)
    • -dt or --input_distance_topic: input distance topic, listened from Continual SLAM Dataset Node (default=/cl_slam/distance)
    • -ot or --output_weights_topic: output weights topic to be published to Continual SLAM Predictor Node (default=/cl_slam/update)
    • -pr or --publish_rate: publish rate of the weights (default=20)
    • -bs or --buffer_size: size of the replay buffer (default=10)
    • -ss or --sample_size: sample size of the replay buffer. If 0 is given, only online data is used (default=3)
    • -sm or --save_memory: whether to save memory or not. Add it to the command if you want to write to disk (default=True)
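
    For example, to train using only online data, without sampling from the replay buffer:

    rosrun opendr_perception continual_slam_learner_node.py -ss 0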

Audio input

Speech Command Recognition ROS Node

A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset.

You can find the speech command recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's speech command recognition tools: EdgeSpeechNets tool, MatchboxNet tool, Quadratic SelfONN tool whose documentation can be found here: EdgeSpeechNet docs, MatchboxNet docs, Quadratic SelfONN docs.

Instructions for basic usage:

  1. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 2, if needed.

  2. You are then ready to start the speech command recognition node:

    rosrun opendr_perception speech_command_recognition_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_audio_topic INPUT_AUDIO_TOPIC: topic name for input audio (default=/audio/audio)
    • -o or --output_speech_command_topic OUTPUT_SPEECH_COMMAND_TOPIC: topic name for speech command output (default=/opendr/speech_recognition)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --buffer_size BUFFER_SIZE: set the size of the audio buffer (expected command duration) in seconds (default=1.5)
    • --model MODEL: the model to use, choices are matchboxnet, edgespeechnets or quad_selfonn (default=matchboxnet)
    • --model_path MODEL_PATH: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server
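
    For example, to use the Quadratic SelfONN model with a 1.5-second audio buffer:

    rosrun opendr_perception speech_command_recognition_node.py --model quad_selfonn --buffer_size 1.5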
  3. Default output topics:

    • Detection messages, class id and confidence: /opendr/speech_recognition

    For viewing the output, refer to the notes above.

Notes

EdgeSpeechNets currently does not have a pretrained model available for download; only local files may be used.

Speech Transcription ROS Node

A ROS node for speech transcription from an audio stream using Whisper or Vosk.

You can find the speech transcription ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's speech transcription tools: Whisper tool, Vosk tool, whose documentation can be found here: Whisper docs, Vosk docs.

Instructions for basic usage:

  1. Start the node responsible for publishing audio, which must be in wave format. For example:

    roslaunch audio_capture capture_wave.launch

    Remember to modify the input topics using the arguments in step 2, if needed:

    roslaunch audio_play play.launch -t /audio/audio
  2. You are then ready to start the speech transcription node:

    # Enable log to console.
    rosrun opendr_perception speech_transcription_node.py --verbose True
    # Use Whisper instead of Vosk and choose tiny.en variant.
    rosrun opendr_perception speech_transcription_node.py --backbone whisper --model_name tiny.en --verbose True
    # Suggest to Whisper that the speech will contain the name 'Felix'.
    rosrun opendr_perception speech_transcription_node.py --backbone whisper --model_name tiny.en --initial_prompt "Felix" --verbose True

    The following optional arguments are available (More in the source code):

    • -h or --help: show a help message and exit
    • -i or --input_audio_topic INPUT_AUDIO_TOPIC: topic name for input audio (default=/audio/audio)
    • -o or --output_speech_transcription_topic OUTPUT_TRANSCRIPTION_TOPIC: topic name for speech transcription output (default=/opendr/speech_transcription)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --backbone {vosk,whisper}: Backbone model for speech transcription
    • --model_name MODEL_NAME: Specific model name for each backbone. Example: 'tiny', 'tiny.en', 'base', 'base.en' for Whisper, 'vosk-model-small-en-us-0.15' for Vosk (default=None)
    • --model_path MODEL_PATH: Path to downloaded model files (default=None)
    • --language LANGUAGE: Whisper uses the language parameter to avoid language detection. Vosk uses the language parameter to select a specific model. Example: 'en' for Whisper, 'en-us' for Vosk (default=en-us). Check the available language codes for Whisper at the Whisper repository. Check the available language codes for Vosk from the Vosk model names at the Vosk website.
    • --initial_prompt INITIAL_PROMPT: Prompt to provide some context or instruction for the transcription, only for Whisper
    • --verbose VERBOSE: Display transcription (default=False).
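
    For example, to run the Vosk backbone with the US English model and print the transcription to the console:

    rosrun opendr_perception speech_transcription_node.py --backbone vosk --language en-us --verbose True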
  3. Default output topics:

    • Speech transcription: /opendr/speech_transcription

    For viewing the output, refer to the notes above.


Text input

Intent Recognition ROS Node

A ROS node for recognizing intent from language. This node should be used together with the speech transcription node, which transcribes speech into text; the intent is then inferred from the transcribed text. The provided intent recognition node subscribes to the speech transcription output topic.

You can find the intent recognition ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's intent recognition learner, and the documentation can be found here.

Instructions for basic usage:

  1. Follow the instructions of the speech transcription node and start it.

  2. Start the intent recognition node

    rosrun opendr_perception intent_recognition_node.py

    The following arguments are available:

    • -i or --input_transcription_topic INPUT_TRANSCRIPTION_TOPIC: topic name for input transcription of type OpenDRTranscription (default=/opendr/speech_transcription)
    • -o or --output_intent_topic OUTPUT_INTENT_TOPIC: topic name for predicted intent (default=/opendr/intent)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to be used for inference (default=cuda)
    • --text_backbone TEXT_BACKBONE: text backbone to be used, choices are bert-base-uncased, albert-base-v2, bert-small, bert-mini, bert-tiny (default=bert-base-uncased)
    • --cache_path CACHE_PATH: cache path for tokenizer files (default=./cache/)
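
    For example, to run a lighter text backbone on CPU:

    rosrun opendr_perception intent_recognition_node.py --text_backbone bert-mini --device cpu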
  3. Default output topics:

    • Predicted intents and confidence: /opendr/intent

    For viewing the output, refer to the notes above.

Notes

The detectable classes and their corresponding IDs are the following: Complain (0), Praise (1), Apologise (2), Thank (3), Criticize (4), Agree (5), Taunt (6), Flaunt (7), Joke (8), Oppose (9), Comfort (10), Care (11), Inform (12), Advise (13), Arrange (14), Introduce (15), Leave (16), Prevent (17), Greet (18), Ask for help (19).

Point cloud input

3D Object Detection Voxel ROS Node

A ROS node for performing 3D object detection (Voxel) using the PointPillars or TANet methods, with either models pretrained on the KITTI dataset or custom-trained models.

You can find the 3D object detection Voxel ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's 3D object detection Voxel tool whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing point clouds. OpenDR provides a point cloud dataset node for convenience.

  2. You are then ready to start the 3D object detection node:

    rosrun opendr_perception object_detection_3d_voxel_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=/opendr/dataset_point_cloud)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages (default=/opendr/objects3d)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • -n or --model_name MODEL_NAME: name of the trained model (default=tanet_car_xyres_16)
    • -c or --model_config_path MODEL_CONFIG_PATH: path to a model .proto config (default=../../src/opendr/perception/object_detection3d/voxel_object_detection_3d/second_detector/configs/tanet/car/xyres_16.proto)
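
    For example, a minimal pipeline using the OpenDR point cloud dataset node (see the Dataset ROS Nodes section below) as the input source, run in two separate terminals, with detection on CPU:

    rosrun opendr_perception point_cloud_dataset_node.py
    rosrun opendr_perception object_detection_3d_voxel_node.py --device cpu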
  3. Default output topics:

    • Detection messages: /opendr/objects3d

    For viewing the output, refer to the notes above.

3D Object Tracking AB3DMOT ROS Node

A ROS node for performing 3D object tracking using the stateless AB3DMOT method. This is a detection-based method, so a 3D object detector is needed to provide detections, which are then used to make associations and generate tracking IDs. The predicted tracking annotations are split into two topics with detections and tracking IDs.

You can find the 3D object tracking AB3DMOT ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's 3D object tracking AB3DMOT tool whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing point clouds. OpenDR provides a point cloud dataset node for convenience.

  2. You are then ready to start the 3D object tracking node:

    rosrun opendr_perception object_tracking_3d_ab3dmot_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=/opendr/dataset_point_cloud)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/objects3d)
    • -t or --tracking3d_id_topic TRACKING3D_ID_TOPIC: topic name for output tracking IDs with the same element count as in detection topic, None to stop the node from publishing on this topic (default=/opendr/objects_tracking_id)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • -dn or --detector_model_name DETECTOR_MODEL_NAME: name of the trained model (default=tanet_car_xyres_16)
    • -dc or --detector_model_config_path DETECTOR_MODEL_CONFIG_PATH: path to a model .proto config (default=../../src/opendr/perception/object_detection3d/voxel_object_detection_3d/second_detector/configs/tanet/car/xyres_16.proto)
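
    For example, to publish only the tracking IDs and disable the detection output topic:

    rosrun opendr_perception object_tracking_3d_ab3dmot_node.py -d None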
  3. Default output topics:

    • Detection messages: /opendr/objects3d
    • Tracking ID messages: /opendr/objects_tracking_id

    For viewing the output, refer to the notes above.

3D Object Tracking VPIT ROS Node

A ROS node for performing 3D single object tracking using the VPIT method. This method needs to be initialized with a 3D bounding box for the object that should be tracked. For this reason, an initial detection 3D box should be sent, as well as the corresponding point cloud. After the initialization, only point cloud data is required for inference. If a new object needs to be tracked, the same input_detection3d_topic can be used to send a bounding box, and the last sent point cloud will be used for initialization. The predicted tracking annotations are split into two topics with detections and tracking IDs.

You can find the 3D object tracking VPIT ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's 3D object tracking VPIT tool whose documentation can be found here.

Instructions for basic usage:

  1. Start the node responsible for publishing point clouds. OpenDR provides a point cloud dataset node for convenience.

  2. Provide an initial bounding box from either a 3D detector, a dataset or a hand-crafted detection 3D box.

  3. You are then ready to start the 3D object tracking node:

    rosrun opendr_perception object_tracking_3d_vpit_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -ipc or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=/opendr/dataset_point_cloud)
    • -idet or --input_detection3d_topic INPUT_DETECTION3D_TOPIC: by either a 3D detector or a dataset (default=/opendr/dataset_detection3d)
    • -d or --detections_topic DETECTIONS_TOPIC: topic name for detection messages, None to stop the node from publishing on this topic (default=/opendr/objects3d)
    • -t or --tracking3d_id_topic TRACKING3D_ID_TOPIC: topic name for output tracking IDs with the same element count as in detection topic, None to stop the node from publishing on this topic (default=/opendr/objects_tracking_id)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • -bb or --backbone BACKBONE: Name of the backbone model (default=pp, choices=pp, spp, spps, tanet, stanet, stanets)
    • -mn or --model_name MODEL_NAME: Name of the trained model to load (default=vpit)
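
    For example, to also log the tracker's performance for the performance node described below:

    rosrun opendr_perception object_tracking_3d_vpit_node.py --performance_topic /opendr/performance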
  4. Default output topics:

    • Detection messages: /opendr/objects3d
    • Tracking ID messages: /opendr/objects_tracking_id

    For viewing the output, refer to the notes above.

LiDAR Based Panoptic Segmentation ROS Node

A ROS node for performing panoptic segmentation on a specified pointcloud stream using the EfficientLPS network.

You can find the LiDAR-based panoptic segmentation ROS node python script here. You can also find the point cloud 2 publisher ROS node python script here, and more explanation here. You can inspect the code and make changes as you wish to fit your needs. The EfficientLPS node makes use of the toolkit's panoptic segmentation tool whose documentation can be found here and additional information about EfficientLPS here.

Instructions for basic usage:

  1. First, download the SemanticKITTI dataset into POINTCLOUD_LOCATION as described in the Panoptic Segmentation Datasets. Then, once the SPLIT type is specified (train, test or valid, default valid), the Point Cloud 2 Publisher can be started using the following line:

    rosrun opendr_perception point_cloud_2_publisher_node.py -d POINTCLOUD_LOCATION -s SPLIT
  2. After starting the Point Cloud 2 Publisher, you can start the EfficientLPS node using the following line:

    rosrun opendr_perception panoptic_segmentation_efficient_lps_node.py /opendr/dataset_point_cloud2

    The following optional arguments are available:

    • -h, --help: show a help message and exit
    • -i or --input_point_cloud_2_topic INPUT_POINTCLOUD2_TOPIC: Point Cloud 2 topic provided by either a point_cloud_2_publisher_node or any other 3D Point Cloud 2 Node (default=/opendr/dataset_point_cloud2)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • -c or --checkpoint CHECKPOINT: download pretrained models [semantickitti] or load from the provided path (default=semantickitti)
    • -o or --output_heatmap_pointcloud_topic OUTPUT_HEATMAP_POINTCLOUD_TOPIC: publish the 3D heatmap pointcloud on OUTPUT_HEATMAP_POINTCLOUD_TOPIC (default=/opendr/panoptic)
  3. Default output topics:
    • Detection messages: /opendr/panoptic
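
    To visualize the published heatmap point cloud you can, for example, start RViz and add a PointCloud2 display subscribed to /opendr/panoptic (assuming the default output topic is used):

    rosrun rviz rviz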

Biosignal input

Heart Anomaly Detection ROS Node

A ROS node for performing heart anomaly (atrial fibrillation) detection from ECG data, using GRU or ANBOF models trained on the AF dataset.

You can find the heart anomaly detection ROS node python script here to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's heart anomaly detection tools: ANBOF tool and GRU tool, whose documentation can be found here: ANBOF docs and GRU docs.

Instructions for basic usage:

  1. Start the node responsible for publishing ECG data.

  2. You are then ready to start the heart anomaly detection node:

    rosrun opendr_perception heart_anomaly_detection_node.py

    The following optional arguments are available:

    • -h or --help: show a help message and exit
    • -i or --input_ecg_topic INPUT_ECG_TOPIC: topic name for input ECG data (default=/ecg/ecg)
    • -o or --output_heart_anomaly_topic OUTPUT_HEART_ANOMALY_TOPIC: topic name for heart anomaly detection (default=/opendr/heart_anomaly)
    • --performance_topic PERFORMANCE_TOPIC: topic name for performance messages (default=None, disabled)
    • --device DEVICE: device to use, either cpu or cuda, falls back to cpu if GPU or CUDA is not found (default=cuda)
    • --model MODEL: the model to use, choices are anbof or gru (default=anbof)
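
    For example, to use the GRU model instead of the default ANBOF:

    rosrun opendr_perception heart_anomaly_detection_node.py --model gru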
  3. Default output topics:

    • Detection messages: /opendr/heart_anomaly

    For viewing the output, refer to the notes above.


Dataset ROS Nodes

The dataset nodes can be used to publish data from the disk, which is useful to test the functionality without the use of a sensor. Dataset nodes use a provided DatasetIterator object that returns a (Data, Target) pair. If the type of the Data object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. The OpenDR toolkit currently provides two such nodes, an image dataset node and a point cloud dataset node.

Image Dataset ROS Node

The image dataset node downloads a nano_MOT20 dataset from OpenDR's FTP server and uses it to publish data to a ROS topic, which is intended to be used with the 2D object tracking nodes.

You can create an instance of this node with any DatasetIterator object that returns (Image, Target) as elements, to use alongside other nodes and datasets. You can inspect the node and modify it to your needs for other image datasets.

To get an image from a dataset on the disk, you can start an image_dataset_node.py node as:

rosrun opendr_perception image_dataset_node.py

The following optional arguments are available:

  • -h or --help: show a help message and exit
  • -o or --output_rgb_image_topic: topic name to publish the data (default=/opendr/dataset_image)
  • -f or --fps FPS: data fps (default=10)
  • -d or --dataset_path DATASET_PATH: path to a dataset (default=/MOT)
  • -ks or --mot20_subsets_path MOT20_SUBSETS_PATH: path to MOT20 subsets (default=../../src/opendr/perception/object_tracking_2d/datasets/splits/nano_mot20.train)
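
For example, to publish the dataset images on the default webcam topic so that RGB input nodes can consume them without changing their input arguments (this takes the place of the usb_cam node in such a setup):

rosrun opendr_perception image_dataset_node.py -o /usb_cam/image_raw -f 10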

Point Cloud Dataset ROS Node

The point cloud dataset node downloads a nano_KITTI dataset from OpenDR's FTP server and uses it to publish data to the ROS topic, which is intended to be used with the 3D object detection node, as well as the 3D object tracking node.

You can create an instance of this node with any DatasetIterator object that returns (PointCloud, Target) as elements, to use alongside other nodes and datasets. You can inspect the node and modify it to your needs for other point cloud datasets.

To get a point cloud from a dataset on the disk, you can start a point_cloud_dataset_node.py node as:

rosrun opendr_perception point_cloud_dataset_node.py

The following optional arguments are available:

  • -h or --help: show a help message and exit
  • -o or --output_point_cloud_topic: topic name to publish the data (default=/opendr/dataset_point_cloud)
  • -f or --fps FPS: data fps (default=10)
  • -d or --dataset_path DATASET_PATH: path to a dataset, if it does not exist, nano KITTI dataset will be downloaded there (default=/KITTI/opendr_nano_kitti)
  • -ks or --kitti_subsets_path KITTI_SUBSETS_PATH: path to KITTI subsets, used only if a KITTI dataset is downloaded (default=../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets)
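
For example, to publish the point clouds at a lower rate from a custom location (the path is illustrative; the nano KITTI dataset will be downloaded there if it is not already present):

rosrun opendr_perception point_cloud_dataset_node.py -f 2 -d ./KITTI/opendr_nano_kitti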

Point Cloud 2 Publisher ROS Node

The point cloud 2 dataset publisher publishes PointCloud2 messages from the pre-downloaded SemanticKITTI dataset. It is currently used by the LiDAR Based Panoptic Segmentation ROS Node.

You can create an instance of this node with any DatasetIterator object that returns (PointCloud, Target) as elements, to use alongside other nodes and datasets. You can inspect the node and modify it to your needs for other point cloud datasets.

To get a point cloud from a dataset on the disk, you can start a point_cloud_2_publisher_node.py node as:

rosrun opendr_perception point_cloud_2_publisher_node.py

The following optional arguments are available:

  • -h or --help: show a help message and exit
  • -d or --dataset_path DATASET_PATH: path of the SemanticKITTI dataset to publish the point cloud 2 message (default=./datasets/semantickitti)
  • -s or --split SPLIT: split of the dataset to use, only (train, valid, test) are available (default=valid)
  • -o or --output_point_cloud_2_topic OUTPUT_POINT_CLOUD_2_TOPIC: topic name to publish the data (default=/opendr/dataset_point_cloud2)
  • -t or --test_data: Add this argument if you want to test this node using only the test data available on our server

Utility ROS Nodes

Performance ROS Node

The performance node subscribes to the optional performance topic of a running node and logs its performance, both as the time it takes to process a single input and produce output, and as frames per second. It uses a modifiable rolling window to calculate the average FPS.

You can inspect the node and modify it to your needs.

Instructions for basic usage:

  1. Start the node you want to benchmark as usual but also set the optional argument --performance_topic to, for example, /opendr/performance
  2. Start the performance node:
    rosrun opendr_perception performance_node.py
    The following optional arguments are available:
    • -h or --help: show a help message and exit
    • -i or --input_performance_topic INPUT_PERFORMANCE_TOPIC: topic name for input performance data (default=/opendr/performance)
    • -w or --window WINDOW: the window to use in number of frames to calculate the running average FPS (default=20)

Note that the input_performance_topic of the performance node must match the performance_topic of the running node. Also note that the running node must actually be receiving input and producing output in order to publish performance messages for the performance node to use.
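
For example, to benchmark the speech command recognition node with a 50-frame averaging window, you could run (in separate terminals):

rosrun opendr_perception speech_command_recognition_node.py --performance_topic /opendr/performance
rosrun opendr_perception performance_node.py -w 50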