From 22c4ef9ee75880a23c03a7c765f82c0a8c901f88 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Thu, 15 Sep 2022 18:46:50 +0300 Subject: [PATCH 01/57] Added prerequisites section for common prerequisites across nodes --- projects/opendr_ws/src/perception/README.md | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index b745abc35b..e980515363 100755 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -1,7 +1,23 @@ # Perception Package -This package contains ROS nodes related to perception package of OpenDR. +This package contains ROS nodes related to the perception package of OpenDR. +--- + +## Prerequisites + +---- + +Before you can run any of the toolkit's ROS nodes, some prerequisites need to be fulfilled: +1. First of all, you need to [setup the required packages and build your workspace.](../../README.md#Setup) +2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`. +3. For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install](../../README.md#Setup)). You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: + ```shell + rosrun usb_cam usb_cam_node + ``` + By default, the usb cam node publishes images on `/usb_cam/image_raw` and most nodes also subscribe to this topic. As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.** + +---- ## Dataset ROS Nodes Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can start a dataset node to publish data from the disk, which is useful to test the functionality without the use of a sensor. From 5e9db801f47e1e91f89f6ba1fe7f79434ee55943 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Thu, 15 Sep 2022 18:49:01 +0300 Subject: [PATCH 02/57] Overhauled the dataset nodes section and added RGB nodes section --- projects/opendr_ws/src/perception/README.md | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index e980515363..62f14df3bb 100755 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -20,25 +20,32 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be ---- ## Dataset ROS Nodes -Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can start a dataset node to publish data from the disk, which is useful to test the functionality without the use of a sensor. -Dataset nodes take a `DatasetIterator` object that shoud returns a `(Data, Target)` pair elements. -If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. +---- + +The dataset nodes can be used to publish data from the disk, which is useful to test the functionality without the use of a sensor. +Dataset nodes use a provided `DatasetIterator` object that returns a `(Data, Target)` pair. 
+If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. ### Point Cloud Dataset ROS Node To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: ```shell rosrun perception point_cloud_dataset.py ``` -By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. +By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. ### Image Dataset ROS Node To get an image from a dataset on the disk, you can start a `image_dataset.py` node as: ```shell rosrun perception image_dataset.py ``` -By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements. +By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements. You can inspect [the node](./scripts/image_dataset.py) and modify it to your needs for other image datasets. + +---- +## RGB input nodes + +---- -## Pose Estimation ROS Node +### Pose Estimation ROS Node Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can 1. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package): From c0684dc9cd81adc6ea04cce8b075a52bb51a5347 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Fri, 16 Sep 2022 14:57:31 +0300 Subject: [PATCH 03/57] Rearranged the listed node links --- projects/opendr_ws/README.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index 2fabf14d5d..f763dbd349 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -43,22 +43,22 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor ## RGB input 1. [Pose Estimation](src/perception/README.md#pose-estimation-ros-node) 2. [Fall Detection](src/perception/README.md#fall-detection-ros-node) -3. [Face Recognition](src/perception/README.md#face-recognition-ros-node) -4. [2D Object Detection](src/perception/README.md#2d-object-detection-ros-nodes) -5. [Face Detection](src/perception/README.md#face-detection-ros-node) -6. [Panoptic Segmentation](src/perception/README.md#panoptic-segmentation-ros-node) -7. [Semantic Segmentation](src/perception/README.md#semantic-segmentation-ros-node) -8. [Video Human Activity Recognition](src/perception/README.md#human-action-recognition-ros-node) +3. [Face Detection](src/perception/README.md#face-detection-ros-node) +4. [Face Recognition](src/perception/README.md#face-recognition-ros-node) +5. 
[2D Object Detection](src/perception/README.md#2d-object-detection-ros-nodes) +6. [2D Object Tracking - Deep Sort](src/perception/README.md#deep-sort-object-tracking-2d-ros-node) +7. [Panoptic Segmentation](src/perception/README.md#panoptic-segmentation-ros-node) +8. [Semantic Segmentation](src/perception/README.md#semantic-segmentation-ros-node) 9. [Landmark-based Facial Expression Recognition](src/perception/README.md#landmark-based-facial-expression-recognition-ros-node) -10. [Deep Sort Object Tracking 2D](src/perception/README.md#deep-sort-object-tracking-2d-ros-node) -11. [Skeleton-based Human Action Recognition](src/perception/README.md#skeleton-based-human-action-recognition-ros-node) +10. [Skeleton-based Human Action Recognition](src/perception/README.md#skeleton-based-human-action-recognition-ros-node) +11. [Video Human Activity Recognition](src/perception/README.md#video-human-activity-recognition-ros-node) ## Point cloud input 1. [Voxel Object Detection 3D](src/perception/README.md#voxel-object-detection-3d-ros-node) 2. [AB3DMOT Object Tracking 3D](src/perception/README.md#ab3dmot-object-tracking-3d-ros-node) 3. [FairMOT Object Tracking 2D](src/perception/README.md#fairmot-object-tracking-2d-ros-node) ## RGB + Infrared input 1. [End-to-End Multi-Modal Object Detection (GEM)](src/perception/README.md#gem-ros-node) -## RGBD input nodes +## RGBD input 1. [RGBD Hand Gesture Recognition](src/perception/README.md#rgbd-hand-gesture-recognition-ros-node) ## Biosignal input 1. [Heart Anomaly Detection](src/perception/README.md#heart-anomaly-detection-ros-node) From 9f233465c1874719c0bea44f8e8c8295b4d6a2df Mon Sep 17 00:00:00 2001 From: tsampazk Date: Fri, 16 Sep 2022 14:57:47 +0300 Subject: [PATCH 04/57] General rearrangement and input sections --- projects/opendr_ws/src/perception/README.md | 182 +++++++++++--------- 1 file changed, 102 insertions(+), 80 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 62f14df3bb..ff9f0197da 100755 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -74,7 +74,7 @@ Note that to use the pose messages properly, you need to create an appropriate s print(opendr_pose['r_eye']) ``` -## Fall Detection ROS Node +### Fall Detection ROS Node Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can 1. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package): @@ -92,7 +92,16 @@ rosrun perception fall_detection.py 3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_fall_annotated`) or `rostopic echo /opendr/falls`, where the node publishes bounding boxes of detected fallen poses -## Face Recognition ROS Node +### Face Detection ROS Node +A ROS node for the RetinaFace detector is implemented, supporting both the ResNet and MobileNet versions, the latter of +which performs mask recognition as well. After setting up the environment, the detector node can be initiated as: +```shell +rosrun perception face_detection_retinaface.py +``` +The annotated image stream is published under the topic name `/opendr/image_boxes_annotated`, and the bounding boxes alone +under `/opendr/faces`. 
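If you want to consume the detections programmatically rather than just echoing the topic, a minimal `rospy` subscriber along the lines of the sketch below can be used. It assumes the bounding boxes arrive as `vision_msgs/Detection2DArray` messages, which is the type the OpenDR ROS bridge typically uses for 2D boxes; verify the message type against the node script for your version.

```python
# Minimal sketch of a face detection consumer (assumes vision_msgs/Detection2DArray).
import rospy
from vision_msgs.msg import Detection2DArray


def callback(msg):
    # Each detection carries a 2D bounding box (center + size) and scored hypotheses.
    for detection in msg.detections:
        box = detection.bbox
        rospy.loginfo("Face at (%.1f, %.1f), size %.1f x %.1f",
                      box.center.x, box.center.y, box.size_x, box.size_y)


if __name__ == '__main__':
    rospy.init_node('face_detection_listener')
    rospy.Subscriber('/opendr/faces', Detection2DArray, callback)
    rospy.spin()
```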
+ +### Face Recognition ROS Node Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can @@ -123,7 +132,7 @@ Reference images should be placed in a defined structure like: 4. The database entry and the returned confidence is published under the topic name `/opendr/face_recognition`, and the human-readable ID under `/opendr/face_recognition_id`. -## 2D Object Detection ROS Nodes +### 2D Object Detection ROS Nodes ROS nodes are implemented for the SSD, YOLOv3, CenterNet and DETR generic object detectors. Steps 1, 2 from above must run first. Then, to initiate the SSD detector node, run: @@ -146,44 +155,25 @@ rosrun perception object_detection_2d_detr.py ``` respectively. -## Face Detection ROS Node -A ROS node for the RetinaFace detector is implemented, supporting both the ResNet and MobileNet versions, the latter of -which performs mask recognition as well. After setting up the environment, the detector node can be initiated as: -```shell -rosrun perception face_detection_retinaface.py -``` -The annotated image stream is published under the topic name `/opendr/image_boxes_annotated`, and the bounding boxes alone -under `/opendr/faces`. - -## GEM ROS Node -Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can - +### Deep Sort Object Tracking 2D ROS Node -1. Add OpenDR to `PYTHONPATH` (please make sure you do not overwrite `PYTHONPATH` ), e.g., +A ROS node for performing Object Tracking 2D using Deep Sort using either pretrained models on Market1501 dataset, or custom trained models. This is a detection-based method, and therefore the 2D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) +Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell -export PYTHONPATH="/home/user/opendr/src:$PYTHONPATH" +rosrun perception object_tracking_2d_deep_sort.py ``` -2. First one needs to find points in the color and infrared images that correspond, in order to find the homography matrix that allows to correct for the difference in perspective between the infrared and the RGB camera. -These points can be selected using a [utility tool](../../../../src/opendr/perception/object_detection_2d/utils/get_color_infra_alignment.py) that is provided in the toolkit. - -3. Pass the points you have found as *pts_color* and *pts_infra* arguments to the ROS gem.py node. - -4. Start the node responsible for publishing images. If you have a RealSense camera, then you can use the corresponding node (assuming you have installed [realsense2_camera](http://wiki.ros.org/realsense2_camera)): - +To get images from usb_camera, you can start the camera node as: ```shell -roslaunch realsense2_camera rs_camera.launch enable_color:=true enable_infra:=true enable_depth:=false enable_sync:=true infra_width:=640 infra_height:=480 +rosrun usb_cam usb_cam_node ``` - -4. 
You are then ready to start the pose detection node - +The corresponding `input_image_topic` should be `/usb_cam/image_raw`. +If you want to use a dataset from the disk, you can start an `image_dataset.py` node as: ```shell -rosrun perception object_detection_2d_gem.py +rosrun perception image_dataset.py ``` +This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. -5. You can examine the annotated image stream using `rqt_image_view` (select one of the topics `/opendr/color_detection_annotated` or `/opendr/infra_detection_annotated`) or `rostopic echo /opendr/detections` - - -## Panoptic Segmentation ROS Node +### Panoptic Segmentation ROS Node A ROS node for performing panoptic segmentation on a specified RGB image stream using the [EfficientPS](../../../../src/opendr/perception/panoptic_segmentation/README.md) network. Assuming that the OpenDR catkin workspace has been sourced, the node can be started with: ```shell @@ -199,7 +189,7 @@ The following optional arguments are available: - `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated) -## Semantic Segmentation ROS Node +### Semantic Segmentation ROS Node A ROS node for performing semantic segmentation on an input image using the BiseNet model. Assuming that the OpenDR catkin workspace has been sourced, the node can be started with: ```shell @@ -210,23 +200,26 @@ Additionally, the following optional arguments are available: - `-h, --help`: show a help message and exit - `--heamap_topic HEATMAP_TOPIC`: publish the heatmap on `HEATMAP_TOPIC` -## RGBD Hand Gesture Recognition ROS Node +### Landmark-based Facial Expression Recognition ROS Node -A ROS node for performing hand gesture recognition using MobileNetv2 model trained on HANDS dataset. The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2. Assuming that the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: +A ROS node for performing Landmark-based Facial Expression Recognition using the pretrained model PST-BLN on AFEW, CK+ or Oulu-CASIA datasets. +Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell -rosrun perception rgbd_hand_gesture_recognition.py +rosrun perception landmark_based_facial_expression_recognition.py ``` -The predictied classes are published to the topic `/opendr/gestures`. +The predictied class id and confidence is published under the topic name `/opendr/landmark_based_expression_recognition`, and the human-readable class name under `/opendr/landmark_based_expression_recognition_description`. -## Heart Anomaly Detection ROS Node +### Skeleton-based Human Action Recognition ROS Node -A ROS node for performing heart anomaly (atrial fibrillation) detection from ecg data using GRU or ANBOF models trained on AF dataset. Assuming that the OpenDR catkin workspace has been sourced, the node can be started as: +A ROS node for performing Skeleton-based Human Action Recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. 
The human body poses of the image are first extracted by the light-weight Openpose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized. +Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell -rosrun perception heart_anomaly_detection.py ECG_TOPIC MODEL +rosrun perception skeleton_based_action_recognition.py ``` -with `ECG_TOPIC` specifying the ROS topic to which the node will subscribe, and `MODEL` set to either *gru* or *anbof*. The predictied classes are published to the topic `/opendr/heartanomaly`. +The predictied class id and confidence is published under the topic name `/opendr/skeleton_based_action_recognition`, and the human-readable class name under `/opendr/skeleton_based_action_recognition_description`. +Besides, the annotated image is published in `/opendr/image_pose_annotated` as well as the corresponding poses in `/opendr/poses`. -## Human Action Recognition ROS Node +### Video Human Activity Recognition ROS Node A ROS node for performing Human Activity Recognition using either CoX3D or X3D models pretrained on Kinetics400. Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: @@ -235,41 +228,57 @@ rosrun perception video_activity_recognition.py ``` The predictied class id and confidence is published under the topic name `/opendr/human_activity_recognition`, and the human-readable class name under `/opendr/human_activity_recognition_description`. -## Landmark-based Facial Expression Recognition ROS Node +---- +## RGB + Infrared input -A ROS node for performing Landmark-based Facial Expression Recognition using the pretrained model PST-BLN on AFEW, CK+ or Oulu-CASIA datasets. -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: +---- + +### GEM ROS Node +Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can + + +1. Add OpenDR to `PYTHONPATH` (please make sure you do not overwrite `PYTHONPATH` ), e.g., ```shell -rosrun perception landmark_based_facial_expression_recognition.py +export PYTHONPATH="/home/user/opendr/src:$PYTHONPATH" ``` -The predictied class id and confidence is published under the topic name `/opendr/landmark_based_expression_recognition`, and the human-readable class name under `/opendr/landmark_based_expression_recognition_description`. +2. First one needs to find points in the color and infrared images that correspond, in order to find the homography matrix that allows to correct for the difference in perspective between the infrared and the RGB camera. +These points can be selected using a [utility tool](../../../../src/opendr/perception/object_detection_2d/utils/get_color_infra_alignment.py) that is provided in the toolkit. -## Skeleton-based Human Action Recognition ROS Node +3. Pass the points you have found as *pts_color* and *pts_infra* arguments to the ROS gem.py node. + +4. Start the node responsible for publishing images. If you have a RealSense camera, then you can use the corresponding node (assuming you have installed [realsense2_camera](http://wiki.ros.org/realsense2_camera)): -A ROS node for performing Skeleton-based Human Action Recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. 
The human body poses of the image are first extracted by the light-weight Openpose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized. -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell -rosrun perception skeleton_based_action_recognition.py +roslaunch realsense2_camera rs_camera.launch enable_color:=true enable_infra:=true enable_depth:=false enable_sync:=true infra_width:=640 infra_height:=480 ``` -The predictied class id and confidence is published under the topic name `/opendr/skeleton_based_action_recognition`, and the human-readable class name under `/opendr/skeleton_based_action_recognition_description`. -Besides, the annotated image is published in `/opendr/image_pose_annotated` as well as the corresponding poses in `/opendr/poses`. -## Speech Command Recognition ROS Node +4. You are then ready to start the pose detection node -A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset. -Assuming that the OpenDR catkin workspace has been sourced, the node can be started with: ```shell -rosrun perception speech_command_recognition.py INPUT_AUDIO_TOPIC +rosrun perception object_detection_2d_gem.py ``` -The following optional arguments are available: -- `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds, default value **1.5** -- `--model MODEL`: choose the model to use: `matchboxnet` (default value), `edgespeechnets` or `quad_selfonn` -- `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server -The predictions (class id and confidence) are published to the topic `/opendr/speech_recognition`. -**Note:** EdgeSpeechNets currently does not have a pretrained model available for download, only local files may be used. +5. You can examine the annotated image stream using `rqt_image_view` (select one of the topics `/opendr/color_detection_annotated` or `/opendr/infra_detection_annotated`) or `rostopic echo /opendr/detections` + +---- +## RGBD input + +---- + +### RGBD Hand Gesture Recognition ROS Node -## Voxel Object Detection 3D ROS Node +A ROS node for performing hand gesture recognition using MobileNetv2 model trained on HANDS dataset. The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2. Assuming that the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: +```shell +rosrun perception rgbd_hand_gesture_recognition.py +``` +The predictied classes are published to the topic `/opendr/gestures`. + +---- +## Point cloud input + +---- + +### Voxel Object Detection 3D ROS Node A ROS node for performing Object Detection 3D using PointPillars or TANet methods with either pretrained models on KITTI dataset, or custom trained models. The predicted detection annotations are pushed to `output_detection3d_topic` (default `output_detection3d_topic="/opendr/detection3d"`). 
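A minimal sketch of a consumer for these 3D detections is shown below; it assumes the boxes are published as `vision_msgs/Detection3DArray` messages, so verify the exact message type against the node script for your version.

```python
# Minimal sketch of a 3D detection consumer (assumes vision_msgs/Detection3DArray).
import rospy
from vision_msgs.msg import Detection3DArray


def callback(msg):
    # Each detection carries a 3D bounding box given by a center pose and a size vector.
    for detection in msg.detections:
        center = detection.bbox.center.position
        size = detection.bbox.size
        rospy.loginfo("Box at (%.1f, %.1f, %.1f), size (%.1f, %.1f, %.1f)",
                      center.x, center.y, center.z, size.x, size.y, size.z)


if __name__ == '__main__':
    rospy.init_node('detection3d_listener')
    rospy.Subscriber('/opendr/detection3d', Detection3DArray, callback)
    rospy.spin()
```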
@@ -284,7 +293,7 @@ rosrun perception point_cloud_dataset.py ``` This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. -## AB3DMOT Object Tracking 3D ROS Node +### AB3DMOT Object Tracking 3D ROS Node A ROS node for performing Object Tracking 3D using AB3DMOT stateless method. This is a detection-based method, and therefore the 3D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. @@ -300,8 +309,7 @@ rosrun perception point_cloud_dataset.py ``` This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. - -## FairMOT Object Tracking 2D ROS Node +### FairMOT Object Tracking 2D ROS Node A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: @@ -319,21 +327,35 @@ rosrun perception image_dataset.py ``` This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. -## Deep Sort Object Tracking 2D ROS Node +---- +## Biosignal input -A ROS node for performing Object Tracking 2D using Deep Sort using either pretrained models on Market1501 dataset, or custom trained models. This is a detection-based method, and therefore the 2D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception object_tracking_2d_deep_sort.py -``` -To get images from usb_camera, you can start the camera node as: +---- + +### Heart Anomaly Detection ROS Node + +A ROS node for performing heart anomaly (atrial fibrillation) detection from ecg data using GRU or ANBOF models trained on AF dataset. Assuming that the OpenDR catkin workspace has been sourced, the node can be started as: ```shell -rosrun usb_cam usb_cam_node +rosrun perception heart_anomaly_detection.py ECG_TOPIC MODEL ``` -The corresponding `input_image_topic` should be `/usb_cam/image_raw`. -If you want to use a dataset from the disk, you can start an `image_dataset.py` node as: +with `ECG_TOPIC` specifying the ROS topic to which the node will subscribe, and `MODEL` set to either *gru* or *anbof*. The predictied classes are published to the topic `/opendr/heartanomaly`. 
+ +---- +## Audio input + +---- + +### Speech Command Recognition ROS Node + +A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset. +Assuming that the OpenDR catkin workspace has been sourced, the node can be started with: ```shell -rosrun perception image_dataset.py +rosrun perception speech_command_recognition.py INPUT_AUDIO_TOPIC ``` -This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. +The following optional arguments are available: +- `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds, default value **1.5** +- `--model MODEL`: choose the model to use: `matchboxnet` (default value), `edgespeechnets` or `quad_selfonn` +- `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server +The predictions (class id and confidence) are published to the topic `/opendr/speech_recognition`. +**Note:** EdgeSpeechNets currently does not have a pretrained model available for download, only local files may be used. From a18f4b5ff1463e0f95f48ece5b4ca95621ab7148 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Tue, 20 Sep 2022 15:29:34 +0300 Subject: [PATCH 05/57] Additional modifications and pose estimation section --- projects/opendr_ws/src/perception/README.md | 52 ++++++++++++--------- 1 file changed, 29 insertions(+), 23 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index ff9f0197da..8a363ac84e 100755 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -9,13 +9,15 @@ This package contains ROS nodes related to the perception package of OpenDR. ---- Before you can run any of the toolkit's ROS nodes, some prerequisites need to be fulfilled: -1. First of all, you need to [setup the required packages and build your workspace.](../../README.md#Setup) +1. First of all, you need to [set up the required packages and build your workspace.](../../README.md#Setup) 2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`. -3. For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install](../../README.md#Setup)). You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: +3. _(Optional for nodes with [RGB input](#rgb-input-nodes))_ + + For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install](../../README.md#Setup)). You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: ```shell rosrun usb_cam usb_cam_node ``` - By default, the usb cam node publishes images on `/usb_cam/image_raw` and most nodes also subscribe to this topic. 
As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.** + By default, the usb cam node publishes images on `/usb_cam/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument. As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.** ---- ## Dataset ROS Nodes @@ -26,13 +28,6 @@ The dataset nodes can be used to publish data from the disk, which is useful to Dataset nodes use a provided `DatasetIterator` object that returns a `(Data, Target)` pair. If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. -### Point Cloud Dataset ROS Node -To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: -```shell -rosrun perception point_cloud_dataset.py -``` -By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. - ### Image Dataset ROS Node To get an image from a dataset on the disk, you can start a `image_dataset.py` node as: ```shell @@ -40,30 +35,41 @@ rosrun perception image_dataset.py ``` By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements. You can inspect [the node](./scripts/image_dataset.py) and modify it to your needs for other image datasets. +### Point Cloud Dataset ROS Node +To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: +```shell +rosrun perception point_cloud_dataset.py +``` +By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. + ---- ## RGB input nodes ---- ### Pose Estimation ROS Node -Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can -1. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package): +You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here.](../../../../docs/reference/lightweight-open-pose.md) -```shell -rosrun usb_cam usb_cam_node -``` +Instructions for basic usage and testing: -2. 
You are then ready to start the pose detection node (use `-h` to print out help for various arguments) +1. Start the node responsible for publishing images. If you have a usb camera, then you can use the `usb_cam_node` as explained in the [prerequisites above.](#prerequisites) -```shell -rosrun perception pose_estimation.py -``` +2. You are then ready to start the pose detection node: + ```shell + rosrun perception pose_estimation.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated rgb image (default=`/opendr/image_pose_annotated`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/poses`) + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--accelerate`: Acceleration flag that causes pose estimation to run faster but with less accuracy -3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_pose_annotated`) or - `rostopic echo /opendr/poses`. +3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_pose_annotated` or by running `rostopic echo /opendr/poses`. -Note that to use the pose messages properly, you need to create an appropriate subscriber that will convert the ROS pose messages back to OpenDR poses which you can access as described in the [documentation](https://github.com/opendr-eu/opendr/blob/master/docs/reference/engine-target.md#posekeypoints-confidence): + ### Fall Detection ROS Node Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can From c5e403f3a7dc3a5074bd64e25272aff1ee213a2a Mon Sep 17 00:00:00 2001 From: tsampazk Date: Tue, 20 Sep 2022 15:52:30 +0300 Subject: [PATCH 06/57] Section renaming for consistency --- projects/opendr_ws/src/perception/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 8a363ac84e..af36f4a775 100755 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -161,7 +161,7 @@ rosrun perception object_detection_2d_detr.py ``` respectively. -### Deep Sort Object Tracking 2D ROS Node +### 2D Object Tracking Deep Sort ROS Node A ROS node for performing Object Tracking 2D using Deep Sort using either pretrained models on Market1501 dataset, or custom trained models. This is a detection-based method, and therefore the 2D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). 
Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: @@ -284,7 +284,7 @@ The predictied classes are published to the topic `/opendr/gestures`. ---- -### Voxel Object Detection 3D ROS Node +### 3D Object Detection Voxel ROS Node A ROS node for performing Object Detection 3D using PointPillars or TANet methods with either pretrained models on KITTI dataset, or custom trained models. The predicted detection annotations are pushed to `output_detection3d_topic` (default `output_detection3d_topic="/opendr/detection3d"`). @@ -299,7 +299,7 @@ rosrun perception point_cloud_dataset.py ``` This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. -### AB3DMOT Object Tracking 3D ROS Node +### 3D Object Tracking AB3DMOT ROS Node A ROS node for performing Object Tracking 3D using AB3DMOT stateless method. This is a detection-based method, and therefore the 3D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. @@ -315,7 +315,7 @@ rosrun perception point_cloud_dataset.py ``` This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. -### FairMOT Object Tracking 2D ROS Node +### 2D Object Tracking FairMOT ROS Node A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: From b267be08f24e8479314abb5faf96b4ce5b57bae6 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Tue, 20 Sep 2022 15:53:22 +0300 Subject: [PATCH 07/57] Some rearrangement in contents list to match the order --- projects/opendr_ws/README.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index f763dbd349..80c03f5128 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -35,6 +35,7 @@ catkin_make ```shell source devel/setup.bash ``` + ## Structure Currently, apart from tools, opendr_ws contains the following ROS nodes (categorized according to the input they receive): @@ -46,20 +47,20 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor 3. [Face Detection](src/perception/README.md#face-detection-ros-node) 4. [Face Recognition](src/perception/README.md#face-recognition-ros-node) 5. [2D Object Detection](src/perception/README.md#2d-object-detection-ros-nodes) -6. [2D Object Tracking - Deep Sort](src/perception/README.md#deep-sort-object-tracking-2d-ros-node) +6. [2D Object Tracking - Deep Sort](src/perception/README.md#2d-object-tracking-deep-sort-ros-node) 7. [Panoptic Segmentation](src/perception/README.md#panoptic-segmentation-ros-node) 8. 
[Semantic Segmentation](src/perception/README.md#semantic-segmentation-ros-node) 9. [Landmark-based Facial Expression Recognition](src/perception/README.md#landmark-based-facial-expression-recognition-ros-node) 10. [Skeleton-based Human Action Recognition](src/perception/README.md#skeleton-based-human-action-recognition-ros-node) 11. [Video Human Activity Recognition](src/perception/README.md#video-human-activity-recognition-ros-node) -## Point cloud input -1. [Voxel Object Detection 3D](src/perception/README.md#voxel-object-detection-3d-ros-node) -2. [AB3DMOT Object Tracking 3D](src/perception/README.md#ab3dmot-object-tracking-3d-ros-node) -3. [FairMOT Object Tracking 2D](src/perception/README.md#fairmot-object-tracking-2d-ros-node) ## RGB + Infrared input 1. [End-to-End Multi-Modal Object Detection (GEM)](src/perception/README.md#gem-ros-node) ## RGBD input 1. [RGBD Hand Gesture Recognition](src/perception/README.md#rgbd-hand-gesture-recognition-ros-node) +## Point cloud input +1. [3D Object Detection Voxel](src/perception/README.md#3d-object-detection-voxel-ros-node) +2. [3D Object Tracking AB3DMOT](src/perception/README.md#3d-object-tracking-ab3dmot-ros-node) +3. [2D Object Tracking FairMOT](src/perception/README.md#2d-object-tracking-fairmot-ros-node) ## Biosignal input 1. [Heart Anomaly Detection](src/perception/README.md#heart-anomaly-detection-ros-node) ## Audio input From c3bf14cbaa8f2c5e43f5f1182173fb3182c1bd06 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Wed, 21 Sep 2022 15:28:31 +0300 Subject: [PATCH 08/57] Fall detection doc and moved dataset nodes to bottom --- projects/opendr_ws/src/perception/README.md | 93 +++++++++++++-------- 1 file changed, 56 insertions(+), 37 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index af36f4a775..69fb178d8a 100755 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -6,7 +6,7 @@ This package contains ROS nodes related to the perception package of OpenDR. ## Prerequisites ----- +--- Before you can run any of the toolkit's ROS nodes, some prerequisites need to be fulfilled: 1. First of all, you need to [set up the required packages and build your workspace.](../../README.md#Setup) @@ -17,30 +17,19 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be ```shell rosrun usb_cam usb_cam_node ``` - By default, the usb cam node publishes images on `/usb_cam/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument. As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.** + By default, the USB cam node publishes images on `/usb_cam/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument. As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.** ----- -## Dataset ROS Nodes +--- ----- +## Notes -The dataset nodes can be used to publish data from the disk, which is useful to test the functionality without the use of a sensor. -Dataset nodes use a provided `DatasetIterator` object that returns a `(Data, Target)` pair. 
-If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. +--- -### Image Dataset ROS Node -To get an image from a dataset on the disk, you can start a `image_dataset.py` node as: -```shell -rosrun perception image_dataset.py -``` -By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements. You can inspect [the node](./scripts/image_dataset.py) and modify it to your needs for other image datasets. +- ### Increase performance by disabling output + Optionally, nodes can be modified via command line arguments, which are presented for each node separately below. Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc. When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic. This disables publishing on that topic, forgoing some operations in the node, which might increase its performance. -### Point Cloud Dataset ROS Node -To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: -```shell -rosrun perception point_cloud_dataset.py -``` -By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. + _An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._ + ---- ## RGB input nodes @@ -49,11 +38,11 @@ By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and use ### Pose Estimation ROS Node -You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here.](../../../../docs/reference/lightweight-open-pose.md) +You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md). -Instructions for basic usage and testing: +Instructions for basic usage and visualization of results: -1. Start the node responsible for publishing images. If you have a usb camera, then you can use the `usb_cam_node` as explained in the [prerequisites above.](#prerequisites) +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). 2. 
You are then ready to start the pose detection node: ```shell @@ -62,12 +51,12 @@ Instructions for basic usage and testing: The following optional arguments are available: - `-h, --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) - - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated rgb image (default=`/opendr/image_pose_annotated`) - - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/poses`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated rgb image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - `--accelerate`: Acceleration flag that causes pose estimation to run faster but with less accuracy -3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_pose_annotated` or by running `rostopic echo /opendr/poses`. +3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_pose_annotated` or by running `rostopic echo /opendr/poses`, where the node publishes the detected poses in [OpenDR's 2D pose message format](../ros_bridge/msg/OpenDRPose2D.msg). ### Fall Detection ROS Node -Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can -1. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package): +You can find the fall detection ROS node python script [here](./scripts/fall_detection.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [fall detection tool](../../../../src/opendr/perception/fall_detection/fall_detector_learner.py) whose documentation can be found [here](../../../../docs/reference/fall-detection.md). Fall detection uses the toolkit's pose estimation tool internally. -```shell -rosrun usb_cam usb_cam_node -``` + + +Instructions for basic usage and visualization of results: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). 2. 
You are then ready to start the fall detection node -```shell -rosrun perception fall_detection.py -``` + ```shell + rosrun perception fall_detection.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_fallen_annotated`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/fallen`) + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--accelerate`: Acceleration flag that causes pose estimation that runs internally to run faster but with less accuracy -3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_fall_annotated`) or - `rostopic echo /opendr/falls`, where the node publishes bounding boxes of detected fallen poses +3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_fall_annotated` or by running `rostopic echo /opendr/fallen`, where the node publishes bounding boxes of detected fallen poses. ### Face Detection ROS Node A ROS node for the RetinaFace detector is implemented, supporting both the ResNet and MobileNet versions, the latter of @@ -111,7 +107,7 @@ under `/opendr/faces`. Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can -1. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package): +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the corresponding node (assuming you have installed the corresponding package): ```shell rosrun usb_cam usb_cam_node @@ -365,3 +361,26 @@ The following optional arguments are available: The predictions (class id and confidence) are published to the topic `/opendr/speech_recognition`. **Note:** EdgeSpeechNets currently does not have a pretrained model available for download, only local files may be used. + +---- +## Dataset ROS Nodes + +---- + +The dataset nodes can be used to publish data from the disk, which is useful to test the functionality without the use of a sensor. +Dataset nodes use a provided `DatasetIterator` object that returns a `(Data, Target)` pair. +If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. + +### Image Dataset ROS Node +To get an image from a dataset on the disk, you can start a `image_dataset.py` node as: +```shell +rosrun perception image_dataset.py +``` +By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements. You can inspect [the node](./scripts/image_dataset.py) and modify it to your needs for other image datasets. 
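Since the image dataset node publishes on the `/opendr/dataset_image` topic by default, it can be used to feed any of the RGB input nodes described above by pointing their input topic argument at it, for example the pose estimation node:
```shell
rosrun perception image_dataset.py
```
and, in a second terminal:
```shell
rosrun perception pose_estimation.py -i /opendr/dataset_image
```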
+ +### Point Cloud Dataset ROS Node +To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: +```shell +rosrun perception point_cloud_dataset.py +``` +By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. From 42d0f00dc8e090ad23fb249baab0cc1c54224a1d Mon Sep 17 00:00:00 2001 From: tsampazk Date: Wed, 21 Sep 2022 17:14:37 +0300 Subject: [PATCH 09/57] Face det, reco, 2d object detection overhaul and todo notes --- projects/opendr_ws/src/perception/README.md | 146 +++++++++++++------- 1 file changed, 99 insertions(+), 47 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 69fb178d8a..f1d10f29d2 100755 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -95,29 +95,55 @@ Instructions for basic usage and visualization of results: 3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_fall_annotated` or by running `rostopic echo /opendr/fallen`, where the node publishes bounding boxes of detected fallen poses. ### Face Detection ROS Node -A ROS node for the RetinaFace detector is implemented, supporting both the ResNet and MobileNet versions, the latter of -which performs mask recognition as well. After setting up the environment, the detector node can be initiated as: -```shell -rosrun perception face_detection_retinaface.py -``` -The annotated image stream is published under the topic name `/opendr/image_boxes_annotated`, and the bounding boxes alone -under `/opendr/faces`. + +The face detection ROS node supports both the ResNet and MobileNet versions, of latter of which performs mask recognition as well. + +You can find the face detection ROS node python script [here](./scripts/face_detection_retinaface.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [face detection tool](../../../../src/opendr/perception/object_detection_2d/retinaface/retinaface_learner.py) whose documentation can be found [here](../../../../docs/reference/face-detection-2d-retinaface.md). + +Instructions for basic usage and visualization of results: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). + +2. 
You are then ready to start the face detection node + + ```shell + rosrun perception face_detection_retinaface.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_faces_annotated`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/faces`) + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--backbone BACKBONE`: Retinaface backbone, options are either 'mnet' or 'resnet', where 'mnet' detects masked faces as well (default=`resnet`) + +3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_faces_annotated` or by running `rostopic echo /opendr/faces`, where the node publishes bounding boxes of detected faces. ### Face Recognition ROS Node -Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can +You can find the face recognition ROS node python script [here](./scripts/face_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [face recognition tool](../../../../src/opendr/perception/face_recognition/face_recognition_learner.py) whose documentation can be found [here](../../../../docs/reference/face-recognition.md). -1. Start the node responsible for publishing images. If you have a USB camera, then you can use the corresponding node (assuming you have installed the corresponding package): +Instructions for basic usage and visualization of results: -```shell -rosrun usb_cam usb_cam_node -``` +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. You are then ready to start the face recognition node. Note that you should pass the folder containing the images of known faces as argument to create the corresponding database of known persons. +2. 
You are then ready to start the face recognition node + + ```shell + rosrun perception face_recognition.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_face_reco_annotated`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition`) + - `-id or --detections_id_topic DETECTIONS_ID_TOPIC`: topic name for detection ID messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition_id`) + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--backbone BACKBONE`: Backbone network (default=`mobilefacenet`) + - `--dataset_path DATASET_PATH`: Path of the directory where the images of the faces to be recognized are stored (default=`./database`) + +3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_face_reco_annotated` or by running `rostopic echo /opendr/face_recognition`. -```shell -rosrun perception face_recognition.py _database_path:='./database' -``` **Notes** Reference images should be placed in a defined structure like: @@ -129,36 +155,61 @@ Reference images should be placed in a defined structure like: - ID3 - ... +The default dataset path is `./database`. Please use the `--database_path ./your/path/` argument to define a custom one. Τhe name of the sub-folder, e.g. ID1, will be published under `/opendr/face_recognition_id`. -4. The database entry and the returned confidence is published under the topic name `/opendr/face_recognition`, and the human-readable ID +The database entry and the returned confidence is published under the topic name `/opendr/face_recognition`, and the human-readable ID under `/opendr/face_recognition_id`. ### 2D Object Detection ROS Nodes -ROS nodes are implemented for the SSD, YOLOv3, CenterNet and DETR generic object detectors. Steps 1, 2 from above must run first. -Then, to initiate the SSD detector node, run: -```shell -rosrun perception object_detection_2d_ssd.py -``` -The annotated image stream can be viewed using `rqt_image_view`, and the default topic name is -`/opendr/image_boxes_annotated`. The bounding boxes alone are also published as `/opendr/objects`. -Similarly, the YOLOv3, CenterNet and DETR detector nodes can be run with: -```shell -rosrun perception object_detection_2d_yolov3.py -``` -or -```shell -rosrun perception object_detection_2d_centernet.py -``` -or -```shell -rosrun perception object_detection_2d_detr.py -``` -respectively. +For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic obejct detectors are SSD, YOLOv3, CenterNet and DETR. 
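
Once one of the detector nodes described below is running, its detections can also be consumed from your own code. The sketch below is only an illustration: it assumes the default detections topic `/opendr/objects` listed further down and assumes the messages are of type `vision_msgs/Detection2DArray`; check the actual type on your setup with `rostopic info /opendr/objects` before relying on it.
```python
#!/usr/bin/env python
# Sketch only: print 2D detections published by one of the OpenDR detector nodes.
# Assumes the default topic and the vision_msgs/Detection2DArray message type; verify both first.
import rospy
from vision_msgs.msg import Detection2DArray


def callback(msg):
    for detection in msg.detections:
        box = detection.bbox
        # Each detection carries one or more class hypotheses; take the top one
        if detection.results:
            top = detection.results[0]
            rospy.loginfo("class id %s, score %.2f, box center (%.0f, %.0f), size %dx%d",
                          top.id, top.score, box.center.x, box.center.y, box.size_x, box.size_y)


if __name__ == '__main__':
    rospy.init_node('objects_listener', anonymous=True)
    rospy.Subscriber('/opendr/objects', Detection2DArray, callback)
    rospy.spin()
```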
-### 2D Object Tracking Deep Sort ROS Node +You can find the 2D object detection ROS node python scripts here: [SSD node](./scripts/object_detection_2d_ssd.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3.py), [CenterNet node](./scripts/object_detection_2d_centernet.py) and [DETR node](./scripts/object_detection_2d_detr.py), where you can inspect the code and modify it as you wish to fit your needs. The nodes makes use of the toolkit's various 2D object detection tools: [SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), [CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py), whose documentation can be found here: [SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), [YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [DETR docs](../../../../docs/reference/detr.md). + +Instructions for basic usage and visualization of results: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). +2. You are then ready to start a 2D object detector node: + 1. SSD node + ```shell + rosrun perception object_detection_2d_ssd.py + ``` + The following optional arguments are available for the SSD node: + - `--backbone BACKBONE`: Backbone network (default=`vgg16_atrous`) + - `--nms_type NMS_TYPE`: Non-Maximum Suppression type options are `default`, `seq2seq-nms`, `soft-nms`, `fast-nms`, `cluster-nms` (default=`default`) + + 2. YOLOv3 node + ```shell + rosrun perception object_detection_2d_yolov3.py + ``` + The following optional argument is available for the YOLOv3 node: + - `--backbone BACKBONE`: Backbone network (default=`darknet53`) + + 3. CenterNet node + ```shell + rosrun perception object_detection_2d_centernet.py + ``` + The following optional argument is available for the YOLOv3 node: + - `--backbone BACKBONE`: Backbone network (default=`resnet50_v1b`) + + 4. DETR node + ```shell + rosrun perception object_detection_2d_detr.py + ``` + + The following optional arguments are available for all nodes above: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_objects_annotated`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`) + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + +3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_objects_annotated` or by running `rostopic echo /opendr/objects`, where the bounding boxes alone are published. + +### 2D Object Tracking Deep Sort ROS Node + A ROS node for performing Object Tracking 2D using Deep Sort using either pretrained models on Market1501 dataset, or custom trained models. 
This is a detection-based method, and therefore the 2D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell @@ -203,7 +254,7 @@ Additionally, the following optional arguments are available: - `--heamap_topic HEATMAP_TOPIC`: publish the heatmap on `HEATMAP_TOPIC` ### Landmark-based Facial Expression Recognition ROS Node - + A ROS node for performing Landmark-based Facial Expression Recognition using the pretrained model PST-BLN on AFEW, CK+ or Oulu-CASIA datasets. Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell @@ -212,7 +263,7 @@ rosrun perception landmark_based_facial_expression_recognition.py The predictied class id and confidence is published under the topic name `/opendr/landmark_based_expression_recognition`, and the human-readable class name under `/opendr/landmark_based_expression_recognition_description`. ### Skeleton-based Human Action Recognition ROS Node - + A ROS node for performing Skeleton-based Human Action Recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. The human body poses of the image are first extracted by the light-weight Openpose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized. Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell @@ -222,7 +273,7 @@ The predictied class id and confidence is published under the topic name `/opend Besides, the annotated image is published in `/opendr/image_pose_annotated` as well as the corresponding poses in `/opendr/poses`. ### Video Human Activity Recognition ROS Node - + A ROS node for performing Human Activity Recognition using either CoX3D or X3D models pretrained on Kinetics400. Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell @@ -236,6 +287,7 @@ The predictied class id and confidence is published under the topic name `/opend ---- ### GEM ROS Node + Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can @@ -268,7 +320,7 @@ rosrun perception object_detection_2d_gem.py ---- ### RGBD Hand Gesture Recognition ROS Node - + A ROS node for performing hand gesture recognition using MobileNetv2 model trained on HANDS dataset. The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2. Assuming that the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell rosrun perception rgbd_hand_gesture_recognition.py @@ -281,7 +333,7 @@ The predictied classes are published to the topic `/opendr/gestures`. 
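
When adapting an RGB-D node like this one to a different camera, the usual ROS pattern is to pair the colour and depth streams with `message_filters` before running inference. The snippet below is only a sketch of that pattern: the topic names are examples in the style of the iai_kinect2 driver mentioned above and will differ for other drivers, and the actual OpenDR node may wire up its inputs differently.
```python
#!/usr/bin/env python
# Sketch only: pair an RGB stream with a depth stream before feeding an RGB-D model.
# Topic names are examples (iai_kinect2-style) and must be adapted to your camera driver.
import message_filters
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()


def synced_callback(rgb_msg, depth_msg):
    rgb = bridge.imgmsg_to_cv2(rgb_msg, desired_encoding='bgr8')
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding='passthrough')
    # At this point rgb and depth are time-aligned numpy arrays and can be passed to a learner
    rospy.loginfo("got synced pair: rgb %s, depth %s", rgb.shape, depth.shape)


if __name__ == '__main__':
    rospy.init_node('rgbd_sync_example', anonymous=True)
    rgb_sub = message_filters.Subscriber('/kinect2/qhd/image_color_rect', Image)
    depth_sub = message_filters.Subscriber('/kinect2/qhd/image_depth_rect', Image)
    # Allow a small timestamp difference between the two streams
    sync = message_filters.ApproximateTimeSynchronizer([rgb_sub, depth_sub], queue_size=10, slop=0.05)
    sync.registerCallback(synced_callback)
    rospy.spin()
```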
---- ### 3D Object Detection Voxel ROS Node - + A ROS node for performing Object Detection 3D using PointPillars or TANet methods with either pretrained models on KITTI dataset, or custom trained models. The predicted detection annotations are pushed to `output_detection3d_topic` (default `output_detection3d_topic="/opendr/detection3d"`). @@ -296,7 +348,7 @@ rosrun perception point_cloud_dataset.py This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. ### 3D Object Tracking AB3DMOT ROS Node - + A ROS node for performing Object Tracking 3D using AB3DMOT stateless method. This is a detection-based method, and therefore the 3D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection3d"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking3d_id"`). @@ -312,7 +364,7 @@ rosrun perception point_cloud_dataset.py This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. ### 2D Object Tracking FairMOT ROS Node - + A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell @@ -335,7 +387,7 @@ This will pulbish the dataset images to an `/opendr/dataset_image` topic by defa ---- ### Heart Anomaly Detection ROS Node - + A ROS node for performing heart anomaly (atrial fibrillation) detection from ecg data using GRU or ANBOF models trained on AF dataset. Assuming that the OpenDR catkin workspace has been sourced, the node can be started as: ```shell rosrun perception heart_anomaly_detection.py ECG_TOPIC MODEL @@ -348,7 +400,7 @@ with `ECG_TOPIC` specifying the ROS topic to which the node will subscribe, and ---- ### Speech Command Recognition ROS Node - + A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset. Assuming that the OpenDR catkin workspace has been sourced, the node can be started with: ```shell From 12ab09cd1feea8b0063c8818478ac6497f66f2fc Mon Sep 17 00:00:00 2001 From: tsampazk Date: Thu, 22 Sep 2022 14:35:07 +0300 Subject: [PATCH 10/57] Added a class id table on sem segmentation doc --- docs/reference/semantic-segmentation.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/reference/semantic-segmentation.md b/docs/reference/semantic-segmentation.md index 783b801810..9a0b0f2969 100644 --- a/docs/reference/semantic-segmentation.md +++ b/docs/reference/semantic-segmentation.md @@ -2,6 +2,11 @@ The *semantic segmentation* module contains the *BisenetLearner* class, which inherit from the abstract class *Learner*. 
+On the table below you can find the detectable classes and their corresponding IDs: + +| Class | Bicyclist | Building | Car | Column Pole | Fence | Pedestrian | Road | Sidewalk | Sign Symbol | Sky | Tree | Unknown | +|--------|-----------|----------|-----|-------------|-------|------------|------|----------|-------------|-----|------|---------| +| **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | ### Class BisenetLearner Bases: `engine.learners.Learner` From a02f6ab7e0e5db67d148012f3e9e96fabcbb2c37 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Thu, 22 Sep 2022 14:35:31 +0300 Subject: [PATCH 11/57] Panoptic and semantic segmentation overhaul --- projects/opendr_ws/src/perception/README.md | 79 ++++++++++++++------- 1 file changed, 53 insertions(+), 26 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index f1d10f29d2..92cd492d5d 100755 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -50,8 +50,8 @@ Instructions for basic usage and visualization of results: ``` The following optional arguments are available: - `-h, --help`: show a help message and exit - - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) - - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated rgb image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`) + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - `--accelerate`: Acceleration flag that causes pose estimation to run faster but with less accuracy @@ -86,7 +86,7 @@ Instructions for basic usage and visualization of results: ``` The following optional arguments are available: - `-h, --help`: show a help message and exit - - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_fallen_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/fallen`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) @@ -111,7 +111,7 @@ Instructions for basic usage and visualization of results: ``` The following optional arguments are available: - `-h, --help`: show a help message and exit - - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or 
--output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_faces_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/faces`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) @@ -134,7 +134,7 @@ Instructions for basic usage and visualization of results: ``` The following optional arguments are available: - `-h, --help`: show a help message and exit - - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_face_reco_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition`) - `-id or --detections_id_topic DETECTIONS_ID_TOPIC`: topic name for detection ID messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition_id`) @@ -201,7 +201,7 @@ Instructions for basic usage and visualization of results: The following optional arguments are available for all nodes above: - `-h, --help`: show a help message and exit - - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input rgb image (default=`/usb_cam/image_raw`) + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_objects_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) @@ -227,31 +227,58 @@ rosrun perception image_dataset.py This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. ### Panoptic Segmentation ROS Node -A ROS node for performing panoptic segmentation on a specified RGB image stream using the [EfficientPS](../../../../src/opendr/perception/panoptic_segmentation/README.md) network. 
-Assuming that the OpenDR catkin workspace has been sourced, the node can be started with: -```shell -rosrun perception panoptic_segmentation_efficient_ps.py -``` -The following optional arguments are available: -- `-h, --help`: show a help message and exit -- `--input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC` : listen to RGB images on this topic (default=`/usb_cam/image_raw`) -- `--checkpoint CHECKPOINT` : download pretrained models [cityscapes, kitti] or load from the provided path (default=`cityscapes`) -- `--output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: publish the semantic and instance maps on this topic as `OUTPUT_HEATMAP_TOPIC/semantic` and `OUTPUT_HEATMAP_TOPIC/instance` (default=`/opendir/panoptic`) -- `--visualization_topic VISUALIZATION_TOPIC`: publish the panoptic segmentation map as an RGB image on `VISUALIZATION_TOPIC` or a more detailed overview if using the `--detailed_visualization` flag (default=`/opendr/panoptic/rgb_visualization`) -- `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated) +You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md) and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md). +Instructions for basic usage and visualization of results: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). + +2. You are then ready to start the face recognition node + + ```shell + rosrun perception panoptic_segmentation_efficient_ps.py + ``` + + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `--input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC` : listen to RGB images on this topic (default=`/usb_cam/image_raw`) + - `--checkpoint CHECKPOINT` : download pretrained models [cityscapes, kitti] or load from the provided path (default=`cityscapes`) + - `--output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: publish the semantic and instance maps on this topic as `OUTPUT_HEATMAP_TOPIC/semantic` and `OUTPUT_HEATMAP_TOPIC/instance`, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic`) + - `--visualization_topic VISUALIZATION_TOPIC`: publish the panoptic segmentation map as an RGB image on `VISUALIZATION_TOPIC` or a more detailed overview if using the `--detailed_visualization` flag, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic/rgb_visualization`) + - `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated) + +3. 
In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topics `/opendr/panoptic/semantic`, `/opendr/panoptic/instance` and `/opendr/panoptic/rgb_visualization` or by running `rostopic echo /opendr/panoptic/semantic`, `rostopic echo /opendr/panoptic/instance` and `rostopic echo /opendr/panoptic/rgb_visualization`. ### Semantic Segmentation ROS Node -A ROS node for performing semantic segmentation on an input image using the BiseNet model. -Assuming that the OpenDR catkin workspace has been sourced, the node can be started with: -```shell -rosrun perception semantic_segmentation_bisenet.py IMAGE_TOPIC -``` -Additionally, the following optional arguments are available: -- `-h, --help`: show a help message and exit -- `--heamap_topic HEATMAP_TOPIC`: publish the heatmap on `HEATMAP_TOPIC` +You can find the semantic segmentation ROS node python script [here](./scripts/semantic_segmentation_bisenet.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [semantic segmentation tool](../../../../src/opendr/perception/semantic_segmentation/bisenet/bisenet_learner.py) whose documentation can be found [here](../../../../docs/reference/semantic-segmentation.md). + +Instructions for basic usage and visualization of results: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). + +2. You are then ready to start the face recognition node + + ```shell + rosrun perception semantic_segmentation_bisenet.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) + - `-o or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: topic to which we are publishing the heatmap in the form of a ROS image containing class IDs, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap`) + - `-v or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap_visualization`) + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + +3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/heatmap_visualization` or by running `rostopic echo /opendr/heatmap`. 
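
Because the heatmap topic carries a plain ROS image whose pixel values are class IDs (see the table in the notes below), it can be post-processed directly in your own node. The following is a small sketch of that idea: the class-name list simply mirrors the table below, and the image is read with a `passthrough` encoding so the raw IDs are preserved.
```python
#!/usr/bin/env python
# Sketch only: count how many pixels of each class appear in the BiseNet heatmap.
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

CLASSES = ["Bicyclist", "Building", "Car", "Column Pole", "Fence", "Pedestrian",
           "Road", "Sidewalk", "Sign Symbol", "Sky", "Tree", "Unknown"]
bridge = CvBridge()


def heatmap_callback(msg):
    class_ids = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
    counts = np.bincount(class_ids.flatten().astype(np.int64), minlength=len(CLASSES))
    present = {CLASSES[i]: int(counts[i]) for i in range(len(CLASSES)) if counts[i] > 0}
    rospy.loginfo("pixels per class: %s", present)


if __name__ == '__main__':
    rospy.init_node('heatmap_listener', anonymous=True)
    rospy.Subscriber('/opendr/heatmap', Image, heatmap_callback)
    rospy.spin()
```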
+ +**Notes** + +On the table below you can find the detectable classes and their corresponding IDs: + +| Class | Bicyclist | Building | Car | Column Pole | Fence | Pedestrian | Road | Sidewalk | Sign Symbol | Sky | Tree | Unknown | +|--------|-----------|----------|-----|-------------|-------|------------|------|----------|-------------|-----|------|---------| +| **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | ### Landmark-based Facial Expression Recognition ROS Node From 6f3545915aec762a0e9c4330cb3970ed4676bca3 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Mon, 26 Sep 2022 12:43:40 +0300 Subject: [PATCH 12/57] Fix long lines as per suggestions --- projects/opendr_ws/src/perception/README.md | 30 +++++++++++++++------ 1 file changed, 22 insertions(+), 8 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 4040cc2608..206f7fdd01 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -38,7 +38,8 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be ### Pose Estimation ROS Node -You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md). +You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md). Instructions for basic usage and visualization of results: @@ -71,7 +72,9 @@ Instructions for basic usage and visualization of results: ### Fall Detection ROS Node -You can find the fall detection ROS node python script [here](./scripts/fall_detection.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [fall detection tool](../../../../src/opendr/perception/fall_detection/fall_detector_learner.py) whose documentation can be found [here](../../../../docs/reference/fall-detection.md). Fall detection uses the toolkit's pose estimation tool internally. +You can find the fall detection ROS node python script [here](./scripts/fall_detection.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [fall detection tool](../../../../src/opendr/perception/fall_detection/fall_detector_learner.py) whose documentation can be found [here](../../../../docs/reference/fall-detection.md). +Fall detection uses the toolkit's pose estimation tool internally. @@ -121,7 +124,8 @@ Instructions for basic usage and visualization of results: ### Face Recognition ROS Node -You can find the face recognition ROS node python script [here](./scripts/face_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [face recognition tool](../../../../src/opendr/perception/face_recognition/face_recognition_learner.py) whose documentation can be found [here](../../../../docs/reference/face-recognition.md). 
+You can find the face recognition ROS node python script [here](./scripts/face_recognition.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [face recognition tool](../../../../src/opendr/perception/face_recognition/face_recognition_learner.py) whose documentation can be found [here](../../../../docs/reference/face-recognition.md). Instructions for basic usage and visualization of results: @@ -165,7 +169,8 @@ under `/opendr/face_recognition_id`. For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic obejct detectors are SSD, YOLOv3, CenterNet and DETR. -You can find the 2D object detection ROS node python scripts here: [SSD node](./scripts/object_detection_2d_ssd.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3.py), [CenterNet node](./scripts/object_detection_2d_centernet.py) and [DETR node](./scripts/object_detection_2d_detr.py), where you can inspect the code and modify it as you wish to fit your needs. The nodes makes use of the toolkit's various 2D object detection tools: [SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), [CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py), whose documentation can be found here: [SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), [YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [DETR docs](../../../../docs/reference/detr.md). +You can find the 2D object detection ROS node python scripts here: [SSD node](./scripts/object_detection_2d_ssd.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3.py), [CenterNet node](./scripts/object_detection_2d_centernet.py) and [DETR node](./scripts/object_detection_2d_detr.py), where you can inspect the code and modify it as you wish to fit your needs. +The nodes makes use of the toolkit's various 2D object detection tools: [SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), [CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py), whose documentation can be found here: [SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), [YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [DETR docs](../../../../docs/reference/detr.md). Instructions for basic usage and visualization of results: @@ -210,7 +215,10 @@ Instructions for basic usage and visualization of results: ### 2D Object Tracking Deep Sort ROS Node -A ROS node for performing Object Tracking 2D using Deep Sort using either pretrained models on Market1501 dataset, or custom trained models. This is a detection-based method, and therefore the 2D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. 
The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) +A ROS node for performing Object Tracking 2D using Deep Sort using either pretrained models on Market1501 dataset, or custom trained models. +This is a detection-based method, and therefore the 2D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. +The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). +Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell @@ -351,7 +359,9 @@ rosrun perception object_detection_2d_gem.py ### RGBD Hand Gesture Recognition ROS Node -A ROS node for performing hand gesture recognition using MobileNetv2 model trained on HANDS dataset. The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2. Assuming that the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: +A ROS node for performing hand gesture recognition using MobileNetv2 model trained on HANDS dataset. +The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2. +Assuming that the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell rosrun perception rgbd_hand_gesture_recognition.py ``` @@ -458,11 +468,15 @@ To get an image from a dataset on the disk, you can start a `image_dataset.py` n ```shell rosrun perception image_dataset.py ``` -By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements. You can inspect [the node](./scripts/image_dataset.py) and modify it to your needs for other image datasets. +By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. +You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements. +You can inspect [the node](./scripts/image_dataset.py) and modify it to your needs for other image datasets. ### Point Cloud Dataset ROS Node To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: ```shell rosrun perception point_cloud_dataset.py ``` -By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. 
+By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. +You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. +You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. From 95b0ba6a5894da1422b1498d6783fcdfa7136f66 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Mon, 26 Sep 2022 12:44:04 +0300 Subject: [PATCH 13/57] Fix long lines as per suggestions --- projects/opendr_ws/src/perception/README.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 206f7fdd01..ac3bbc2e26 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -26,7 +26,10 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be --- - ### Increase performance by disabling output - Optionally, nodes can be modified via command line arguments, which are presented for each node separately below. Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc. When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic. This disables publishing on that topic, forgoing some operations in the node, which might increase its performance. +Optionally, nodes can be modified via command line arguments, which are presented for each node separately below. +Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc. +When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic. +This disables publishing on that topic, forgoing some operations in the node, which might increase its performance. _An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._ @@ -239,7 +242,8 @@ This will pulbish the dataset images to an `/opendr/dataset_image` topic by defa ### Panoptic Segmentation ROS Node -You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md) and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md). +You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md) and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md). 
Instructions for basic usage and visualization of results: From fdd730c0dcb0218b300e8f29e6a5468c8f8c1fdb Mon Sep 17 00:00:00 2001 From: tsampazk Date: Mon, 26 Sep 2022 12:51:28 +0300 Subject: [PATCH 14/57] Removed commented pose estimation usage suggestion --- projects/opendr_ws/src/perception/README.md | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index ac3bbc2e26..12674e0e4f 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -62,17 +62,6 @@ Instructions for basic usage and visualization of results: 3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_pose_annotated` or by running `rostopic echo /opendr/poses`, where the node publishes the detected poses in [OpenDR's 2D pose message format](../ros_bridge/msg/OpenDRPose2D.msg). - - ### Fall Detection ROS Node You can find the fall detection ROS node python script [here](./scripts/fall_detection.py) to inspect the code and modify it as you wish to fit your needs. From 032ef364b9a3542786cc6c380bd3208444a8dea4 Mon Sep 17 00:00:00 2001 From: LukasHedegaard Date: Thu, 29 Sep 2022 09:59:13 +0000 Subject: [PATCH 15/57] Update video HAR docs foro ROS node --- projects/opendr_ws/src/perception/README.md | 25 +++++++++++++++------ 1 file changed, 18 insertions(+), 7 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 12674e0e4f..db0eb67ea9 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -304,15 +304,26 @@ The predictied class id and confidence is published under the topic name `/opend Besides, the annotated image is published in `/opendr/image_pose_annotated` as well as the corresponding poses in `/opendr/poses`. ### Video Human Activity Recognition ROS Node - A ROS node for performing Human Activity Recognition using either CoX3D or X3D models pretrained on Kinetics400. -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception video_activity_recognition.py -``` -The predictied class id and confidence is published under the topic name `/opendr/human_activity_recognition`, and the human-readable class name under `/opendr/human_activity_recognition_description`. +Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as follows: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). + +2. 
You are then ready to start the pose detection node: + ```shell + rosrun perception pose_estimation.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) + - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: Topic to which we are publishing the recognized activity (default=`"/opendr/human_activity_recognition"`) + - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: Topic to which we are publishing the ID of the recognized action (default=`/opendr/human_activity_recognition_description`) + - `--model`: Architecture to use for human activity recognition. Choices are "cox3d-s", "cox3d-m", "cox3d-l", "x3d-xs", "x3d-s", "x3d-m", or "x3d-l" (Default: "cox3d-m"). + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + +3. In a new terminal you can view predictions by running `rostopic echo /opendr/human_activity_recognition`. + ----- ## RGB + Infrared input ---- From d565ac5277cb215465347ccb7fb5a9bc7d218d93 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Thu, 29 Sep 2022 13:43:26 +0300 Subject: [PATCH 16/57] Updated the video human activity recognition section and some other minor fixes --- projects/opendr_ws/src/perception/README.md | 22 +++++++++++++-------- 1 file changed, 14 insertions(+), 8 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index db0eb67ea9..a86964043e 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -238,7 +238,7 @@ Instructions for basic usage and visualization of results: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. You are then ready to start the face recognition node +2. You are then ready to start the panoptic segmentation node ```shell rosrun perception panoptic_segmentation_efficient_ps.py @@ -262,7 +262,7 @@ Instructions for basic usage and visualization of results: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. You are then ready to start the face recognition node +2. You are then ready to start the semantic segmentation node ```shell rosrun perception semantic_segmentation_bisenet.py @@ -304,14 +304,21 @@ The predictied class id and confidence is published under the topic name `/opend Besides, the annotated image is published in `/opendr/image_pose_annotated` as well as the corresponding poses in `/opendr/poses`. ### Video Human Activity Recognition ROS Node + A ROS node for performing Human Activity Recognition using either CoX3D or X3D models pretrained on Kinetics400. -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as follows: + +You can find the video human activity recognition ROS node python script [here](./scripts/video_activity_recognition.py) to inspect the code and modify it as you wish to fit your needs. 
+The node makes use of the toolkit's video human activity recognition tools which can be found [here for CoX3D](../../../../src/opendr/perception/activity_recognition/cox3d/cox3d_learner.py) and +[here for X3D](../../../../src/opendr/perception/activity_recognition/x3d/x3d_learner.py) whose documentation can be found [here](../../../../docs/reference/activity-recognition.md). + +Instructions for basic usage and visualization of results: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. You are then ready to start the pose detection node: +2. You are then ready to start the video human activity recognition node: + ```shell - rosrun perception pose_estimation.py + rosrun perception video_activity_recognition.py ``` The following optional arguments are available: - `-h, --help`: show a help message and exit @@ -320,9 +327,8 @@ Assuming the drivers have been installed and OpenDR catkin workspace has been so - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: Topic to which we are publishing the ID of the recognized action (default=`/opendr/human_activity_recognition_description`) - `--model`: Architecture to use for human activity recognition. Choices are "cox3d-s", "cox3d-m", "cox3d-l", "x3d-xs", "x3d-s", "x3d-m", or "x3d-l" (Default: "cox3d-m"). - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - -3. In a new terminal you can view predictions by running `rostopic echo /opendr/human_activity_recognition`. - + +3. In a new terminal you can view predictions by running `rostopic echo /opendr/human_activity_recognition` and `rostopic echo /opendr/human_activity_recognition_description`. ## RGB + Infrared input From 9012c04abb52ee82176deba6183e0a3b5218ae24 Mon Sep 17 00:00:00 2001 From: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com> Date: Wed, 5 Oct 2022 16:10:52 +0300 Subject: [PATCH 17/57] Fixed italics showing as block --- projects/opendr_ws/src/perception/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index a86964043e..6a63d38e1a 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -31,7 +31,7 @@ Generally, arguments give the option to change the input and output topics, the When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic. This disables publishing on that topic, forgoing some operations in the node, which might increase its performance. 
- _An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._ +_An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._ ---- From d140125c2f5977112e71445492414d14f1713883 Mon Sep 17 00:00:00 2001 From: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com> Date: Wed, 12 Oct 2022 14:32:59 +0300 Subject: [PATCH 18/57] Removed redundant line separators after headers --- projects/opendr_ws/src/perception/README.md | 16 ---------------- 1 file changed, 16 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 6a63d38e1a..6ef16ab3a5 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -6,8 +6,6 @@ This package contains ROS nodes related to the perception package of OpenDR. ## Prerequisites ---- - Before you can run any of the toolkit's ROS nodes, some prerequisites need to be fulfilled: 1. First of all, you need to [set up the required packages and build your workspace.](../../README.md#Setup) 2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`. @@ -23,8 +21,6 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be ## Notes ---- - - ### Increase performance by disabling output Optionally, nodes can be modified via command line arguments, which are presented for each node separately below. Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc. @@ -37,8 +33,6 @@ _An example would be to disable the output annotated image topic in a node when ---- ## RGB input nodes ----- - ### Pose Estimation ROS Node You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. @@ -332,8 +326,6 @@ Instructions for basic usage and visualization of results: ## RGB + Infrared input ----- - ### GEM ROS Node Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can @@ -380,8 +372,6 @@ The predictied classes are published to the topic `/opendr/gestures`. ---- ## Point cloud input ----- - ### 3D Object Detection Voxel ROS Node A ROS node for performing Object Detection 3D using PointPillars or TANet methods with either pretrained models on KITTI dataset, or custom trained models. @@ -434,8 +424,6 @@ This will pulbish the dataset images to an `/opendr/dataset_image` topic by defa ---- ## Biosignal input ----- - ### Heart Anomaly Detection ROS Node A ROS node for performing heart anomaly (atrial fibrillation) detection from ecg data using GRU or ANBOF models trained on AF dataset. Assuming that the OpenDR catkin workspace has been sourced, the node can be started as: @@ -447,8 +435,6 @@ with `ECG_TOPIC` specifying the ROS topic to which the node will subscribe, and ---- ## Audio input ----- - ### Speech Command Recognition ROS Node A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset. 
@@ -467,8 +453,6 @@ The predictions (class id and confidence) are published to the topic `/opendr/sp ---- ## Dataset ROS Nodes ----- - The dataset nodes can be used to publish data from the disk, which is useful to test the functionality without the use of a sensor. Dataset nodes use a provided `DatasetIterator` object that returns a `(Data, Target)` pair. If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. From 08f9ada44cf6e82251524899eb53677e4e7fbf3d Mon Sep 17 00:00:00 2001 From: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com> Date: Thu, 13 Oct 2022 12:28:36 +0300 Subject: [PATCH 19/57] Removed redundant horizontal line from RGBD header --- projects/opendr_ws/src/perception/README.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 6ef16ab3a5..3019697180 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -357,8 +357,6 @@ rosrun perception object_detection_2d_gem.py ---- ## RGBD input ----- - ### RGBD Hand Gesture Recognition ROS Node A ROS node for performing hand gesture recognition using MobileNetv2 model trained on HANDS dataset. From 3dfd1084de028dffb7ef06af6f66349275631b7a Mon Sep 17 00:00:00 2001 From: tsampazk Date: Thu, 13 Oct 2022 13:08:57 +0300 Subject: [PATCH 20/57] Added notes for output visualization and updated pose estimation docs --- projects/opendr_ws/src/perception/README.md | 33 ++++++++++++++++----- 1 file changed, 25 insertions(+), 8 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 3019697180..d37e1ea084 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -21,13 +21,28 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be ## Notes +- ### Display output images with rqt_image_view + For any node that outputs images, `rqt_image_view` can be used to display them by running the following command in a new terminal: + ```shell + rosrun rqt_image_view rqt_image_view + ``` + A window will appear, where the topic that you want to view can be selected from the drop-down menu on the top-left area of the window. + Refer to each node's documentation below to find out the default output image topic, where applicable, and select it on the drop-down menu of rqt_image_view. + +- ### Echo node output + All OpenDR nodes publish some kind of detection message, which can be echoed by running the following command in a new terminal: + ```shell + rostopic echo /topic_name + ``` + You can find out the default topic name for each node, in its documentation below. + - ### Increase performance by disabling output -Optionally, nodes can be modified via command line arguments, which are presented for each node separately below. -Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc. -When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic. -This disables publishing on that topic, forgoing some operations in the node, which might increase its performance. + Optionally, nodes can be modified via command line arguments, which are presented for each node separately below. 
+ Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc. + When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic. + This disables publishing on that topic, forgoing some operations in the node, which might increase its performance. -_An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._ + _An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._ ---- @@ -36,9 +51,9 @@ _An example would be to disable the output annotated image topic in a node when ### Pose Estimation ROS Node You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. -The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md). +The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md). The node publishes the detected poses in [OpenDR's 2D pose message format](../ros_bridge/msg/OpenDRPose2D.msg). -Instructions for basic usage and visualization of results: +#### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). @@ -54,7 +69,9 @@ Instructions for basic usage and visualization of results: - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - `--accelerate`: Acceleration flag that causes pose estimation to run faster but with less accuracy -3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_pose_annotated` or by running `rostopic echo /opendr/poses`, where the node publishes the detected poses in [OpenDR's 2D pose message format](../ros_bridge/msg/OpenDRPose2D.msg). +3. Default output topics: + - Output images: `/opendr/image_pose_annotated` + - Detection messages:`/opendr/poses` ### Fall Detection ROS Node From 3af485d1eab53242803c624c197aeed338b8dd19 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Thu, 13 Oct 2022 13:10:41 +0300 Subject: [PATCH 21/57] Added missing space in pose estimation docs --- projects/opendr_ws/src/perception/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index d37e1ea084..5224e15a4d 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -71,7 +71,7 @@ The node makes use of the toolkit's [pose estimation tool](../../../../src/opend 3. 
Default output topics: - Output images: `/opendr/image_pose_annotated` - - Detection messages:`/opendr/poses` + - Detection messages: `/opendr/poses` ### Fall Detection ROS Node From 5d91c6addaeb3ec79e83dc048aaa48335debe5e3 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Thu, 13 Oct 2022 14:58:52 +0300 Subject: [PATCH 22/57] Updates on formatting for all other applicable nodes' docs and minor fixes --- projects/opendr_ws/src/perception/README.md | 69 +++++++++++++++------ 1 file changed, 49 insertions(+), 20 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 5224e15a4d..727cca768e 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -72,6 +72,8 @@ The node makes use of the toolkit's [pose estimation tool](../../../../src/opend 3. Default output topics: - Output images: `/opendr/image_pose_annotated` - Detection messages: `/opendr/poses` + + For viewing the output, refer to the [notes above.](#notes) ### Fall Detection ROS Node @@ -81,11 +83,11 @@ Fall detection uses the toolkit's pose estimation tool internally. -Instructions for basic usage and visualization of results: +#### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. You are then ready to start the fall detection node +2. You are then ready to start the fall detection node: ```shell rosrun perception fall_detection.py @@ -98,15 +100,19 @@ Instructions for basic usage and visualization of results: - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - `--accelerate`: Acceleration flag that causes pose estimation that runs internally to run faster but with less accuracy -3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_fall_annotated` or by running `rostopic echo /opendr/fallen`, where the node publishes bounding boxes of detected fallen poses. +3. Default output topics: + - Output images: `/opendr/image_fallen_annotated` + - Detection messages: `/opendr/fallen` + + For viewing the output, refer to the [notes above.](#notes) ### Face Detection ROS Node -The face detection ROS node supports both the ResNet and MobileNet versions, of latter of which performs mask recognition as well. +The face detection ROS node supports both the ResNet and MobileNet versions, the latter of which performs masked face detection as well. You can find the face detection ROS node python script [here](./scripts/face_detection_retinaface.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [face detection tool](../../../../src/opendr/perception/object_detection_2d/retinaface/retinaface_learner.py) whose documentation can be found [here](../../../../docs/reference/face-detection-2d-retinaface.md). -Instructions for basic usage and visualization of results: +#### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). 
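Before starting the detection node, it can help to confirm that camera frames are actually arriving; a quick check on the default `usb_cam` topic used throughout this README could look like this:

```shell
# Verify that images are being published and at what rate
rostopic hz /usb_cam/image_raw
```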
@@ -123,18 +129,22 @@ Instructions for basic usage and visualization of results: - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - `--backbone BACKBONE`: Retinaface backbone, options are either 'mnet' or 'resnet', where 'mnet' detects masked faces as well (default=`resnet`) -3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_faces_annotated` or by running `rostopic echo /opendr/faces`, where the node publishes bounding boxes of detected faces. +3. Default output topics: + - Output images: `/opendr/image_faces_annotated` + - Detection messages: `/opendr/faces` + + For viewing the output, refer to the [notes above.](#notes) ### Face Recognition ROS Node You can find the face recognition ROS node python script [here](./scripts/face_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [face recognition tool](../../../../src/opendr/perception/face_recognition/face_recognition_learner.py) whose documentation can be found [here](../../../../docs/reference/face-recognition.md). -Instructions for basic usage and visualization of results: +#### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. You are then ready to start the face recognition node +2. You are then ready to start the face recognition node: ```shell rosrun perception face_recognition.py @@ -149,7 +159,11 @@ Instructions for basic usage and visualization of results: - `--backbone BACKBONE`: Backbone network (default=`mobilefacenet`) - `--dataset_path DATASET_PATH`: Path of the directory where the images of the faces to be recognized are stored (default=`./database`) -3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_face_reco_annotated` or by running `rostopic echo /opendr/face_recognition`. +3. Default output topics: + - Output images: `/opendr/image_face_reco_annotated` + - Detection messages: `/opendr/face_recognition` and `/opendr/face_recognition_id` + + For viewing the output, refer to the [notes above.](#notes) **Notes** @@ -170,12 +184,12 @@ under `/opendr/face_recognition_id`. ### 2D Object Detection ROS Nodes -For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic obejct detectors are SSD, YOLOv3, CenterNet and DETR. +For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic object detectors are SSD, YOLOv3, CenterNet and DETR. You can find the 2D object detection ROS node python scripts here: [SSD node](./scripts/object_detection_2d_ssd.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3.py), [CenterNet node](./scripts/object_detection_2d_centernet.py) and [DETR node](./scripts/object_detection_2d_detr.py), where you can inspect the code and modify it as you wish to fit your needs. 
The nodes makes use of the toolkit's various 2D object detection tools: [SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), [CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py), whose documentation can be found here: [SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), [YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [DETR docs](../../../../docs/reference/detr.md). -Instructions for basic usage and visualization of results: +#### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). @@ -214,7 +228,11 @@ Instructions for basic usage and visualization of results: - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) -3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/image_objects_annotated` or by running `rostopic echo /opendr/objects`, where the bounding boxes alone are published. +3. Default output topics: + - Output images: `/opendr/image_objects_annotated` + - Detection messages: `/opendr/objects` + + For viewing the output, refer to the [notes above.](#notes) ### 2D Object Tracking Deep Sort ROS Node @@ -245,11 +263,11 @@ This will pulbish the dataset images to an `/opendr/dataset_image` topic by defa You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md) and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md). -Instructions for basic usage and visualization of results: +#### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. You are then ready to start the panoptic segmentation node +2. 
You are then ready to start the panoptic segmentation node: ```shell rosrun perception panoptic_segmentation_efficient_ps.py @@ -263,17 +281,21 @@ Instructions for basic usage and visualization of results: - `--visualization_topic VISUALIZATION_TOPIC`: publish the panoptic segmentation map as an RGB image on `VISUALIZATION_TOPIC` or a more detailed overview if using the `--detailed_visualization` flag, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic/rgb_visualization`) - `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated) -3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topics `/opendr/panoptic/semantic`, `/opendr/panoptic/instance` and `/opendr/panoptic/rgb_visualization` or by running `rostopic echo /opendr/panoptic/semantic`, `rostopic echo /opendr/panoptic/instance` and `rostopic echo /opendr/panoptic/rgb_visualization`. +3. Default output topics: + - Output images: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`, `/opendr/panoptic/rgb_visualization` + - Detection messages: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`, `/opendr/panoptic/rgb_visualization` + + For viewing the output, refer to the [notes above.](#notes) ### Semantic Segmentation ROS Node You can find the semantic segmentation ROS node python script [here](./scripts/semantic_segmentation_bisenet.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [semantic segmentation tool](../../../../src/opendr/perception/semantic_segmentation/bisenet/bisenet_learner.py) whose documentation can be found [here](../../../../docs/reference/semantic-segmentation.md). -Instructions for basic usage and visualization of results: +#### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. You are then ready to start the semantic segmentation node +2. You are then ready to start the semantic segmentation node: ```shell rosrun perception semantic_segmentation_bisenet.py @@ -285,7 +307,11 @@ Instructions for basic usage and visualization of results: - `-v or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap_visualization`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) -3. In a new terminal you can view the annotated image stream by running `rosrun rqt_image_view rqt_image_view` and selecting the topic `/opendr/heatmap_visualization` or by running `rostopic echo /opendr/heatmap`. +3. 
Default output topics: + - Output images: `/opendr/heatmap`, `/opendr/heatmap_visualization` + - Detection messages: `/opendr/heatmap` + + For viewing the output, refer to the [notes above.](#notes) **Notes** @@ -322,7 +348,7 @@ You can find the video human activity recognition ROS node python script [here]( The node makes use of the toolkit's video human activity recognition tools which can be found [here for CoX3D](../../../../src/opendr/perception/activity_recognition/cox3d/cox3d_learner.py) and [here for X3D](../../../../src/opendr/perception/activity_recognition/x3d/x3d_learner.py) whose documentation can be found [here](../../../../docs/reference/activity-recognition.md). -Instructions for basic usage and visualization of results: +#### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). @@ -338,8 +364,11 @@ Instructions for basic usage and visualization of results: - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: Topic to which we are publishing the ID of the recognized action (default=`/opendr/human_activity_recognition_description`) - `--model`: Architecture to use for human activity recognition. Choices are "cox3d-s", "cox3d-m", "cox3d-l", "x3d-xs", "x3d-s", "x3d-m", or "x3d-l" (Default: "cox3d-m"). - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + +3. Default output topics: + - Detection messages: `/opendr/human_activity_recognition`, `/opendr/human_activity_recognition_description` -3. In a new terminal you can view predictions by running `rostopic echo /opendr/human_activity_recognition` and `rostopic echo /opendr/human_activity_recognition_description`. + For viewing the output, refer to the [notes above.](#notes) ## RGB + Infrared input From cd66d502c33cc82ac1edc5a8259150485968c8e2 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Fri, 14 Oct 2022 15:38:15 +0300 Subject: [PATCH 23/57] More detailed ros setup instructions --- projects/opendr_ws/README.md | 69 ++++++++++++++------- projects/opendr_ws/src/perception/README.md | 4 +- 2 files changed, 47 insertions(+), 26 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index 80c03f5128..43a768a37e 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -7,34 +7,55 @@ as well the `ROSBridge` class which provides an interface to convert OpenDR data ones similar to CvBridge. You can find more information in the corresponding [documentation](../../docs/reference/rosbridge.md). -## Setup -For running a minimal working example you can follow the instructions below: +## First time setup +For the initial setup you can follow the instructions below: -0. Source the necessary distribution tools: +1. Open a new terminal window and source the necessary distribution tools: + ```shell + source /opt/ros/noetic/setup.bash + ``` + _For convenience, you can add this line to your `.bashrc` so you don't have to source the tools each time you open a terminal window._ +2. Install the following dependencies, required in order to use the OpenDR ROS tools: + ```shell + sudo apt-get install ros-noetic-vision-msgs ros-noetic-geometry-msgs ros-noetic-sensor-msgs ros-noetic-audio-common-msgs + ``` +3. 
Navigate to your OpenDR home directory (`~/opendr`) and activate the OpenDR environment using: + ```shell + source bin/activate.sh + ``` + You need to do this step every time before running an OpenDR node. +4. Navigate into the OpenDR ROS workspace:: + ```shell + cd projects/opendr_ws + ``` +5. (Optional) Most nodes with visual input are set up to run with a default USB camera. If you want to use it install the corresponding package and its dependencies: + ```shell + cd src + git clone https://github.com/ros-drivers/usb_cam + cd .. + rosdep install --from-paths src/ --ignore-src + ``` +6. Build the packages inside the workspace: + ```shell + catkin_make + ``` +7. Before running a node, the ROS master node needs to be running, so in a new terminal repeat step 1. and then: + ```shell + roscore + ``` +8. Return to the original terminal and source the workspace. You are now ready to run an OpenDR ROS node: + ```shell + source devel/setup.bash + ``` - ```source /opt/ros/noetic/setup.bash``` +#### After first time setup +For running OpenDR nodes after you have completed the initial setup, you can skip steps 2. and 5. from the list above. -1. Make sure you are inside opendr_ws -2. If you are planning to use a usb camera for the demos, install the corresponding package and its dependencies: +#### More information +After completing the setup you can read more information on the [perception package README](src/perception/README.md), where you can find a concise list of prerequisites and helpful notes to view the output of the nodes or optimize their performance. -```shell -cd src -git clone https://github.com/ros-drivers/usb_cam -cd .. -rosdep install --from-paths src/ --ignore-src -``` -3. Install the following dependencies, required in order to use the OpenDR ROS tools: -```shell -sudo apt-get install ros-noetic-vision-msgs ros-noetic-geometry-msgs ros-noetic-sensor-msgs ros-noetic-audio-common-msgs -``` -4. Build the packages inside workspace -```shell -catkin_make -``` -5. Source the workspace and you are ready to go! -```shell -source devel/setup.bash -``` +#### Node documentation +You can also take a look at the list of tools [below](#structure) and click on the links to navigate directly to documentation for specific nodes with instructions on how to run and modify them. ## Structure diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 727cca768e..5961d7a208 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -7,11 +7,11 @@ This package contains ROS nodes related to the perception package of OpenDR. ## Prerequisites Before you can run any of the toolkit's ROS nodes, some prerequisites need to be fulfilled: -1. First of all, you need to [set up the required packages and build your workspace.](../../README.md#Setup) +1. First of all, you need to [set up the required packages and build your workspace.](../../README.md#first-time-setup) 2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`. 3. _(Optional for nodes with [RGB input](#rgb-input-nodes))_ - For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install](../../README.md#Setup)). 
You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: + For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install, step 5.](../../README.md#first-time-setup)). You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: ```shell rosrun usb_cam usb_cam_node ``` From 978e35a9a6cca8b82d202143deb51b15b256630f Mon Sep 17 00:00:00 2001 From: tsampazk Date: Fri, 14 Oct 2022 15:41:28 +0300 Subject: [PATCH 24/57] Added skipping of workspace build step --- projects/opendr_ws/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index 43a768a37e..883f968227 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -49,7 +49,7 @@ For the initial setup you can follow the instructions below: ``` #### After first time setup -For running OpenDR nodes after you have completed the initial setup, you can skip steps 2. and 5. from the list above. +For running OpenDR nodes after you have completed the initial setup, you can skip steps 2. and 5. from the list above. You can also skip building the workspace (step 6.) granted it's been already built and no changes were made to the code inside the workspace, e.g. you modified the source code of a node. #### More information After completing the setup you can read more information on the [perception package README](src/perception/README.md), where you can find a concise list of prerequisites and helpful notes to view the output of the nodes or optimize their performance. From cb64d4a31dee6bd1b0a1f796662335a368aacab8 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Fri, 14 Oct 2022 15:59:41 +0300 Subject: [PATCH 25/57] Updated RGBD hand gesture recognition ros node doc --- projects/opendr_ws/src/perception/README.md | 31 ++++++++++++++++----- 1 file changed, 24 insertions(+), 7 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 5961d7a208..e22923385f 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -404,14 +404,31 @@ rosrun perception object_detection_2d_gem.py ## RGBD input ### RGBD Hand Gesture Recognition ROS Node - -A ROS node for performing hand gesture recognition using MobileNetv2 model trained on HANDS dataset. +A ROS node for performing hand gesture recognition using a MobileNetv2 model trained on HANDS dataset. The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2. -Assuming that the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception rgbd_hand_gesture_recognition.py -``` -The predictied classes are published to the topic `/opendr/gestures`. + +You can find the RGBD hand gesture recognition ROS node python script [here](./scripts/rgbd_hand_gesture_recognition.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [hand gesture recognition tool](../../../../src/opendr/perception/multimodal_human_centric/rgbd_hand_gesture_learner/rgbd_hand_gesture_learner.py) whose documentation can be found [here](../../../../docs/reference/rgbd-hand-gesture-learner.md). 
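Since the node expects rectified color and depth images from the Kinect v2 drivers mentioned above, it is worth verifying that those streams are available before starting it. The topic names below are the node's defaults and depend on your `iai_kinect2` configuration:

```shell
# List the Kinect v2 topics and check that the expected streams are publishing
rostopic list | grep kinect2
rostopic hz /kinect2/qhd/image_color_rect
rostopic hz /kinect2/qhd/image_depth_rect
```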
+ +#### Instructions for basic usage: + +1. Start the node responsible for publishing images from an RGBD camera. Remember to modify the input topics using the arguments in step 2. if needed. + +2. You are then ready to start the hand gesture recognition node: + ```shell + rosrun perception rgbd_hand_gesture_recognition.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `--input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/kinect2/qhd/image_color_rect`) + - `--input_depth_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input depth image (default=`/kinect2/qhd/image_depth_rect`) + - `--output_gestures_topic OUTPUT_GESTURES_TOPIC`: Topic name for predicted gesture class (default=`/opendr/gestures`) + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + +3. Default output topics: + - Detection messages:`/opendr/gestures` + + For viewing the output, refer to the [notes above.](#notes) ---- ## Point cloud input From f4c9de4618f858fe1152b853f941b6e60620569e Mon Sep 17 00:00:00 2001 From: tsampazk Date: Fri, 14 Oct 2022 17:44:40 +0300 Subject: [PATCH 26/57] Updated speech command recognition ros node doc and some minor fixes --- projects/opendr_ws/src/perception/README.md | 43 ++++++++++++++------- 1 file changed, 30 insertions(+), 13 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index e22923385f..5b1cf7cc3c 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -362,7 +362,7 @@ The node makes use of the toolkit's video human activity recognition tools which - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: Topic to which we are publishing the recognized activity (default=`"/opendr/human_activity_recognition"`) - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: Topic to which we are publishing the ID of the recognized action (default=`/opendr/human_activity_recognition_description`) - - `--model`: Architecture to use for human activity recognition. Choices are "cox3d-s", "cox3d-m", "cox3d-l", "x3d-xs", "x3d-s", "x3d-m", or "x3d-l" (Default: "cox3d-m"). + - `--model`: Architecture to use for human activity recognition. Choices are `cox3d-s`, `cox3d-m`, `cox3d-l`, `x3d-xs`, `x3d-s`, `x3d-m`, or `x3d-l` (default=`cox3d-m`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) 3. 
Default output topics: @@ -421,7 +421,7 @@ The node makes use of the toolkit's [hand gesture recognition tool](../../../../ The following optional arguments are available: - `-h, --help`: show a help message and exit - `--input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/kinect2/qhd/image_color_rect`) - - `--input_depth_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input depth image (default=`/kinect2/qhd/image_depth_rect`) + - `--input_depth_image_topic INPUT_DEPTH_IMAGE_TOPIC`: topic name for input depth image (default=`/kinect2/qhd/image_depth_rect`) - `--output_gestures_topic OUTPUT_GESTURES_TOPIC`: Topic name for predicted gesture class (default=`/opendr/gestures`) - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) @@ -497,19 +497,36 @@ with `ECG_TOPIC` specifying the ROS topic to which the node will subscribe, and ## Audio input ### Speech Command Recognition ROS Node - + A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset. -Assuming that the OpenDR catkin workspace has been sourced, the node can be started with: -```shell -rosrun perception speech_command_recognition.py INPUT_AUDIO_TOPIC -``` -The following optional arguments are available: -- `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds, default value **1.5** -- `--model MODEL`: choose the model to use: `matchboxnet` (default value), `edgespeechnets` or `quad_selfonn` -- `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server -The predictions (class id and confidence) are published to the topic `/opendr/speech_recognition`. -**Note:** EdgeSpeechNets currently does not have a pretrained model available for download, only local files may be used. +You can find the speech command recognition ROS node python script [here](./scripts/speech_command_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's speech command recognition tools: [EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py) whose documentation can be found here: [EdgeSpeechNet docs](../../../../docs/reference/edgespeechnets.md), [MatchboxNet docs](../../../../docs/reference/matchboxnet.md), [Quadratic SelfONN docs](../../../../docs/reference/quadratic-selfonn.md). + +#### Instructions for basic usage: + +1. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 2. if needed. + +2. 
You are then ready to start the face detection node + + ```shell + rosrun perception speech_command_recognition.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `--input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`) + - `--output_speech_command_topic OUTPUT_SPEECH_COMMAND_TOPIC`: topic name for speech command output (default=`/opendr/speech_recognition`) + - `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds, default value **1.5** + - `--model MODEL`: the model to use, choices are `matchboxnet`, `edgespeechnets` or `quad_selfonn` (default=`matchboxnet`) + - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server + +3. Default output topics: + - Detection messages, class id and confidence: `/opendr/speech_recognition` + + For viewing the output, refer to the [notes above.](#notes) + +**Notes** + +EdgeSpeechNets currently does not have a pretrained model available for download, only local files may be used. ---- ## Dataset ROS Nodes From ee198e05977ad47434fd2395a4d70d6817542f03 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Mon, 17 Oct 2022 13:58:55 +0300 Subject: [PATCH 27/57] Updated heart anomaly detection ros node doc --- projects/opendr_ws/src/perception/README.md | 32 +++++++++++++++++---- 1 file changed, 26 insertions(+), 6 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 5b1cf7cc3c..429b03b8e2 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -486,12 +486,32 @@ This will pulbish the dataset images to an `/opendr/dataset_image` topic by defa ## Biosignal input ### Heart Anomaly Detection ROS Node - -A ROS node for performing heart anomaly (atrial fibrillation) detection from ecg data using GRU or ANBOF models trained on AF dataset. Assuming that the OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception heart_anomaly_detection.py ECG_TOPIC MODEL -``` -with `ECG_TOPIC` specifying the ROS topic to which the node will subscribe, and `MODEL` set to either *gru* or *anbof*. The predictied classes are published to the topic `/opendr/heartanomaly`. + +A ROS node for performing heart anomaly (atrial fibrillation) detection from ECG data using GRU or ANBOF models trained on AF dataset. + +You can find the heart anomaly detection ROS node python script [here](./scripts/heart_anomaly_detection.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool](../../../../src/opendr/perception/heart_anomaly_detection/attention_neural_bag_of_feature/attention_neural_bag_of_feature_learner.py) and [GRU tool](../../../../src/opendr/perception/heart_anomaly_detection/gated_recurrent_unit/gated_recurrent_unit_learner.py), whose documentation can be found here: [ANBOF docs](../../../../docs/reference/attention-neural-bag-of-feature-learner.md) and [GRU docs](../../../../docs/reference/gated-recurrent-unit-learner.md). + +#### Instructions for basic usage: + +1. Start the node responsible for publishing ECG data. + +2. 
You are then ready to start the heart anomaly detection node: + + ```shell + rosrun perception heart_anomaly_detection.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `--input_ecg_topic INPUT_ECG_TOPIC`: topic name for input ECG data (default=`/ecg/ecg`) + - `--output_heart_anomaly_topic OUTPUT_HEART_ANOMALY_TOPIC`: topic name for heart anomaly detection (default=`/opendr/heart_anomaly`) + - `--model MODEL`: the model to use, choices are `anbof` or `gru` (default=`anbof`) + - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + +3. Default output topics: + - Detection messages: `/opendr/heart_anomaly` + + For viewing the output, refer to the [notes above.](#notes) ---- ## Audio input From a2e49d87a1509abb920f0e32f50f7a63d904dcb6 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Mon, 17 Oct 2022 15:54:03 +0300 Subject: [PATCH 28/57] Reordered audio section and added RGB + Audio section --- projects/opendr_ws/README.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index 883f968227..0ce324ba94 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -78,11 +78,13 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor 1. [End-to-End Multi-Modal Object Detection (GEM)](src/perception/README.md#gem-ros-node) ## RGBD input 1. [RGBD Hand Gesture Recognition](src/perception/README.md#rgbd-hand-gesture-recognition-ros-node) +## RGB + Audio input +1. [Audiovisual Emotion Recognition](src/perception/README.md#audiovisual-emotion-recognition-ros-node) +## Audio input +1. [Speech Command Recognition](src/perception/README.md#speech-command-recognition-ros-node) ## Point cloud input 1. [3D Object Detection Voxel](src/perception/README.md#3d-object-detection-voxel-ros-node) 2. [3D Object Tracking AB3DMOT](src/perception/README.md#3d-object-tracking-ab3dmot-ros-node) 3. [2D Object Tracking FairMOT](src/perception/README.md#2d-object-tracking-fairmot-ros-node) ## Biosignal input 1. [Heart Anomaly Detection](src/perception/README.md#heart-anomaly-detection-ros-node) -## Audio input -1. [Speech Command Recognition](src/perception/README.md#speech-command-recognition-ros-node) From c40f010cba56cdeb3ed83f3b45c73d2c863d71b6 Mon Sep 17 00:00:00 2001 From: tsampazk Date: Mon, 17 Oct 2022 15:54:37 +0300 Subject: [PATCH 29/57] Added audiovisual emotion reco missing doc and reordered audio section --- projects/opendr_ws/src/perception/README.md | 99 +++++++++++++-------- 1 file changed, 64 insertions(+), 35 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 429b03b8e2..669d576d39 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -430,6 +430,70 @@ The node makes use of the toolkit's [hand gesture recognition tool](../../../../ For viewing the output, refer to the [notes above.](#notes) +---- +## RGB + Audio input + +### Audiovisual Emotion Recognition ROS Node + +You can find the audiovisual emotion recognition ROS node python script [here](./scripts/audiovisual_emotion_recognition.py) to inspect the code and modify it as you wish to fit your needs. 
The node makes use of the toolkit's [audiovisual emotion recognition tool](../../../../src/opendr/perception/multimodal_human_centric/audiovisual_emotion_learner/avlearner.py), whose documentation can be found [here](../../../../docs/reference/audiovisual-emotion-recognition-learner.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+2. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 3, if needed.
+3. You are then ready to start the audiovisual emotion recognition node:
+
+   ```shell
+   rosrun perception audiovisual_emotion_recognition.py
+   ```
+   The following optional arguments are available:
+   - `-h, --help`: show a help message and exit
+   - `--input_video_topic INPUT_VIDEO_TOPIC`: topic name for input video, expects detected face of size 224x224 (default=`/usb_cam/image_raw`)
+   - `--input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`)
+   - `--output_emotions_topic OUTPUT_EMOTIONS_TOPIC`: topic to which we are publishing the predicted emotion (default=`/opendr/audiovisual_emotion`)
+   - `--buffer_size BUFFER_SIZE`: length of audio and video in seconds (default=`3.6`)
+   - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server
+
+4. Default output topics:
+   - Detection messages: `/opendr/audiovisual_emotion`
+
+   For viewing the output, refer to the [notes above.](#notes)
+
+----
+## Audio input
+
+### Speech Command Recognition ROS Node
+
+A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset.
+
+You can find the speech command recognition ROS node python script [here](./scripts/speech_command_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's speech command recognition tools: [EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py) whose documentation can be found here: [EdgeSpeechNet docs](../../../../docs/reference/edgespeechnets.md), [MatchboxNet docs](../../../../docs/reference/matchboxnet.md), [Quadratic SelfONN docs](../../../../docs/reference/quadratic-selfonn.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 2, if needed.
+
+2. 
You are then ready to start the face detection node + + ```shell + rosrun perception speech_command_recognition.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `--input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`) + - `--output_speech_command_topic OUTPUT_SPEECH_COMMAND_TOPIC`: topic name for speech command output (default=`/opendr/speech_recognition`) + - `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds (default=`1.5`) + - `--model MODEL`: the model to use, choices are `matchboxnet`, `edgespeechnets` or `quad_selfonn` (default=`matchboxnet`) + - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server + +3. Default output topics: + - Detection messages, class id and confidence: `/opendr/speech_recognition` + + For viewing the output, refer to the [notes above.](#notes) + +**Notes** + +EdgeSpeechNets currently does not have a pretrained model available for download, only local files may be used. + ---- ## Point cloud input @@ -513,41 +577,6 @@ The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool]( For viewing the output, refer to the [notes above.](#notes) ----- -## Audio input - -### Speech Command Recognition ROS Node - -A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset. - -You can find the speech command recognition ROS node python script [here](./scripts/speech_command_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's speech command recognition tools: [EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py) whose documentation can be found here: [EdgeSpeechNet docs](../../../../docs/reference/edgespeechnets.md), [MatchboxNet docs](../../../../docs/reference/matchboxnet.md), [Quadratic SelfONN docs](../../../../docs/reference/quadratic-selfonn.md). - -#### Instructions for basic usage: - -1. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 2. if needed. - -2. You are then ready to start the face detection node - - ```shell - rosrun perception speech_command_recognition.py - ``` - The following optional arguments are available: - - `-h, --help`: show a help message and exit - - `--input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`) - - `--output_speech_command_topic OUTPUT_SPEECH_COMMAND_TOPIC`: topic name for speech command output (default=`/opendr/speech_recognition`) - - `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds, default value **1.5** - - `--model MODEL`: the model to use, choices are `matchboxnet`, `edgespeechnets` or `quad_selfonn` (default=`matchboxnet`) - - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server - -3. 
Default output topics: - - Detection messages, class id and confidence: `/opendr/speech_recognition` - - For viewing the output, refer to the [notes above.](#notes) - -**Notes** - -EdgeSpeechNets currently does not have a pretrained model available for download, only local files may be used. - ---- ## Dataset ROS Nodes From f402030e63fdf34b6d5326323814f04ed7f34430 Mon Sep 17 00:00:00 2001 From: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com> Date: Tue, 18 Oct 2022 16:55:42 +0300 Subject: [PATCH 30/57] Added link to csv file with classes-ids for activity recognition --- docs/reference/activity-recognition.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/reference/activity-recognition.md b/docs/reference/activity-recognition.md index 733ba2207e..7c7fae9005 100644 --- a/docs/reference/activity-recognition.md +++ b/docs/reference/activity-recognition.md @@ -2,6 +2,7 @@ The *activity_recognition* module contains the *X3DLearner* and *CoX3DLearner* classes, which inherit from the abstract class *Learner*. +You can find the classes and the corresponding IDs regarding activity recognition [here](https://github.com/opendr-eu/opendr/blob/master/src/opendr/perception/activity_recognition/datasets/kinetics400_classes.csv). ### Class X3DLearner Bases: `engine.learners.Learner` From 50961434d284a7bb3f3c78699befb84759bfbbab Mon Sep 17 00:00:00 2001 From: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com> Date: Tue, 18 Oct 2022 16:57:44 +0300 Subject: [PATCH 31/57] Added link to csv file with class-ids for activity recognition --- projects/opendr_ws/src/perception/README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 669d576d39..6a1d4ff9cd 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -370,6 +370,10 @@ The node makes use of the toolkit's video human activity recognition tools which For viewing the output, refer to the [notes above.](#notes) +**Notes** + +You can find the corresponding IDs regarding activity recognition [here](https://github.com/opendr-eu/opendr/blob/master/src/opendr/perception/activity_recognition/datasets/kinetics400_classes.csv). + ## RGB + Infrared input ### GEM ROS Node From 0aca0cc02d3088632b994b13ddaf26737643c5c5 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Tue, 8 Nov 2022 14:22:04 +0200 Subject: [PATCH 32/57] Minor improvements --- projects/opendr_ws/README.md | 9 +++++++-- projects/opendr_ws/src/perception/README.md | 5 +++-- 2 files changed, 10 insertions(+), 4 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index 0ce324ba94..52852da48c 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -10,6 +10,8 @@ ones similar to CvBridge. You can find more information in the corresponding [do ## First time setup For the initial setup you can follow the instructions below: +0. Make sure ROS noetic is installed: http://wiki.ros.org/noetic/Installation/Ubuntu (desktop full install) + 1. Open a new terminal window and source the necessary distribution tools: ```shell source /opt/ros/noetic/setup.bash @@ -39,14 +41,15 @@ For the initial setup you can follow the instructions below: ```shell catkin_make ``` -7. Before running a node, the ROS master node needs to be running, so in a new terminal repeat step 1. and then: +7. 
Before running a node, the ROS master node needs to be running, so in a new terminal repeat step 1. and then run: ```shell roscore ``` -8. Return to the original terminal and source the workspace. You are now ready to run an OpenDR ROS node: +8. Return to the original terminal and source the workspace: ```shell source devel/setup.bash ``` + You are now ready to run an OpenDR ROS node. Keep reading below. #### After first time setup For running OpenDR nodes after you have completed the initial setup, you can skip steps 2. and 5. from the list above. You can also skip building the workspace (step 6.) granted it's been already built and no changes were made to the code inside the workspace, e.g. you modified the source code of a node. @@ -57,6 +60,8 @@ After completing the setup you can read more information on the [perception pack #### Node documentation You can also take a look at the list of tools [below](#structure) and click on the links to navigate directly to documentation for specific nodes with instructions on how to run and modify them. +**For first time users we suggest reading the introductory sections (prerequisites and notes) first.** + ## Structure Currently, apart from tools, opendr_ws contains the following ROS nodes (categorized according to the input they receive): diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 6a1d4ff9cd..c9351212ea 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -8,7 +8,7 @@ This package contains ROS nodes related to the perception package of OpenDR. Before you can run any of the toolkit's ROS nodes, some prerequisites need to be fulfilled: 1. First of all, you need to [set up the required packages and build your workspace.](../../README.md#first-time-setup) -2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`. +2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`, if you haven't already done so. 3. _(Optional for nodes with [RGB input](#rgb-input-nodes))_ For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install, step 5.](../../README.md#first-time-setup)). You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: @@ -51,7 +51,8 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be ### Pose Estimation ROS Node You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. -The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md). The node publishes the detected poses in [OpenDR's 2D pose message format](../ros_bridge/msg/OpenDRPose2D.msg). +The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md). 
+The node publishes the detected poses in [OpenDR's 2D pose message format](../ros_bridge/msg/OpenDRPose2D.msg), which saves a list of [OpenDR's keypoint message format](../ros_bridge/msg/OpenDRPose2DKeypoint.msg). #### Instructions for basic usage: From 457eb7a1acb7c695430613bf4dfee38d34cdba51 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 9 Nov 2022 16:24:00 +0200 Subject: [PATCH 33/57] Several minor fixes and landmark-based facial expression recognition --- projects/opendr_ws/src/perception/README.md | 108 +++++++++++++------- 1 file changed, 73 insertions(+), 35 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index c9351212ea..b3bc76441e 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -11,11 +11,13 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be 2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`, if you haven't already done so. 3. _(Optional for nodes with [RGB input](#rgb-input-nodes))_ - For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install, step 5.](../../README.md#first-time-setup)). You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: + For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install, step 5.](../../README.md#first-time-setup)). + You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: ```shell rosrun usb_cam usb_cam_node ``` - By default, the USB cam node publishes images on `/usb_cam/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument. As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.** + By default, the USB cam node publishes images on `/usb_cam/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument. 
+ As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.** --- @@ -67,8 +69,8 @@ The node publishes the detected poses in [OpenDR's 2D pose message format](../ro - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`) - - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - - `--accelerate`: Acceleration flag that causes pose estimation to run faster but with less accuracy + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--accelerate`: acceleration flag that causes pose estimation to run faster but with less accuracy 3. Default output topics: - Output images: `/opendr/image_pose_annotated` @@ -98,8 +100,8 @@ Fall detection uses the toolkit's pose estimation tool internally. - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_fallen_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/fallen`) - - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - - `--accelerate`: Acceleration flag that causes pose estimation that runs internally to run faster but with less accuracy + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--accelerate`: acceleration flag that causes pose estimation that runs internally to run faster but with less accuracy 3. Default output topics: - Output images: `/opendr/image_fallen_annotated` @@ -111,7 +113,8 @@ Fall detection uses the toolkit's pose estimation tool internally. The face detection ROS node supports both the ResNet and MobileNet versions, the latter of which performs masked face detection as well. -You can find the face detection ROS node python script [here](./scripts/face_detection_retinaface.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [face detection tool](../../../../src/opendr/perception/object_detection_2d/retinaface/retinaface_learner.py) whose documentation can be found [here](../../../../docs/reference/face-detection-2d-retinaface.md). +You can find the face detection ROS node python script [here](./scripts/face_detection_retinaface.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [face detection tool](../../../../src/opendr/perception/object_detection_2d/retinaface/retinaface_learner.py) whose documentation can be found [here](../../../../docs/reference/face-detection-2d-retinaface.md). 
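As a quick illustration of the backbone option mentioned above, the node could be started with the MobileNet backbone so that masked faces are detected as well; this is only a sketch based on the `--backbone` argument documented in step 2 below:

```shell
rosrun perception face_detection_retinaface.py --backbone mnet
```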
#### Instructions for basic usage: @@ -127,8 +130,8 @@ You can find the face detection ROS node python script [here](./scripts/face_det - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_faces_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/faces`) - - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - - `--backbone BACKBONE`: Retinaface backbone, options are either 'mnet' or 'resnet', where 'mnet' detects masked faces as well (default=`resnet`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--backbone BACKBONE`: retinaface backbone, options are either `mnet` or `resnet`, where `mnet` detects masked faces as well (default=`resnet`) 3. Default output topics: - Output images: `/opendr/image_faces_annotated` @@ -156,9 +159,9 @@ The node makes use of the toolkit's [face recognition tool](../../../../src/open - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_face_reco_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition`) - `-id or --detections_id_topic DETECTIONS_ID_TOPIC`: topic name for detection ID messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition_id`) - - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) - - `--backbone BACKBONE`: Backbone network (default=`mobilefacenet`) - - `--dataset_path DATASET_PATH`: Path of the directory where the images of the faces to be recognized are stored (default=`./database`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--backbone BACKBONE`: backbone network (default=`mobilefacenet`) + - `--dataset_path DATASET_PATH`: path of the directory where the images of the faces to be recognized are stored (default=`./database`) 3. Default output topics: - Output images: `/opendr/image_face_reco_annotated` @@ -187,8 +190,11 @@ under `/opendr/face_recognition_id`. For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic object detectors are SSD, YOLOv3, CenterNet and DETR. -You can find the 2D object detection ROS node python scripts here: [SSD node](./scripts/object_detection_2d_ssd.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3.py), [CenterNet node](./scripts/object_detection_2d_centernet.py) and [DETR node](./scripts/object_detection_2d_detr.py), where you can inspect the code and modify it as you wish to fit your needs. 
-The nodes makes use of the toolkit's various 2D object detection tools: [SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), [CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py), whose documentation can be found here: [SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), [YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [DETR docs](../../../../docs/reference/detr.md). +You can find the 2D object detection ROS node python scripts here: [SSD node](./scripts/object_detection_2d_ssd.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3.py), [CenterNet node](./scripts/object_detection_2d_centernet.py) and [DETR node](./scripts/object_detection_2d_detr.py), +where you can inspect the code and modify it as you wish to fit your needs. +The nodes makes use of the toolkit's various 2D object detection tools: [SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), +[CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py), whose documentation can be found here: [SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), +[YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [DETR docs](../../../../docs/reference/detr.md). #### Instructions for basic usage: @@ -262,7 +268,8 @@ This will pulbish the dataset images to an `/opendr/dataset_image` topic by defa ### Panoptic Segmentation ROS Node You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps.py) to inspect the code and modify it as you wish to fit your needs. -The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md) and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md). +The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md) +and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md). #### Instructions for basic usage: @@ -290,7 +297,8 @@ The node makes use of the toolkit's [panoptic segmentation tool](../../../../src ### Semantic Segmentation ROS Node -You can find the semantic segmentation ROS node python script [here](./scripts/semantic_segmentation_bisenet.py) to inspect the code and modify it as you wish to fit your needs. 
The node makes use of the toolkit's [semantic segmentation tool](../../../../src/opendr/perception/semantic_segmentation/bisenet/bisenet_learner.py) whose documentation can be found [here](../../../../docs/reference/semantic-segmentation.md). +You can find the semantic segmentation ROS node python script [here](./scripts/semantic_segmentation_bisenet.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [semantic segmentation tool](../../../../src/opendr/perception/semantic_segmentation/bisenet/bisenet_learner.py) whose documentation can be found [here](../../../../docs/reference/semantic-segmentation.md). #### Instructions for basic usage: @@ -306,7 +314,7 @@ You can find the semantic segmentation ROS node python script [here](./scripts/s - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: topic to which we are publishing the heatmap in the form of a ROS image containing class IDs, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap`) - `-v or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap_visualization`) - - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) 3. Default output topics: - Output images: `/opendr/heatmap`, `/opendr/heatmap_visualization` @@ -323,17 +331,40 @@ On the table below you can find the detectable classes and their corresponding I | **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | ### Landmark-based Facial Expression Recognition ROS Node - + A ROS node for performing Landmark-based Facial Expression Recognition using the pretrained model PST-BLN on AFEW, CK+ or Oulu-CASIA datasets. -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception landmark_based_facial_expression_recognition.py -``` -The predictied class id and confidence is published under the topic name `/opendr/landmark_based_expression_recognition`, and the human-readable class name under `/opendr/landmark_based_expression_recognition_description`. + +You can find the landmark-based facial expression recognition ROS node python script [here](./scripts/landmark_based_facial_expression_recognition.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's landmark-based facial expression recognition tool which can be found [here](../../../../src/opendr/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/progressive_spatio_temporal_bln_learner.py) +whose documentation can be found [here](../../../../docs/reference/landmark-based-facial-expression-recognition.md). + +#### Instructions for basic usage: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). + +2. 
You are then ready to start the landmark-based facial expression recognition node: + + ```shell + rosrun perception landmark_based_facial_expression_recognition.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) + - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic to which we are publishing the recognized facial expression category info, `None` to stop the node from publishing on this topic (default=`"/opendr/landmark_expression_recognition"`) + - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: topic to which we are publishing the description of the recognized facial expression, `None` to stop the node from publishing on this topic (default=`/opendr/landmark_expression_recognition_description`) + - `--model`: architecture to use for facial expression recognition, options are `pstbln_ck+`, `pstbln_casia`, `pstbln_afew` (default=`pstbln_afew`) + - `-s --shape_predictor SHAPE_PREDICTOR`: shape predictor (landmark_extractor) to use (default=`./predictor_path`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + +3. Default output topics: + - Detection messages: `/opendr/landmark_expression_recognition`, `/opendr/landmark_expression_recognition_description` + + For viewing the output, refer to the [notes above.](#notes) ### Skeleton-based Human Action Recognition ROS Node -A ROS node for performing Skeleton-based Human Action Recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. The human body poses of the image are first extracted by the light-weight Openpose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized. +A ROS node for performing Skeleton-based Human Action Recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. +The human body poses of the image are first extracted by the light-weight Openpose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized. Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell rosrun perception skeleton_based_action_recognition.py @@ -361,10 +392,10 @@ The node makes use of the toolkit's video human activity recognition tools which The following optional arguments are available: - `-h, --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: Topic to which we are publishing the recognized activity (default=`"/opendr/human_activity_recognition"`) - - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: Topic to which we are publishing the ID of the recognized action (default=`/opendr/human_activity_recognition_description`) - - `--model`: Architecture to use for human activity recognition. 
Choices are `cox3d-s`, `cox3d-m`, `cox3d-l`, `x3d-xs`, `x3d-s`, `x3d-m`, or `x3d-l` (default=`cox3d-m`) - - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic to which we are publishing the recognized activity, `None` to stop the node from publishing on this topic (default=`"/opendr/human_activity_recognition"`) + - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic to which we are publishing the ID of the recognized action, `None` to stop the node from publishing on this topic (default=`/opendr/human_activity_recognition_description`) + - `--model`: architecture to use for human activity recognition, options are `cox3d-s`, `cox3d-m`, `cox3d-l`, `x3d-xs`, `x3d-s`, `x3d-m`, or `x3d-l` (default=`cox3d-m`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) 3. Default output topics: - Detection messages: `/opendr/human_activity_recognition`, `/opendr/human_activity_recognition_description` @@ -427,8 +458,8 @@ The node makes use of the toolkit's [hand gesture recognition tool](../../../../ - `-h, --help`: show a help message and exit - `--input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/kinect2/qhd/image_color_rect`) - `--input_depth_image_topic INPUT_DEPTH_IMAGE_TOPIC`: topic name for input depth image (default=`/kinect2/qhd/image_depth_rect`) - - `--output_gestures_topic OUTPUT_GESTURES_TOPIC`: Topic name for predicted gesture class (default=`/opendr/gestures`) - - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--output_gestures_topic OUTPUT_GESTURES_TOPIC`: topic name for predicted gesture class (default=`/opendr/gestures`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) 3. Default output topics: - Detection messages:`/opendr/gestures` @@ -440,7 +471,8 @@ The node makes use of the toolkit's [hand gesture recognition tool](../../../../ ### Audiovisual Emotion Recognition ROS Node -You can find the audiovisual emotion recognition ROS node python script [here](./scripts/audiovisual_emotion_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [audiovisual emotion recognition tool](../../../../src/opendr/perception/multimodal_human_centric/audiovisual_emotion_learner/avlearner.py), whose documentation can be found [here](../../../../docs/reference/audiovisual-emotion-recognition-learner.md). +You can find the audiovisual emotion recognition ROS node python script [here](./scripts/audiovisual_emotion_recognition.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [audiovisual emotion recognition tool](../../../../src/opendr/perception/multimodal_human_centric/audiovisual_emotion_learner/avlearner.py), whose documentation can be found [here](../../../../docs/reference/audiovisual-emotion-recognition-learner.md). #### Instructions for basic usage: @@ -471,7 +503,9 @@ You can find the audiovisual emotion recognition ROS node python script [here](. A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset. 
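
For a quick end-to-end check, the sketch below shows one way to run this node against a live audio stream and inspect its output. It assumes audio is already being published on `/audio/audio` and uses only the arguments and default topics documented in the instructions that follow.

```shell
# Start the speech command recognition node, explicitly selecting the MatchboxNet model
rosrun perception speech_command_recognition.py --input_audio_topic /audio/audio --model matchboxnet

# In a separate terminal, print the recognized command class and confidence
rostopic echo /opendr/speech_recognition
```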
-You can find the speech command recognition ROS node python script [here](./scripts/speech_command_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's speech command recognition tools: [EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py) whose documentation can be found here: [EdgeSpeechNet docs](../../../../docs/reference/edgespeechnets.md), [MatchboxNet docs](../../../../docs/reference/matchboxnet.md), [Quadratic SelfONN docs](../../../../docs/reference/quadratic-selfonn.md). +You can find the speech command recognition ROS node python script [here](./scripts/speech_command_recognition.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's speech command recognition tools: [EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py) whose documentation can be found here: +[EdgeSpeechNet docs](../../../../docs/reference/edgespeechnets.md), [MatchboxNet docs](../../../../docs/reference/matchboxnet.md), [Quadratic SelfONN docs](../../../../docs/reference/quadratic-selfonn.md). #### Instructions for basic usage: @@ -535,7 +569,9 @@ This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` to ### 2D Object Tracking FairMOT ROS Node -A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) +A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. +The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). +Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: ```shell rosrun perception object_tracking_2d_fair_mot.py @@ -559,7 +595,9 @@ This will pulbish the dataset images to an `/opendr/dataset_image` topic by defa A ROS node for performing heart anomaly (atrial fibrillation) detection from ECG data using GRU or ANBOF models trained on AF dataset. You can find the heart anomaly detection ROS node python script [here](./scripts/heart_anomaly_detection.py) to inspect the code and modify it as you wish to fit your needs. 
-The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool](../../../../src/opendr/perception/heart_anomaly_detection/attention_neural_bag_of_feature/attention_neural_bag_of_feature_learner.py) and [GRU tool](../../../../src/opendr/perception/heart_anomaly_detection/gated_recurrent_unit/gated_recurrent_unit_learner.py), whose documentation can be found here: [ANBOF docs](../../../../docs/reference/attention-neural-bag-of-feature-learner.md) and [GRU docs](../../../../docs/reference/gated-recurrent-unit-learner.md). +The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool](../../../../src/opendr/perception/heart_anomaly_detection/attention_neural_bag_of_feature/attention_neural_bag_of_feature_learner.py) and +[GRU tool](../../../../src/opendr/perception/heart_anomaly_detection/gated_recurrent_unit/gated_recurrent_unit_learner.py), whose documentation can be found here: +[ANBOF docs](../../../../docs/reference/attention-neural-bag-of-feature-learner.md) and [GRU docs](../../../../docs/reference/gated-recurrent-unit-learner.md). #### Instructions for basic usage: @@ -575,7 +613,7 @@ The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool]( - `--input_ecg_topic INPUT_ECG_TOPIC`: topic name for input ECG data (default=`/ecg/ecg`) - `--output_heart_anomaly_topic OUTPUT_HEART_ANOMALY_TOPIC`: topic name for heart anomaly detection (default=`/opendr/heart_anomaly`) - `--model MODEL`: the model to use, choices are `anbof` or `gru` (default=`anbof`) - - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) 3. Default output topics: - Detection messages: `/opendr/heart_anomaly` From 0ed9f80f8ba96f777b0d5ee5b69a4dcb1f256bea Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Thu, 10 Nov 2022 12:55:16 +0200 Subject: [PATCH 34/57] Skeleton-based human action recognition and minor fixes --- projects/opendr_ws/src/perception/README.md | 46 ++++++++++++++++----- 1 file changed, 35 insertions(+), 11 deletions(-) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index b3bc76441e..9ea85ac3a2 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -332,7 +332,7 @@ On the table below you can find the detectable classes and their corresponding I ### Landmark-based Facial Expression Recognition ROS Node -A ROS node for performing Landmark-based Facial Expression Recognition using the pretrained model PST-BLN on AFEW, CK+ or Oulu-CASIA datasets. +A ROS node for performing landmark-based facial expression recognition using the pretrained model PST-BLN on AFEW, CK+ or Oulu-CASIA datasets. You can find the landmark-based facial expression recognition ROS node python script [here](./scripts/landmark_based_facial_expression_recognition.py) to inspect the code and modify it as you wish to fit your needs. 
The node makes use of the toolkit's landmark-based facial expression recognition tool which can be found [here](../../../../src/opendr/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/progressive_spatio_temporal_bln_learner.py) @@ -362,19 +362,43 @@ whose documentation can be found [here](../../../../docs/reference/landmark-base For viewing the output, refer to the [notes above.](#notes) ### Skeleton-based Human Action Recognition ROS Node - -A ROS node for performing Skeleton-based Human Action Recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. -The human body poses of the image are first extracted by the light-weight Openpose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized. -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception skeleton_based_action_recognition.py -``` -The predictied class id and confidence is published under the topic name `/opendr/skeleton_based_action_recognition`, and the human-readable class name under `/opendr/skeleton_based_action_recognition_description`. -Besides, the annotated image is published in `/opendr/image_pose_annotated` as well as the corresponding poses in `/opendr/poses`. + +A ROS node for performing skeleton-based human action recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. +The human body poses of the image are first extracted by the lightweight OpenPose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized. + +You can find the skeleton-based human action recognition ROS node python script [here](./scripts/skeleton_based_action_recognition.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's skeleton-based human action recognition tool which can be found [here for ST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py) +and [here for PST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py) +whose documentation can be found [here](../../../../docs/reference/skeleton-based-action-recognition.md). + +#### Instructions for basic usage: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). + +2. 
You are then ready to start the skeleton-based human action recognition node: + + ```shell + rosrun perception skeleton_based_action_recognition.py + ``` + The following optional arguments are available: + - `-h, --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) + - `-c or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic name for recognized action category, `None` to stop the node from publishing on this topic (default=`"/opendr/skeleton_recognized_action"`) + - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: topic name for description of the recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/skeleton_recognized_action_description`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output pose-annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`) + - `-p or --pose_annotations_topic POSE_ANNOTATIONS_TOPIC`: topic name for pose annotations, `None` to stop the node from publishing on this topic (default=`/opendr/poses`) + - `--model`: model to use, options are `stgcn` or `pstgcn`, (default=`stgcn`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + +3. Default output topics: + - Detection messages: `/opendr/skeleton_based_action_recognition`, `/opendr/skeleton_based_action_recognition_description`, `/opendr/poses` + - Output images: `/opendr/image_pose_annotated` + + For viewing the output, refer to the [notes above.](#notes) ### Video Human Activity Recognition ROS Node -A ROS node for performing Human Activity Recognition using either CoX3D or X3D models pretrained on Kinetics400. +A ROS node for performing human activity recognition using either CoX3D or X3D models pretrained on Kinetics400. You can find the video human activity recognition ROS node python script [here](./scripts/video_activity_recognition.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's video human activity recognition tools which can be found [here for CoX3D](../../../../src/opendr/perception/activity_recognition/cox3d/cox3d_learner.py) and From 2861e2d00dd72993cb852b7868201c29a0f01afd Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 23 Nov 2022 13:33:08 +0200 Subject: [PATCH 35/57] Moved fair mot in rgb input section --- projects/opendr_ws/README.md | 12 +++---- projects/opendr_ws/src/perception/README.md | 40 ++++++++++----------- 2 files changed, 26 insertions(+), 26 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index 52852da48c..b5d9449f2d 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -74,11 +74,12 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor 4. [Face Recognition](src/perception/README.md#face-recognition-ros-node) 5. [2D Object Detection](src/perception/README.md#2d-object-detection-ros-nodes) 6. [2D Object Tracking - Deep Sort](src/perception/README.md#2d-object-tracking-deep-sort-ros-node) -7. [Panoptic Segmentation](src/perception/README.md#panoptic-segmentation-ros-node) -8. [Semantic Segmentation](src/perception/README.md#semantic-segmentation-ros-node) -9. 
[Landmark-based Facial Expression Recognition](src/perception/README.md#landmark-based-facial-expression-recognition-ros-node) -10. [Skeleton-based Human Action Recognition](src/perception/README.md#skeleton-based-human-action-recognition-ros-node) -11. [Video Human Activity Recognition](src/perception/README.md#video-human-activity-recognition-ros-node) +7. [2D Object Tracking FairMOT](src/perception/README.md#2d-object-tracking-fairmot-ros-node) +8. [Panoptic Segmentation](src/perception/README.md#panoptic-segmentation-ros-node) +9. [Semantic Segmentation](src/perception/README.md#semantic-segmentation-ros-node) +10. [Landmark-based Facial Expression Recognition](src/perception/README.md#landmark-based-facial-expression-recognition-ros-node) +11. [Skeleton-based Human Action Recognition](src/perception/README.md#skeleton-based-human-action-recognition-ros-node) +12. [Video Human Activity Recognition](src/perception/README.md#video-human-activity-recognition-ros-node) ## RGB + Infrared input 1. [End-to-End Multi-Modal Object Detection (GEM)](src/perception/README.md#gem-ros-node) ## RGBD input @@ -90,6 +91,5 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor ## Point cloud input 1. [3D Object Detection Voxel](src/perception/README.md#3d-object-detection-voxel-ros-node) 2. [3D Object Tracking AB3DMOT](src/perception/README.md#3d-object-tracking-ab3dmot-ros-node) -3. [2D Object Tracking FairMOT](src/perception/README.md#2d-object-tracking-fairmot-ros-node) ## Biosignal input 1. [Heart Anomaly Detection](src/perception/README.md#heart-anomaly-detection-ros-node) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index 9ea85ac3a2..d2e54939e0 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -265,6 +265,26 @@ rosrun perception image_dataset.py ``` This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. +### 2D Object Tracking FairMOT ROS Node + +A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. +The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). +Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) +Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: +```shell +rosrun perception object_tracking_2d_fair_mot.py +``` +To get images from usb_camera, you can start the camera node as: +```shell +rosrun usb_cam usb_cam_node +``` +The corresponding `input_image_topic` should be `/usb_cam/image_raw`. +If you want to use a dataset from the disk, you can start a `image_dataset.py` node as: +```shell +rosrun perception image_dataset.py +``` +This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. + ### Panoptic Segmentation ROS Node You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps.py) to inspect the code and modify it as you wish to fit your needs. 
@@ -591,26 +611,6 @@ rosrun perception point_cloud_dataset.py ``` This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. -### 2D Object Tracking FairMOT ROS Node - -A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. -The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). -Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception object_tracking_2d_fair_mot.py -``` -To get images from usb_camera, you can start the camera node as: -```shell -rosrun usb_cam usb_cam_node -``` -The corresponding `input_image_topic` should be `/usb_cam/image_raw`. -If you want to use a dataset from the disk, you can start a `image_dataset.py` node as: -```shell -rosrun perception image_dataset.py -``` -This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. - ---- ## Biosignal input From 441e1e0506af23549a37cf1613dffb43b4603a3b Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Tue, 29 Nov 2022 14:27:03 +0200 Subject: [PATCH 36/57] Completed ROS1 docs --- projects/opendr_ws/README.md | 5 +- projects/opendr_ws/src/perception/README.md | 327 ++++++++++++-------- 2 files changed, 204 insertions(+), 128 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index b5d9449f2d..5f1a884311 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -73,15 +73,14 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor 3. [Face Detection](src/perception/README.md#face-detection-ros-node) 4. [Face Recognition](src/perception/README.md#face-recognition-ros-node) 5. [2D Object Detection](src/perception/README.md#2d-object-detection-ros-nodes) -6. [2D Object Tracking - Deep Sort](src/perception/README.md#2d-object-tracking-deep-sort-ros-node) -7. [2D Object Tracking FairMOT](src/perception/README.md#2d-object-tracking-fairmot-ros-node) +6. [2D Object Tracking](src/perception/README.md#2d-object-tracking-ros-nodes) 8. [Panoptic Segmentation](src/perception/README.md#panoptic-segmentation-ros-node) 9. [Semantic Segmentation](src/perception/README.md#semantic-segmentation-ros-node) 10. [Landmark-based Facial Expression Recognition](src/perception/README.md#landmark-based-facial-expression-recognition-ros-node) 11. [Skeleton-based Human Action Recognition](src/perception/README.md#skeleton-based-human-action-recognition-ros-node) 12. [Video Human Activity Recognition](src/perception/README.md#video-human-activity-recognition-ros-node) ## RGB + Infrared input -1. [End-to-End Multi-Modal Object Detection (GEM)](src/perception/README.md#gem-ros-node) +1. [End-to-End Multi-Modal Object Detection (GEM)](src/perception/README.md#2d-object-detection-gem-ros-node) ## RGBD input 1. 
[RGBD Hand Gesture Recognition](src/perception/README.md#rgbd-hand-gesture-recognition-ros-node) ## RGB + Audio input diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md index d2e54939e0..8083685cab 100644 --- a/projects/opendr_ws/src/perception/README.md +++ b/projects/opendr_ws/src/perception/README.md @@ -65,7 +65,7 @@ The node publishes the detected poses in [OpenDR's 2D pose message format](../ro rosrun perception pose_estimation.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`) @@ -96,7 +96,7 @@ Fall detection uses the toolkit's pose estimation tool internally. rosrun perception fall_detection.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_fallen_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/fallen`) @@ -126,7 +126,7 @@ The node makes use of the toolkit's [face detection tool](../../../../src/opendr rosrun perception face_detection_retinaface.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_faces_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/faces`) @@ -154,7 +154,7 @@ The node makes use of the toolkit's [face recognition tool](../../../../src/open rosrun perception face_recognition.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_face_reco_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition`) @@ -229,7 +229,7 @@ The nodes makes use of the toolkit's various 2D object detection tools: [SSD too ``` The following optional 
arguments are available for all nodes above: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_objects_annotated`) - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`) @@ -241,49 +241,55 @@ The nodes makes use of the toolkit's various 2D object detection tools: [SSD too For viewing the output, refer to the [notes above.](#notes) -### 2D Object Tracking Deep Sort ROS Node - -A ROS node for performing Object Tracking 2D using Deep Sort using either pretrained models on Market1501 dataset, or custom trained models. -This is a detection-based method, and therefore the 2D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. -The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). -Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: +### 2D Object Tracking ROS Nodes -```shell -rosrun perception object_tracking_2d_deep_sort.py -``` -To get images from usb_camera, you can start the camera node as: -```shell -rosrun usb_cam usb_cam_node -``` +For 2D object tracking, there two ROS nodes provided, one using Deep Sort and one using FairMOT which use either pretrained models, or custom trained models. +The predicted tracking annotations are split into two topics with detections and tracking IDs. Additionally, an annotated image is generated. -The corresponding `input_image_topic` should be `/usb_cam/image_raw`. -If you want to use a dataset from the disk, you can start an `image_dataset.py` node as: +You can find the 2D object detection ROS node python scripts here: [Deep Sort node](./scripts/object_tracking_2d_deep_sort.py) and [FairMOT node](./scripts/object_tracking_2d_fair_mot.py) +where you can inspect the code and modify it as you wish to fit your needs. +The nodes makes use of the toolkit's [object tracking 2D - Deep Sort tool](../../../../src/opendr/perception/object_tracking_2d/deep_sort/object_tracking_2d_deep_sort_learner.py) +and [object tracking 2D - FairMOT tool](../../../../src/opendr/perception/object_tracking_2d/fair_mot/object_tracking_2d_fair_mot_learner.py) +whose documentation can be found here: [Deep Sort docs](../../../../docs/reference/object-tracking-2d-deep-sort.md), [FairMOT docs](../../../../docs/reference/object-tracking-2d-fair-mot.md). -```shell -rosrun perception image_dataset.py -``` -This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. - -### 2D Object Tracking FairMOT ROS Node - -A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. 
-The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). -Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`) -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception object_tracking_2d_fair_mot.py -``` -To get images from usb_camera, you can start the camera node as: -```shell -rosrun usb_cam usb_cam_node -``` -The corresponding `input_image_topic` should be `/usb_cam/image_raw`. -If you want to use a dataset from the disk, you can start a `image_dataset.py` node as: -```shell -rosrun perception image_dataset.py -``` -This will pulbish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`. +#### Instructions for basic usage: + +1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). + +2. You are then ready to start a 2D object tracking node: + 1. Deep Sort node + ```shell + rosrun perception object_tracking_2d_deep_sort.py + ``` + The following optional argument is available for the Deep Sort node: + - `-n --model_name MODEL_NAME`: name of the trained model (default=`deep_sort`) + 2. FairMOT node + ```shell + rosrun perception object_tracking_2d_fair_mot.py + ``` + The following optional argument is available for the FairMOT node: + - `-n --model_name MODEL_NAME`: name of the trained model (default=`fairmot_dla34`) + + The following optional arguments are available for both nodes: + - `-h or --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) + - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_objects_annotated`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`) + - `-t or --tracking_id_topic TRACKING_ID_TOPIC`: topic name for tracking ID messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects_tracking_id`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `-td --temp_dir TEMP_DIR`: path to a temporary directory with models (default=`temp`) + +3. Default output topics: + - Output images: `/opendr/image_objects_annotated` + - Detection messages: `/opendr/objects` + - Tracking ID messages: `/opendr/objects_tracking_id` + + For viewing the output, refer to the [notes above.](#notes) + +**Notes** + +An [image dataset node](#image-dataset-ros-node) is also provided to be used along these nodes. +Make sure to change the default input topic of the tracking node if you are not using the USB cam node. 
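
As a concrete example of the note above, the FairMOT tracker can be fed by the image dataset node instead of a webcam. This is a minimal sketch that assumes the default topic names and the `-i` argument listed in this section:

```shell
# Terminal 1: publish images from the bundled dataset instead of a webcam
rosrun perception image_dataset.py

# Terminal 2: start the tracker and point it at the dataset topic
rosrun perception object_tracking_2d_fair_mot.py -i /opendr/dataset_image

# Terminal 3: inspect the tracking IDs and the annotated image stream
rostopic echo /opendr/objects_tracking_id
rqt_image_view  # select /opendr/image_objects_annotated
```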
### Panoptic Segmentation ROS Node @@ -302,16 +308,16 @@ and additional information about Efficient PS [here](../../../../src/opendr/perc ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit - - `--input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC` : listen to RGB images on this topic (default=`/usb_cam/image_raw`) - - `--checkpoint CHECKPOINT` : download pretrained models [cityscapes, kitti] or load from the provided path (default=`cityscapes`) - - `--output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: publish the semantic and instance maps on this topic as `OUTPUT_HEATMAP_TOPIC/semantic` and `OUTPUT_HEATMAP_TOPIC/instance`, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic`) - - `--visualization_topic VISUALIZATION_TOPIC`: publish the panoptic segmentation map as an RGB image on `VISUALIZATION_TOPIC` or a more detailed overview if using the `--detailed_visualization` flag, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic/rgb_visualization`) + - `-h or --help`: show a help message and exit + - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC` : listen to RGB images on this topic (default=`/usb_cam/image_raw`) + - `-oh --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: publish the semantic and instance maps on this topic as `OUTPUT_HEATMAP_TOPIC/semantic` and `OUTPUT_HEATMAP_TOPIC/instance`, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic`) + - `-ov --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: publish the panoptic segmentation map as an RGB image on this topic or a more detailed overview if using the `--detailed_visualization` flag, `None` to stop the node from publishing on this topic (default=`opendr/panoptic/rgb_visualization`) - `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated) + - `--checkpoint CHECKPOINT` : download pretrained models [cityscapes, kitti] or load from the provided path (default=`cityscapes`) 3. 
Default output topics: - Output images: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`, `/opendr/panoptic/rgb_visualization` - - Detection messages: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`, `/opendr/panoptic/rgb_visualization` + - Detection messages: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance` For viewing the output, refer to the [notes above.](#notes) @@ -330,10 +336,10 @@ The node makes use of the toolkit's [semantic segmentation tool](../../../../src rosrun perception semantic_segmentation_bisenet.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: topic to which we are publishing the heatmap in the form of a ROS image containing class IDs, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap`) - - `-v or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap_visualization`) + - `-ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap_visualization`) - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) 3. Default output topics: @@ -368,13 +374,13 @@ whose documentation can be found [here](../../../../docs/reference/landmark-base rosrun perception landmark_based_facial_expression_recognition.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic to which we are publishing the recognized facial expression category info, `None` to stop the node from publishing on this topic (default=`"/opendr/landmark_expression_recognition"`) - - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: topic to which we are publishing the description of the recognized facial expression, `None` to stop the node from publishing on this topic (default=`/opendr/landmark_expression_recognition_description`) - - `--model`: architecture to use for facial expression recognition, options are `pstbln_ck+`, `pstbln_casia`, `pstbln_afew` (default=`pstbln_afew`) - - `-s --shape_predictor SHAPE_PREDICTOR`: shape predictor (landmark_extractor) to use (default=`./predictor_path`) + - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic to which we are publishing the description of the recognized facial expression, `None` to stop the node from publishing on this topic (default=`/opendr/landmark_expression_recognition_description`) - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--model`: architecture to use for facial expression recognition, options are `pstbln_ck+`, `pstbln_casia`, `pstbln_afew` 
(default=`pstbln_afew`) + - `-s or --shape_predictor SHAPE_PREDICTOR`: shape predictor (landmark_extractor) to use (default=`./predictor_path`) 3. Default output topics: - Detection messages: `/opendr/landmark_expression_recognition`, `/opendr/landmark_expression_recognition_description` @@ -401,12 +407,12 @@ whose documentation can be found [here](../../../../docs/reference/skeleton-base rosrun perception skeleton_based_action_recognition.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - - `-c or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic name for recognized action category, `None` to stop the node from publishing on this topic (default=`"/opendr/skeleton_recognized_action"`) - - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESRIPTION_TOPIC`: topic name for description of the recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/skeleton_recognized_action_description`) - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output pose-annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`) - `-p or --pose_annotations_topic POSE_ANNOTATIONS_TOPIC`: topic name for pose annotations, `None` to stop the node from publishing on this topic (default=`/opendr/poses`) + - `-c or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic name for recognized action category, `None` to stop the node from publishing on this topic (default=`"/opendr/skeleton_recognized_action"`) + - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic name for description of the recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/skeleton_recognized_action_description`) - `--model`: model to use, options are `stgcn` or `pstgcn`, (default=`stgcn`) - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) @@ -434,7 +440,7 @@ The node makes use of the toolkit's video human activity recognition tools which rosrun perception video_activity_recognition.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit + - `-h or --help`: show a help message and exit - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`) - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic to which we are publishing the recognized activity, `None` to stop the node from publishing on this topic (default=`"/opendr/human_activity_recognition"`) - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic to which we are publishing the ID of the recognized action, `None` to stop the node from publishing on this topic (default=`/opendr/human_activity_recognition_description`) @@ -452,33 +458,45 @@ You can find the corresponding IDs regarding activity recognition [here](https:/ ## RGB + Infrared input -### GEM ROS Node - -Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can +### 2D Object Detection GEM ROS Node +You can find the object detection 2D GEM ROS node python script [here](./scripts/object_detection_2d_gem.py) to inspect the 
code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [object detection 2D GEM tool](../../../../src/opendr/perception/object_detection_2d/gem/gem_learner.py) +whose documentation can be found [here](../../../../docs/reference/gem.md). -1. Add OpenDR to `PYTHONPATH` (please make sure you do not overwrite `PYTHONPATH` ), e.g., -```shell -export PYTHONPATH="/home/user/opendr/src:$PYTHONPATH" -``` -2. First one needs to find points in the color and infrared images that correspond, in order to find the homography matrix that allows to correct for the difference in perspective between the infrared and the RGB camera. -These points can be selected using a [utility tool](../../../../src/opendr/perception/object_detection_2d/utils/get_color_infra_alignment.py) that is provided in the toolkit. +#### Instructions for basic usage: -3. Pass the points you have found as *pts_color* and *pts_infra* arguments to the ROS gem.py node. +1. First one needs to find points in the color and infrared images that correspond, in order to find the homography matrix that allows to correct for the difference in perspective between the infrared and the RGB camera. + These points can be selected using a [utility tool](../../../../src/opendr/perception/object_detection_2d/utils/get_color_infra_alignment.py) that is provided in the toolkit. -4. Start the node responsible for publishing images. If you have a RealSense camera, then you can use the corresponding node (assuming you have installed [realsense2_camera](http://wiki.ros.org/realsense2_camera)): +2. Pass the points you have found as *pts_color* and *pts_infra* arguments to the [ROS GEM node](./scripts/object_detection_2d_gem.py). -```shell -roslaunch realsense2_camera rs_camera.launch enable_color:=true enable_infra:=true enable_depth:=false enable_sync:=true infra_width:=640 infra_height:=480 -``` - -4. You are then ready to start the pose detection node +3. Start the node responsible for publishing images. If you have a RealSense camera, then you can use the corresponding node (assuming you have installed [realsense2_camera](http://wiki.ros.org/realsense2_camera)): + + ```shell + roslaunch realsense2_camera rs_camera.launch enable_color:=true enable_infra:=true enable_depth:=false enable_sync:=true infra_width:=640 infra_height:=480 + ``` + +4. 
You are then ready to start the object detection 2d GEM node: -```shell -rosrun perception object_detection_2d_gem.py -``` + ```shell + rosrun perception object_detection_2d_gem.py + ``` + The following optional arguments are available: + - `-h or --help`: show a help message and exit + - `-ic or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/camera/color/image_raw`) + - `-ii or --input_infra_image_topic INPUT_INFRA_IMAGE_TOPIC`: topic name for input infrared image (default=`/camera/infra/image_raw`) + - `-oc or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/rgb_image_objects_annotated`) + - `-oi or --output_infra_image_topic OUTPUT_INFRA_IMAGE_TOPIC`: topic name for output annotated infrared image, `None` to stop the node from publishing on this topic (default=`/opendr/infra_image_objects_annotated`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) -5. You can examine the annotated image stream using `rqt_image_view` (select one of the topics `/opendr/color_detection_annotated` or `/opendr/infra_detection_annotated`) or `rostopic echo /opendr/detections` +5. Default output topics: + - Output RGB images: `/opendr/rgb_image_objects_annotated` + - Output infrared images: `/opendr/infra_image_objects_annotated` + - Detection messages: `/opendr/objects` + + For viewing the output, refer to the [notes above.](#notes) ---- ## RGBD input @@ -499,10 +517,10 @@ The node makes use of the toolkit's [hand gesture recognition tool](../../../../ rosrun perception rgbd_hand_gesture_recognition.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit - - `--input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/kinect2/qhd/image_color_rect`) - - `--input_depth_image_topic INPUT_DEPTH_IMAGE_TOPIC`: topic name for input depth image (default=`/kinect2/qhd/image_depth_rect`) - - `--output_gestures_topic OUTPUT_GESTURES_TOPIC`: topic name for predicted gesture class (default=`/opendr/gestures`) + - `-h or --help`: show a help message and exit + - `-ic or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/kinect2/qhd/image_color_rect`) + - `-id or --input_depth_image_topic INPUT_DEPTH_IMAGE_TOPIC`: topic name for input depth image (default=`/kinect2/qhd/image_depth_rect`) + - `-o or --output_gestures_topic OUTPUT_GESTURES_TOPIC`: topic name for predicted gesture class (default=`/opendr/gestures`) - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) 3. Default output topics: @@ -528,10 +546,10 @@ The node makes use of the toolkit's [audiovisual emotion recognition tool](../.. 
rosrun perception speech_command_recognition.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit - - `--input_video_topic INPUT_VIDEO_TOPIC`: topic name for input video, expects detected face of size 224x224 (default=`/usb_cam/image_raw`) - - `--input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`) - - `--output_emotions_topic OUTPUT_EMOTIONS_TOPIC`: topic to which we are publishing the predicted emotion (default=`/opendr/audiovisual_emotion`) + - `-h or --help`: show a help message and exit + - `-iv or --input_video_topic INPUT_VIDEO_TOPIC`: topic name for input video, expects detected face of size 224x224 (default=`/usb_cam/image_raw`) + - `-ia or --input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`) + - `-o or --output_emotions_topic OUTPUT_EMOTIONS_TOPIC`: topic to which we are publishing the predicted emotion (default=`/opendr/audiovisual_emotion`) - `--buffer_size BUFFER_SIZE`: length of audio and video in seconds, (default=`3.6`) - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server @@ -561,9 +579,9 @@ The node makes use of the toolkit's speech command recognition tools: [EdgeSpeec rosrun perception speech_command_recognition.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit - - `--input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`) - - `--output_speech_command_topic OUTPUT_SPEECH_COMMAND_TOPIC`: topic name for speech command output (default=`/opendr/speech_recognition`) + - `-h or --help`: show a help message and exit + - `-i or --input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`) + - `-o or --output_speech_command_topic OUTPUT_SPEECH_COMMAND_TOPIC`: topic name for speech command output (default=`/opendr/speech_recognition`) - `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds (default=`1.5`) - `--model MODEL`: the model to use, choices are `matchboxnet`, `edgespeechnets` or `quad_selfonn` (default=`matchboxnet`) - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server @@ -581,35 +599,68 @@ EdgeSpeechNets currently does not have a pretrained model available for download ## Point cloud input ### 3D Object Detection Voxel ROS Node - -A ROS node for performing Object Detection 3D using PointPillars or TANet methods with either pretrained models on KITTI dataset, or custom trained models. -The predicted detection annotations are pushed to `output_detection3d_topic` (default `output_detection3d_topic="/opendr/detection3d"`). -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception object_detection_3d_voxel.py -``` -To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: -```shell -rosrun perception point_cloud_dataset.py -``` -This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. 
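Putting those pieces together, a minimal test run could look like the sketch below. The detections topic is the default mentioned above, while the exact input-topic argument name is an assumption here, so check the node's `--help` output before relying on it:
```shell
# terminal 1: publish point clouds from the nano KITTI dataset
rosrun perception point_cloud_dataset.py

# terminal 2: run the detector on the dataset topic (argument name assumed, see --help)
rosrun perception object_detection_3d_voxel.py --input_point_cloud_topic /opendr/dataset_point_cloud

# terminal 3: inspect the predicted detections
rostopic echo /opendr/detection3d
```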
+A ROS node for performing 3D object detection Voxel using PointPillars or TANet methods with either pretrained models on KITTI dataset, or custom trained models. + +You can find the 3D object detection Voxel ROS node python script [here](./scripts/object_detection_3d_voxel.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [3D object detection Voxel tool](../../../../src/opendr/perception/object_detection_3d/voxel_object_detection_3d/voxel_object_detection_3d_learner.py) +whose documentation can be found [here](../../../../docs/reference/voxel-object-detection-3d.md). + +#### Instructions for basic usage: + +1. Start the node responsible for publishing point clouds. OpenDR provides a [point cloud dataset node](#point-cloud-dataset-ros-node) for convenience. + +2. You are then ready to start the 3D object detection node: + + ```shell + rosrun perception object_detection_3d_voxel.py + ``` + The following optional arguments are available: + - `-h or --help`: show a help message and exit + - `-i or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC`: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=`/opendr/dataset_point_cloud`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/objects3d`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `-n or --model_name MODEL_NAME`: name of the trained model (default=`tanet_car_xyres_16`) + - `-c or --model_config_path MODEL_CONFIG_PATH`: path to a model .proto config (default=`../../src/opendr/perception/object_detection3d/voxel_object_detection_3d/second_detector/configs/tanet/car/xyres_16.proto`) + +3. Default output topics: + - Detection messages: `/opendr/objects3d` + + For viewing the output, refer to the [notes above.](#notes) ### 3D Object Tracking AB3DMOT ROS Node - -A ROS node for performing Object Tracking 3D using AB3DMOT stateless method. + +A ROS node for performing 3D object tracking using AB3DMOT stateless method. This is a detection-based method, and therefore the 3D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. -The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection3d"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking3d_id"`). +The predicted tracking annotations are split into two topics with detections and tracking IDs. -Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as: -```shell -rosrun perception object_tracking_3d_ab3dmot.py -``` -To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: -```shell -rosrun perception point_cloud_dataset.py -``` -This will pulbish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`. +You can find the 3D object tracking AB3DMOT ROS node python script [here](./scripts/object_tracking_3d_ab3dmot.py) to inspect the code and modify it as you wish to fit your needs. 
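Because the detections and the tracking IDs are published on two separate topics with matching element counts, a consumer can pair them up by index. The following is only a rough sketch of that idea: the message types (`vision_msgs/Detection3DArray` for detections, `std_msgs/Int32MultiArray` for IDs) are assumptions, so verify them against the node source before using it.
```python
import rospy
from vision_msgs.msg import Detection3DArray  # assumed detection message type
from std_msgs.msg import Int32MultiArray      # assumed tracking ID message type

latest_ids = []

def ids_callback(msg):
    # keep the most recent list of tracking IDs
    global latest_ids
    latest_ids = list(msg.data)

def detections_callback(msg):
    # pair each detection with the tracking ID at the same index, if one is available
    for i, detection in enumerate(msg.detections):
        track_id = latest_ids[i] if i < len(latest_ids) else None
        rospy.loginfo("detection %d -> tracking id %s", i, str(track_id))

rospy.init_node("ab3dmot_consumer_example")
rospy.Subscriber("/opendr/objects3d", Detection3DArray, detections_callback)
rospy.Subscriber("/opendr/objects_tracking_id", Int32MultiArray, ids_callback)
rospy.spin()
```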
+The node makes use of the toolkit's [3D object tracking AB3DMOT tool](../../../../src/opendr/perception/object_tracking_3d/ab3dmot/object_tracking_3d_ab3dmot_learner.py) +whose documentation can be found [here](../../../../docs/reference/object-tracking-3d-ab3dmot.md). + +#### Instructions for basic usage: + +1. Start the node responsible for publishing point clouds. OpenDR provides a [point cloud dataset node](#point-cloud-dataset-ros-node) for convenience. + +2. You are then ready to start the 3D object tracking node: + + ```shell + rosrun perception object_tracking_3d_ab3dmot.py + ``` + The following optional arguments are available: + - `-h or --help`: show a help message and exit + - `-i or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC`: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=`/opendr/dataset_point_cloud`) + - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects3d`) + - `-t or --tracking3d_id_topic TRACKING3D_ID_TOPIC`: topic name for output tracking IDs with the same element count as in detection topic, `None` to stop the node from publishing on this topic (default=`/opendr/objects_tracking_id`) + - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `-dn or --detector_model_name DETECTOR_MODEL_NAME`: name of the trained model (default=`tanet_car_xyres_16`) + - `-dc or --detector_model_config_path DETECTOR_MODEL_CONFIG_PATH`: path to a model .proto config (default=`../../src/opendr/perception/object_detection3d/voxel_object_detection_3d/second_detector/configs/tanet/car/xyres_16.proto`) + +3. Default output topics: + - Detection messages: `/opendr/objects3d` + - Tracking ID messages: `/opendr/objects_tracking_id` + + For viewing the output, refer to the [notes above.](#notes) ---- ## Biosignal input @@ -633,11 +684,11 @@ The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool]( rosrun perception heart_anomaly_detection.py ``` The following optional arguments are available: - - `-h, --help`: show a help message and exit - - `--input_ecg_topic INPUT_ECG_TOPIC`: topic name for input ECG data (default=`/ecg/ecg`) - - `--output_heart_anomaly_topic OUTPUT_HEART_ANOMALY_TOPIC`: topic name for heart anomaly detection (default=`/opendr/heart_anomaly`) - - `--model MODEL`: the model to use, choices are `anbof` or `gru` (default=`anbof`) + - `-h or --help`: show a help message and exit + - `-i or --input_ecg_topic INPUT_ECG_TOPIC`: topic name for input ECG data (default=`/ecg/ecg`) + - `-o or --output_heart_anomaly_topic OUTPUT_HEART_ANOMALY_TOPIC`: topic name for heart anomaly detection (default=`/opendr/heart_anomaly`) - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`) + - `--model MODEL`: the model to use, choices are `anbof` or `gru` (default=`anbof`) 3. Default output topics: - Detection messages: `/opendr/heart_anomaly` @@ -649,22 +700,48 @@ The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool]( The dataset nodes can be used to publish data from the disk, which is useful to test the functionality without the use of a sensor. Dataset nodes use a provided `DatasetIterator` object that returns a `(Data, Target)` pair. 
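As a rough illustration of that mechanism, a custom iterator could look like the sketch below. It assumes the `DatasetIterator` base class from `opendr.engine.datasets` and the `Image` type from `opendr.engine.data`; the channel ordering expected by `Image` and the exact way the dataset nodes consume the iterator should be checked against their documentation.
```python
import cv2
from opendr.engine.data import Image
from opendr.engine.datasets import DatasetIterator

class FolderImageIterator(DatasetIterator):
    """Minimal iterator returning (Image, None) pairs from a list of image files."""
    def __init__(self, image_paths):
        super().__init__()
        self.image_paths = image_paths

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        # load with OpenCV and wrap in the OpenDR Image type (check the Image docs for the expected channel order);
        # there is no annotation, so the target is None
        return Image(cv2.imread(self.image_paths[idx])), None
```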
-If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. +If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic. +The OpenDR toolkit currently provides two such nodes, an image dataset node and a point cloud dataset node. ### Image Dataset ROS Node + +The image dataset node downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic, +which is intended to be used with the [2D object tracking nodes](#2d-object-tracking-ros-nodes). + +You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements, +to use alongside other nodes and datasets. + To get an image from a dataset on the disk, you can start a `image_dataset.py` node as: ```shell rosrun perception image_dataset.py ``` -By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. -You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements. +The following optional arguments are available: + - `-h or --help`: show a help message and exit + - `-o or --output_rgb_image_topic`: topic name to publish the data (default=`/opendr/dataset_image`) + - `-f or --fps FPS`: data fps (default=`10`) + - `-d or --dataset_path DATASET_PATH`: path to a dataset (default=`/MOT`) + - `-ks or --mot20_subsets_path MOT20_SUBSETS_PATH`: path to MOT20 subsets (default=`../../src/opendr/perception/object_tracking_2d/datasets/splits/nano_mot20.train`) + You can inspect [the node](./scripts/image_dataset.py) and modify it to your needs for other image datasets. ### Point Cloud Dataset ROS Node + +The point cloud dataset node downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic, +which is intended to be used with the [3D object detection node](#3d-object-detection-voxel-ros-node), +as well as the [3D object tracking node](#3d-object-tracking-ab3dmot-ros-node). + +You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements, +to use alongside other nodes and datasets. + To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: ```shell rosrun perception point_cloud_dataset.py ``` -By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. -You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements. +The following optional arguments are available: + - `-h or --help`: show a help message and exit + - `-o or --output_point_cloud_topic`: topic name to publish the data (default=`/opendr/dataset_point_cloud`) + - `-f or --fps FPS`: data fps (default=`10`) + - `-d or --dataset_path DATASET_PATH`: path to a dataset, if it does not exist, nano KITTI dataset will be downloaded there (default=`/KITTI/opendr_nano_kitti`) + - `-ks or --kitti_subsets_path KITTI_SUBSETS_PATH`: path to KITTI subsets, used only if a KITTI dataset is downloaded (default=`../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets`) + You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. 
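For example, both dataset nodes can be pointed at a custom location, topic and publishing rate through the arguments listed above; the paths below are placeholders and assume the nano datasets have already been downloaded there:
```shell
# publish the nano MOT20 images at 5 fps on a custom topic
rosrun perception image_dataset.py -f 5 -o /opendr/my_dataset_image -d ./MOT

# publish the nano KITTI point clouds from a local copy of the dataset
rosrun perception point_cloud_dataset.py -d ./KITTI/opendr_nano_kitti -ks ../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets
```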
From 5cf5f3ef2dd95b631b367acc3425b5e46d094629 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 13:26:46 +0200 Subject: [PATCH 37/57] Updates on default values for FairMOT ros node class ctor --- .../src/perception/scripts/object_tracking_2d_fair_mot.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py b/projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py index ebc2fe92e6..67c2c75812 100755 --- a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py +++ b/projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py @@ -33,9 +33,9 @@ class ObjectTracking2DFairMotNode: def __init__( self, input_rgb_image_topic="/usb_cam/image_raw", - output_detection_topic="/opendr/fairmot_detection", - output_tracking_id_topic="/opendr/fairmot_tracking_id", - output_rgb_image_topic="/opendr/fairmot_image_annotated", + output_detection_topic="/opendr/objects", + output_tracking_id_topic="/opendr/objects_tracking_id", + output_rgb_image_topic="/opendr/image_objects_annotated", device="cuda:0", model_name="fairmot_dla34", temp_dir="temp", From 6f48675a7e5682f628a005c768498473756d0aef Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 13:27:25 +0200 Subject: [PATCH 38/57] Fixed duplicate shortcut on deepsort ros node argparse --- .../src/perception/scripts/object_tracking_2d_deep_sort.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py b/projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py index fb85d6eb9c..45eda5b28e 100755 --- a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py +++ b/projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py @@ -199,7 +199,7 @@ def main(): type=str, default="cuda", choices=["cuda", "cpu"]) parser.add_argument("-n", "--model_name", help="Name of the trained model", type=str, default="deep_sort", choices=["deep_sort"]) - parser.add_argument("-t", "--temp_dir", help="Path to a temporary directory with models", + parser.add_argument("-td", "--temp_dir", help="Path to a temporary directory with models", type=str, default="temp") args = parser.parse_args() From f4a487080dc41d2dd3fe4c75e75a828eb29d9efb Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 13:28:23 +0200 Subject: [PATCH 39/57] Fixed missing shortcut on rgbd hand gesture reco ros node argparse --- .../src/perception/scripts/rgbd_hand_gesture_recognition.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py b/projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py index 09133bccc2..098e297a18 100755 --- a/projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py +++ b/projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py @@ -133,11 +133,11 @@ def preprocess(self, rgb_image, depth_image): # default topics are according to kinectv2 drivers at https://github.com/OpenKinect/libfreenect2 # and https://github.com/code-iai-iai_kinect2 parser = argparse.ArgumentParser() - parser.add_argument("--input_rgb_image_topic", help="Topic name for input rgb image", + parser.add_argument("-ic", "--input_rgb_image_topic", 
help="Topic name for input rgb image", type=str, default="/kinect2/qhd/image_color_rect") - parser.add_argument("--input_depth_image_topic", help="Topic name for input depth image", + parser.add_argument("-id", "--input_depth_image_topic", help="Topic name for input depth image", type=str, default="/kinect2/qhd/image_depth_rect") - parser.add_argument("--output_gestures_topic", help="Topic name for predicted gesture class", + parser.add_argument("-o", "--output_gestures_topic", help="Topic name for predicted gesture class", type=str, default="/opendr/gestures") parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"]) From 01bacacf93601afff38925928fce845a0962af17 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 13:47:52 +0200 Subject: [PATCH 40/57] Added "opendr_" to data gen package and "_node" to the node file name --- .../CMakeLists.txt | 4 +- .../README.md | 56 +++++++++---------- .../package.xml | 2 +- .../synthetic_facial_generation_node.py} | 0 4 files changed, 31 insertions(+), 31 deletions(-) rename projects/opendr_ws/src/{data_generation => opendr_data_generation}/CMakeLists.txt (85%) rename projects/opendr_ws/src/{data_generation => opendr_data_generation}/README.md (97%) rename projects/opendr_ws/src/{data_generation => opendr_data_generation}/package.xml (95%) rename projects/opendr_ws/src/{data_generation/scripts/synthetic_facial_generation.py => opendr_data_generation/scripts/synthetic_facial_generation_node.py} (100%) diff --git a/projects/opendr_ws/src/data_generation/CMakeLists.txt b/projects/opendr_ws/src/opendr_data_generation/CMakeLists.txt similarity index 85% rename from projects/opendr_ws/src/data_generation/CMakeLists.txt rename to projects/opendr_ws/src/opendr_data_generation/CMakeLists.txt index 2a43cfdb27..ed273ea805 100644 --- a/projects/opendr_ws/src/data_generation/CMakeLists.txt +++ b/projects/opendr_ws/src/opendr_data_generation/CMakeLists.txt @@ -1,5 +1,5 @@ cmake_minimum_required(VERSION 3.0.2) -project(data_generation) +project(opendr_data_generation) find_package(catkin REQUIRED COMPONENTS roscpp @@ -27,6 +27,6 @@ include_directories( ############# catkin_install_python(PROGRAMS - scripts/synthetic_facial_generation.py + scripts/synthetic_facial_generation_node.py DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) diff --git a/projects/opendr_ws/src/data_generation/README.md b/projects/opendr_ws/src/opendr_data_generation/README.md similarity index 97% rename from projects/opendr_ws/src/data_generation/README.md rename to projects/opendr_ws/src/opendr_data_generation/README.md index 523347f6a0..67390f9918 100644 --- a/projects/opendr_ws/src/data_generation/README.md +++ b/projects/opendr_ws/src/opendr_data_generation/README.md @@ -1,28 +1,28 @@ -# Perception Package - -This package contains ROS nodes related to data generation package of OpenDR. - -## Pose Estimation ROS Node -Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can - - -1. Add OpenDR to `PYTHONPATH` (please make sure you do not overwrite `PYTHONPATH` ), e.g., -```shell -export PYTHONPATH="/home/user/opendr/src:$PYTHONPATH" -``` - -2. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package): - -```shell -rosrun usb_cam usb_cam_node -``` - -3. 
You are then ready to start the synthetic data generation node - -```shell -rosrun data_generation synthetic_facial_generation.py -``` - -3. You can examine the published multiview facial images stream using `rosrun rqt_image_view rqt_image_view` (select the topic `/opendr/synthetic_facial_images`) or `rostopic echo /opendr/synthetic_facial_images` - - +# Perception Package + +This package contains ROS nodes related to data generation package of OpenDR. + +## Pose Estimation ROS Node +Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can + + +1. Add OpenDR to `PYTHONPATH` (please make sure you do not overwrite `PYTHONPATH` ), e.g., +```shell +export PYTHONPATH="/home/user/opendr/src:$PYTHONPATH" +``` + +2. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package): + +```shell +rosrun usb_cam usb_cam_node +``` + +3. You are then ready to start the synthetic data generation node + +```shell +rosrun data_generation synthetic_facial_generation.py +``` + +3. You can examine the published multiview facial images stream using `rosrun rqt_image_view rqt_image_view` (select the topic `/opendr/synthetic_facial_images`) or `rostopic echo /opendr/synthetic_facial_images` + + diff --git a/projects/opendr_ws/src/data_generation/package.xml b/projects/opendr_ws/src/opendr_data_generation/package.xml similarity index 95% rename from projects/opendr_ws/src/data_generation/package.xml rename to projects/opendr_ws/src/opendr_data_generation/package.xml index 57d1e6e1f7..dd45ae4d06 100644 --- a/projects/opendr_ws/src/data_generation/package.xml +++ b/projects/opendr_ws/src/opendr_data_generation/package.xml @@ -1,6 +1,6 @@ - data_generation + opendr_data_generation 1.1.1 OpenDR's ROS nodes for data generation package OpenDR Project Coordinator diff --git a/projects/opendr_ws/src/data_generation/scripts/synthetic_facial_generation.py b/projects/opendr_ws/src/opendr_data_generation/scripts/synthetic_facial_generation_node.py similarity index 100% rename from projects/opendr_ws/src/data_generation/scripts/synthetic_facial_generation.py rename to projects/opendr_ws/src/opendr_data_generation/scripts/synthetic_facial_generation_node.py From 3a7a4aa5fe15b5a254aa116082d276d65768648b Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 14:27:32 +0200 Subject: [PATCH 41/57] Renamed package to "opendr_perception" and added "_node" to scripts --- .../CMakeLists.txt | 16 ++++++++-------- .../{perception => opendr_perception}/README.md | 0 .../include/opendr_perception}/.keep | 0 .../package.xml | 2 +- .../audiovisual_emotion_recognition_node.py} | 0 .../scripts/face_detection_retinaface_node.py} | 0 .../scripts/face_recognition_node.py} | 0 .../scripts/fall_detection_node.py} | 0 .../scripts/heart_anomaly_detection_node.py} | 0 .../scripts/image_dataset_node.py} | 0 ..._based_facial_expression_recognition_node.py} | 0 .../object_detection_2d_centernet_node.py} | 0 .../scripts/object_detection_2d_detr_node.py} | 0 .../scripts/object_detection_2d_gem_node.py} | 0 .../scripts/object_detection_2d_nanodet_node.py} | 0 .../scripts/object_detection_2d_ssd_node.py} | 0 .../scripts/object_detection_2d_yolov3_node.py} | 0 .../scripts/object_detection_3d_voxel_node.py} | 0 .../object_tracking_2d_deep_sort_node.py} | 0 .../scripts/object_tracking_2d_fair_mot_node.py} | 0 
.../scripts/object_tracking_3d_ab3dmot_node.py} | 0 .../panoptic_segmentation_efficient_ps_node.py} | 0 .../scripts/point_cloud_dataset_node.py} | 0 .../scripts/pose_estimation_node.py} | 0 .../rgbd_hand_gesture_recognition_node.py} | 0 .../semantic_segmentation_bisenet_node.py} | 0 .../skeleton_based_action_recognition_node.py} | 0 .../scripts/speech_command_recognition_node.py} | 0 .../scripts/video_activity_recognition_node.py} | 0 .../{perception => opendr_perception}/src/.keep | 0 30 files changed, 9 insertions(+), 9 deletions(-) rename projects/opendr_ws/src/{perception => opendr_perception}/CMakeLists.txt (59%) rename projects/opendr_ws/src/{perception => opendr_perception}/README.md (100%) rename projects/opendr_ws/src/{perception/include/perception => opendr_perception/include/opendr_perception}/.keep (100%) rename projects/opendr_ws/src/{perception => opendr_perception}/package.xml (96%) rename projects/opendr_ws/src/{perception/scripts/audiovisual_emotion_recognition.py => opendr_perception/scripts/audiovisual_emotion_recognition_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/face_detection_retinaface.py => opendr_perception/scripts/face_detection_retinaface_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/face_recognition.py => opendr_perception/scripts/face_recognition_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/fall_detection.py => opendr_perception/scripts/fall_detection_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/heart_anomaly_detection.py => opendr_perception/scripts/heart_anomaly_detection_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/image_dataset.py => opendr_perception/scripts/image_dataset_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/landmark_based_facial_expression_recognition.py => opendr_perception/scripts/landmark_based_facial_expression_recognition_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_detection_2d_centernet.py => opendr_perception/scripts/object_detection_2d_centernet_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_detection_2d_detr.py => opendr_perception/scripts/object_detection_2d_detr_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_detection_2d_gem.py => opendr_perception/scripts/object_detection_2d_gem_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_detection_2d_nanodet.py => opendr_perception/scripts/object_detection_2d_nanodet_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_detection_2d_ssd.py => opendr_perception/scripts/object_detection_2d_ssd_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_detection_2d_yolov3.py => opendr_perception/scripts/object_detection_2d_yolov3_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_detection_3d_voxel.py => opendr_perception/scripts/object_detection_3d_voxel_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_tracking_2d_deep_sort.py => opendr_perception/scripts/object_tracking_2d_deep_sort_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_tracking_2d_fair_mot.py => opendr_perception/scripts/object_tracking_2d_fair_mot_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/object_tracking_3d_ab3dmot.py => opendr_perception/scripts/object_tracking_3d_ab3dmot_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/panoptic_segmentation_efficient_ps.py => 
opendr_perception/scripts/panoptic_segmentation_efficient_ps_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/point_cloud_dataset.py => opendr_perception/scripts/point_cloud_dataset_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/pose_estimation.py => opendr_perception/scripts/pose_estimation_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/rgbd_hand_gesture_recognition.py => opendr_perception/scripts/rgbd_hand_gesture_recognition_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/semantic_segmentation_bisenet.py => opendr_perception/scripts/semantic_segmentation_bisenet_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/skeleton_based_action_recognition.py => opendr_perception/scripts/skeleton_based_action_recognition_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/speech_command_recognition.py => opendr_perception/scripts/speech_command_recognition_node.py} (100%) rename projects/opendr_ws/src/{perception/scripts/video_activity_recognition.py => opendr_perception/scripts/video_activity_recognition_node.py} (100%) rename projects/opendr_ws/src/{perception => opendr_perception}/src/.keep (100%) diff --git a/projects/opendr_ws/src/perception/CMakeLists.txt b/projects/opendr_ws/src/opendr_perception/CMakeLists.txt similarity index 59% rename from projects/opendr_ws/src/perception/CMakeLists.txt rename to projects/opendr_ws/src/opendr_perception/CMakeLists.txt index e401f7f17e..a83d022b81 100644 --- a/projects/opendr_ws/src/perception/CMakeLists.txt +++ b/projects/opendr_ws/src/opendr_perception/CMakeLists.txt @@ -1,5 +1,5 @@ cmake_minimum_required(VERSION 3.0.2) -project(perception) +project(opendr_perception) find_package(catkin REQUIRED COMPONENTS roscpp @@ -28,12 +28,12 @@ include_directories( ############# catkin_install_python(PROGRAMS - scripts/pose_estimation.py - scripts/fall_detection.py - scripts/object_detection_2d_nanodet.py - scripts/object_detection_2d_yolov5.py - scripts/object_detection_2d_detr.py - scripts/object_detection_2d_gem.py - scripts/semantic_segmentation_bisenet.py + scripts/pose_estimation_node.py + scripts/fall_detection_node.py + scripts/object_detection_2d_nanodet_node.py + scripts/object_detection_2d_yolov5_node.py + scripts/object_detection_2d_detr_node.py + scripts/object_detection_2d_gem_node.py + scripts/semantic_segmentation_bisenet_node.py DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/opendr_perception/README.md similarity index 100% rename from projects/opendr_ws/src/perception/README.md rename to projects/opendr_ws/src/opendr_perception/README.md diff --git a/projects/opendr_ws/src/perception/include/perception/.keep b/projects/opendr_ws/src/opendr_perception/include/opendr_perception/.keep similarity index 100% rename from projects/opendr_ws/src/perception/include/perception/.keep rename to projects/opendr_ws/src/opendr_perception/include/opendr_perception/.keep diff --git a/projects/opendr_ws/src/perception/package.xml b/projects/opendr_ws/src/opendr_perception/package.xml similarity index 96% rename from projects/opendr_ws/src/perception/package.xml rename to projects/opendr_ws/src/opendr_perception/package.xml index 7b7c0e00c9..fcbfca68a8 100644 --- a/projects/opendr_ws/src/perception/package.xml +++ b/projects/opendr_ws/src/opendr_perception/package.xml @@ -1,6 +1,6 @@ - perception + opendr_perception 1.1.1 OpenDR's ROS nodes for perception package OpenDR Project 
Coordinator diff --git a/projects/opendr_ws/src/perception/scripts/audiovisual_emotion_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/audiovisual_emotion_recognition_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/audiovisual_emotion_recognition.py rename to projects/opendr_ws/src/opendr_perception/scripts/audiovisual_emotion_recognition_node.py diff --git a/projects/opendr_ws/src/perception/scripts/face_detection_retinaface.py b/projects/opendr_ws/src/opendr_perception/scripts/face_detection_retinaface_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/face_detection_retinaface.py rename to projects/opendr_ws/src/opendr_perception/scripts/face_detection_retinaface_node.py diff --git a/projects/opendr_ws/src/perception/scripts/face_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/face_recognition_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/face_recognition.py rename to projects/opendr_ws/src/opendr_perception/scripts/face_recognition_node.py diff --git a/projects/opendr_ws/src/perception/scripts/fall_detection.py b/projects/opendr_ws/src/opendr_perception/scripts/fall_detection_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/fall_detection.py rename to projects/opendr_ws/src/opendr_perception/scripts/fall_detection_node.py diff --git a/projects/opendr_ws/src/perception/scripts/heart_anomaly_detection.py b/projects/opendr_ws/src/opendr_perception/scripts/heart_anomaly_detection_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/heart_anomaly_detection.py rename to projects/opendr_ws/src/opendr_perception/scripts/heart_anomaly_detection_node.py diff --git a/projects/opendr_ws/src/perception/scripts/image_dataset.py b/projects/opendr_ws/src/opendr_perception/scripts/image_dataset_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/image_dataset.py rename to projects/opendr_ws/src/opendr_perception/scripts/image_dataset_node.py diff --git a/projects/opendr_ws/src/perception/scripts/landmark_based_facial_expression_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/landmark_based_facial_expression_recognition_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/landmark_based_facial_expression_recognition.py rename to projects/opendr_ws/src/opendr_perception/scripts/landmark_based_facial_expression_recognition_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_centernet.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_centernet_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_detection_2d_centernet.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_centernet_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_detr.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_detr_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_detection_2d_detr.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_detr_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_gem.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_gem_node.py similarity index 100% rename from 
projects/opendr_ws/src/perception/scripts/object_detection_2d_gem.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_gem_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_nanodet.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_nanodet_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_detection_2d_nanodet.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_nanodet_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_ssd.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_ssd_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_detection_2d_ssd.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_ssd_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_yolov3.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov3_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_detection_2d_yolov3.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov3_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_3d_voxel.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_3d_voxel_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_detection_3d_voxel.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_3d_voxel_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_deep_sort_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_deep_sort_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_fair_mot_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_fair_mot_node.py diff --git a/projects/opendr_ws/src/perception/scripts/object_tracking_3d_ab3dmot.py b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_3d_ab3dmot_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_tracking_3d_ab3dmot.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_tracking_3d_ab3dmot_node.py diff --git a/projects/opendr_ws/src/perception/scripts/panoptic_segmentation_efficient_ps.py b/projects/opendr_ws/src/opendr_perception/scripts/panoptic_segmentation_efficient_ps_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/panoptic_segmentation_efficient_ps.py rename to projects/opendr_ws/src/opendr_perception/scripts/panoptic_segmentation_efficient_ps_node.py diff --git a/projects/opendr_ws/src/perception/scripts/point_cloud_dataset.py b/projects/opendr_ws/src/opendr_perception/scripts/point_cloud_dataset_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/point_cloud_dataset.py rename to projects/opendr_ws/src/opendr_perception/scripts/point_cloud_dataset_node.py diff --git 
a/projects/opendr_ws/src/perception/scripts/pose_estimation.py b/projects/opendr_ws/src/opendr_perception/scripts/pose_estimation_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/pose_estimation.py rename to projects/opendr_ws/src/opendr_perception/scripts/pose_estimation_node.py diff --git a/projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/rgbd_hand_gesture_recognition_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py rename to projects/opendr_ws/src/opendr_perception/scripts/rgbd_hand_gesture_recognition_node.py diff --git a/projects/opendr_ws/src/perception/scripts/semantic_segmentation_bisenet.py b/projects/opendr_ws/src/opendr_perception/scripts/semantic_segmentation_bisenet_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/semantic_segmentation_bisenet.py rename to projects/opendr_ws/src/opendr_perception/scripts/semantic_segmentation_bisenet_node.py diff --git a/projects/opendr_ws/src/perception/scripts/skeleton_based_action_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/skeleton_based_action_recognition_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/skeleton_based_action_recognition.py rename to projects/opendr_ws/src/opendr_perception/scripts/skeleton_based_action_recognition_node.py diff --git a/projects/opendr_ws/src/perception/scripts/speech_command_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/speech_command_recognition_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/speech_command_recognition.py rename to projects/opendr_ws/src/opendr_perception/scripts/speech_command_recognition_node.py diff --git a/projects/opendr_ws/src/perception/scripts/video_activity_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/video_activity_recognition_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/video_activity_recognition.py rename to projects/opendr_ws/src/opendr_perception/scripts/video_activity_recognition_node.py diff --git a/projects/opendr_ws/src/perception/src/.keep b/projects/opendr_ws/src/opendr_perception/src/.keep similarity index 100% rename from projects/opendr_ws/src/perception/src/.keep rename to projects/opendr_ws/src/opendr_perception/src/.keep From 359917911a12b1bf1cdaa16dbd7f3145094c7e5e Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 14:28:43 +0200 Subject: [PATCH 42/57] Applied fixes to yolov5 --- .../scripts/object_detection_2d_yolov5_node.py} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename projects/opendr_ws/src/{perception/scripts/object_detection_2d_yolov5.py => opendr_perception/scripts/object_detection_2d_yolov5_node.py} (100%) diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_yolov5.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov5_node.py similarity index 100% rename from projects/opendr_ws/src/perception/scripts/object_detection_2d_yolov5.py rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov5_node.py From ccacac0839e1b6927704f3fd59a8a2acbaac3415 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 14:40:29 +0200 Subject: [PATCH 43/57] Added "opendr_" to planning package --- 
.../opendr_ws/src/{planning => opendr_planning}/CMakeLists.txt | 2 +- .../planning => opendr_planning/include/opendr_planning}/.keep | 0 .../opendr_ws/src/{planning => opendr_planning}/package.xml | 2 +- .../scripts/end_to_end_planner_node.py} | 0 projects/opendr_ws/src/{planning => opendr_planning}/src/.keep | 0 .../panoptic_segmentation/efficient_ps/algorithm/EfficientPS | 2 +- 6 files changed, 3 insertions(+), 3 deletions(-) rename projects/opendr_ws/src/{planning => opendr_planning}/CMakeLists.txt (87%) rename projects/opendr_ws/src/{planning/include/planning => opendr_planning/include/opendr_planning}/.keep (100%) rename projects/opendr_ws/src/{planning => opendr_planning}/package.xml (95%) rename projects/opendr_ws/src/{planning/scripts/end_to_end_planner.py => opendr_planning/scripts/end_to_end_planner_node.py} (100%) rename projects/opendr_ws/src/{planning => opendr_planning}/src/.keep (100%) diff --git a/projects/opendr_ws/src/planning/CMakeLists.txt b/projects/opendr_ws/src/opendr_planning/CMakeLists.txt similarity index 87% rename from projects/opendr_ws/src/planning/CMakeLists.txt rename to projects/opendr_ws/src/opendr_planning/CMakeLists.txt index edc581316a..f6f9a5900a 100644 --- a/projects/opendr_ws/src/planning/CMakeLists.txt +++ b/projects/opendr_ws/src/opendr_planning/CMakeLists.txt @@ -1,5 +1,5 @@ cmake_minimum_required(VERSION 3.0.2) -project(planning) +project(opendr_planning) find_package(catkin REQUIRED COMPONENTS roscpp diff --git a/projects/opendr_ws/src/planning/include/planning/.keep b/projects/opendr_ws/src/opendr_planning/include/opendr_planning/.keep similarity index 100% rename from projects/opendr_ws/src/planning/include/planning/.keep rename to projects/opendr_ws/src/opendr_planning/include/opendr_planning/.keep diff --git a/projects/opendr_ws/src/planning/package.xml b/projects/opendr_ws/src/opendr_planning/package.xml similarity index 95% rename from projects/opendr_ws/src/planning/package.xml rename to projects/opendr_ws/src/opendr_planning/package.xml index 51a8f55570..59483b0963 100644 --- a/projects/opendr_ws/src/planning/package.xml +++ b/projects/opendr_ws/src/opendr_planning/package.xml @@ -1,6 +1,6 @@ - planning + opendr_planning 1.0.0 OpenDR's ROS planning package OpenDR Project Coordinator diff --git a/projects/opendr_ws/src/planning/scripts/end_to_end_planner.py b/projects/opendr_ws/src/opendr_planning/scripts/end_to_end_planner_node.py similarity index 100% rename from projects/opendr_ws/src/planning/scripts/end_to_end_planner.py rename to projects/opendr_ws/src/opendr_planning/scripts/end_to_end_planner_node.py diff --git a/projects/opendr_ws/src/planning/src/.keep b/projects/opendr_ws/src/opendr_planning/src/.keep similarity index 100% rename from projects/opendr_ws/src/planning/src/.keep rename to projects/opendr_ws/src/opendr_planning/src/.keep diff --git a/src/opendr/perception/panoptic_segmentation/efficient_ps/algorithm/EfficientPS b/src/opendr/perception/panoptic_segmentation/efficient_ps/algorithm/EfficientPS index d03deab54e..e1c92c301b 160000 --- a/src/opendr/perception/panoptic_segmentation/efficient_ps/algorithm/EfficientPS +++ b/src/opendr/perception/panoptic_segmentation/efficient_ps/algorithm/EfficientPS @@ -1 +1 @@ -Subproject commit d03deab54edc5da15ed63318b3d1b14fb9712441 +Subproject commit e1c92c301b8d2a9c582797ab3cad203909f2fa9d From 5baa98b0c8679ad8afd4d064c28bb2884ccf92d2 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 14:45:23 +0200 Subject: [PATCH 44/57] 
Added "opendr_" to bridge package --- .../src/{ros_bridge => opendr_ros_bridge}/CMakeLists.txt | 2 +- .../include/opendr_ros_bridge}/.keep | 0 .../src/{ros_bridge => opendr_ros_bridge}/msg/OpenDRPose2D.msg | 0 .../msg/OpenDRPose2DKeypoint.msg | 0 .../opendr_ws/src/{ros_bridge => opendr_ros_bridge}/package.xml | 2 +- .../opendr_ws/src/{ros_bridge => opendr_ros_bridge}/setup.py | 2 +- .../src/opendr_bridge/__init__.py | 0 .../src/opendr_bridge/bridge.py | 0 8 files changed, 3 insertions(+), 3 deletions(-) rename projects/opendr_ws/src/{ros_bridge => opendr_ros_bridge}/CMakeLists.txt (96%) rename projects/opendr_ws/src/{ros_bridge/include/ros_bridge => opendr_ros_bridge/include/opendr_ros_bridge}/.keep (100%) rename projects/opendr_ws/src/{ros_bridge => opendr_ros_bridge}/msg/OpenDRPose2D.msg (100%) rename projects/opendr_ws/src/{ros_bridge => opendr_ros_bridge}/msg/OpenDRPose2DKeypoint.msg (100%) rename projects/opendr_ws/src/{ros_bridge => opendr_ros_bridge}/package.xml (96%) rename projects/opendr_ws/src/{ros_bridge => opendr_ros_bridge}/setup.py (90%) rename projects/opendr_ws/src/{ros_bridge => opendr_ros_bridge}/src/opendr_bridge/__init__.py (100%) rename projects/opendr_ws/src/{ros_bridge => opendr_ros_bridge}/src/opendr_bridge/bridge.py (100%) diff --git a/projects/opendr_ws/src/ros_bridge/CMakeLists.txt b/projects/opendr_ws/src/opendr_ros_bridge/CMakeLists.txt similarity index 96% rename from projects/opendr_ws/src/ros_bridge/CMakeLists.txt rename to projects/opendr_ws/src/opendr_ros_bridge/CMakeLists.txt index f66066c41f..54147a8b87 100644 --- a/projects/opendr_ws/src/ros_bridge/CMakeLists.txt +++ b/projects/opendr_ws/src/opendr_ros_bridge/CMakeLists.txt @@ -1,5 +1,5 @@ cmake_minimum_required(VERSION 3.0.2) -project(ros_bridge) +project(opendr_ros_bridge) find_package(catkin REQUIRED COMPONENTS roscpp diff --git a/projects/opendr_ws/src/ros_bridge/include/ros_bridge/.keep b/projects/opendr_ws/src/opendr_ros_bridge/include/opendr_ros_bridge/.keep similarity index 100% rename from projects/opendr_ws/src/ros_bridge/include/ros_bridge/.keep rename to projects/opendr_ws/src/opendr_ros_bridge/include/opendr_ros_bridge/.keep diff --git a/projects/opendr_ws/src/ros_bridge/msg/OpenDRPose2D.msg b/projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2D.msg similarity index 100% rename from projects/opendr_ws/src/ros_bridge/msg/OpenDRPose2D.msg rename to projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2D.msg diff --git a/projects/opendr_ws/src/ros_bridge/msg/OpenDRPose2DKeypoint.msg b/projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2DKeypoint.msg similarity index 100% rename from projects/opendr_ws/src/ros_bridge/msg/OpenDRPose2DKeypoint.msg rename to projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2DKeypoint.msg diff --git a/projects/opendr_ws/src/ros_bridge/package.xml b/projects/opendr_ws/src/opendr_ros_bridge/package.xml similarity index 96% rename from projects/opendr_ws/src/ros_bridge/package.xml rename to projects/opendr_ws/src/opendr_ros_bridge/package.xml index e9cb01afb1..3d1927e737 100644 --- a/projects/opendr_ws/src/ros_bridge/package.xml +++ b/projects/opendr_ws/src/opendr_ros_bridge/package.xml @@ -1,6 +1,6 @@ - ros_bridge + opendr_ros_bridge 1.1.1 OpenDR ros_bridge package. This package provides a way to translate ROS messages into OpenDR data types and vice versa. 
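For context, the bridge is typically used inside a node to convert incoming ROS messages before calling an OpenDR learner and to convert the result back. The sketch below assumes the `ROSBridge` class exported by the `opendr_bridge` Python package and its `from_ros_image`/`to_ros_image` helpers, and uses a placeholder output topic; it is only illustrative.
```python
import rospy
from sensor_msgs.msg import Image as ROSImage
from opendr_bridge import ROSBridge  # assumed import, matching the package configured in setup.py

bridge = ROSBridge()
pub = None

def callback(msg):
    # ROS image -> OpenDR Image, run an OpenDR tool here, then convert back to a ROS message
    opendr_image = bridge.from_ros_image(msg)
    annotated = bridge.to_ros_image(opendr_image)
    pub.publish(annotated)

rospy.init_node("bridge_usage_example")
pub = rospy.Publisher("/opendr/image_annotated_example", ROSImage, queue_size=1)  # placeholder topic
rospy.Subscriber("/usb_cam/image_raw", ROSImage, callback)
rospy.spin()
```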
diff --git a/projects/opendr_ws/src/ros_bridge/setup.py b/projects/opendr_ws/src/opendr_ros_bridge/setup.py similarity index 90% rename from projects/opendr_ws/src/ros_bridge/setup.py rename to projects/opendr_ws/src/opendr_ros_bridge/setup.py index b5479915ae..974b64f1c4 100644 --- a/projects/opendr_ws/src/ros_bridge/setup.py +++ b/projects/opendr_ws/src/opendr_ros_bridge/setup.py @@ -15,7 +15,7 @@ from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup d = generate_distutils_setup( - packages=['opendr_bridge'], + packages=['opendr_bridge'], # comment for review: should this change? package_dir={'': 'src'} ) setup(**d) diff --git a/projects/opendr_ws/src/ros_bridge/src/opendr_bridge/__init__.py b/projects/opendr_ws/src/opendr_ros_bridge/src/opendr_bridge/__init__.py similarity index 100% rename from projects/opendr_ws/src/ros_bridge/src/opendr_bridge/__init__.py rename to projects/opendr_ws/src/opendr_ros_bridge/src/opendr_bridge/__init__.py diff --git a/projects/opendr_ws/src/ros_bridge/src/opendr_bridge/bridge.py b/projects/opendr_ws/src/opendr_ros_bridge/src/opendr_bridge/bridge.py similarity index 100% rename from projects/opendr_ws/src/ros_bridge/src/opendr_bridge/bridge.py rename to projects/opendr_ws/src/opendr_ros_bridge/src/opendr_bridge/bridge.py From d009102b95852256a2f5e14e384b04736780ba1a Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 15:06:34 +0200 Subject: [PATCH 45/57] Added "opendr_" to simulation package --- .../src/{simulation => opendr_simulation}/CMakeLists.txt | 2 +- .../opendr_ws/src/{simulation => opendr_simulation}/README.md | 0 .../opendr_ws/src/{simulation => opendr_simulation}/package.xml | 2 +- .../scripts/human_model_generation_client.py | 0 .../scripts/human_model_generation_service.py | 0 .../src/{simulation => opendr_simulation}/srv/Mesh_vc.srv | 0 6 files changed, 2 insertions(+), 2 deletions(-) rename projects/opendr_ws/src/{simulation => opendr_simulation}/CMakeLists.txt (96%) rename projects/opendr_ws/src/{simulation => opendr_simulation}/README.md (100%) rename projects/opendr_ws/src/{simulation => opendr_simulation}/package.xml (96%) rename projects/opendr_ws/src/{simulation => opendr_simulation}/scripts/human_model_generation_client.py (100%) rename projects/opendr_ws/src/{simulation => opendr_simulation}/scripts/human_model_generation_service.py (100%) rename projects/opendr_ws/src/{simulation => opendr_simulation}/srv/Mesh_vc.srv (100%) diff --git a/projects/opendr_ws/src/simulation/CMakeLists.txt b/projects/opendr_ws/src/opendr_simulation/CMakeLists.txt similarity index 96% rename from projects/opendr_ws/src/simulation/CMakeLists.txt rename to projects/opendr_ws/src/opendr_simulation/CMakeLists.txt index 5b25717dee..403bbf6c0e 100644 --- a/projects/opendr_ws/src/simulation/CMakeLists.txt +++ b/projects/opendr_ws/src/opendr_simulation/CMakeLists.txt @@ -1,5 +1,5 @@ cmake_minimum_required(VERSION 3.0.2) -project(simulation) +project(opendr_simulation) find_package(catkin REQUIRED COMPONENTS roscpp diff --git a/projects/opendr_ws/src/simulation/README.md b/projects/opendr_ws/src/opendr_simulation/README.md similarity index 100% rename from projects/opendr_ws/src/simulation/README.md rename to projects/opendr_ws/src/opendr_simulation/README.md diff --git a/projects/opendr_ws/src/simulation/package.xml b/projects/opendr_ws/src/opendr_simulation/package.xml similarity index 96% rename from projects/opendr_ws/src/simulation/package.xml rename to 
projects/opendr_ws/src/opendr_simulation/package.xml index cd9795529b..2affad0524 100644 --- a/projects/opendr_ws/src/simulation/package.xml +++ b/projects/opendr_ws/src/opendr_simulation/package.xml @@ -1,6 +1,6 @@ - simulation + opendr_simulation 1.1.1 OpenDR's ROS nodes for simulation package OpenDR Project Coordinator diff --git a/projects/opendr_ws/src/simulation/scripts/human_model_generation_client.py b/projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_client.py similarity index 100% rename from projects/opendr_ws/src/simulation/scripts/human_model_generation_client.py rename to projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_client.py diff --git a/projects/opendr_ws/src/simulation/scripts/human_model_generation_service.py b/projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_service.py similarity index 100% rename from projects/opendr_ws/src/simulation/scripts/human_model_generation_service.py rename to projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_service.py diff --git a/projects/opendr_ws/src/simulation/srv/Mesh_vc.srv b/projects/opendr_ws/src/opendr_simulation/srv/Mesh_vc.srv similarity index 100% rename from projects/opendr_ws/src/simulation/srv/Mesh_vc.srv rename to projects/opendr_ws/src/opendr_simulation/srv/Mesh_vc.srv From aeeb333c4d25065f74fce1ff5b91d24ed8c078d4 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 15:18:51 +0200 Subject: [PATCH 46/57] Fixed old version of torch in pip_requirements.txt --- dependencies/pip_requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/dependencies/pip_requirements.txt b/dependencies/pip_requirements.txt index 76a12feebd..25f161195f 100644 --- a/dependencies/pip_requirements.txt +++ b/dependencies/pip_requirements.txt @@ -1,6 +1,6 @@ numpy==1.17.5 Cython -torch==1.7.1 +torch==1.9.0 wheel git+https://github.com/cidl-auth/cocoapi@03ee5a19844e253b8365dbbf35c1e5d8ca2e7281#subdirectory=PythonAPI git+https://github.com/cocodataset/panopticapi.git@7bb4655548f98f3fedc07bf37e9040a992b054b0 From d11b710f7deec35d84522c8489dfc1c17a099503 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 30 Nov 2022 15:19:09 +0200 Subject: [PATCH 47/57] Renamed ros bridge package doc --- docs/reference/{rosbridge.md => opendr-ros-bridge.md} | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename docs/reference/{rosbridge.md => opendr-ros-bridge.md} (98%) diff --git a/docs/reference/rosbridge.md b/docs/reference/opendr-ros-bridge.md similarity index 98% rename from docs/reference/rosbridge.md rename to docs/reference/opendr-ros-bridge.md index d0c155e4d7..bb46533c00 100755 --- a/docs/reference/rosbridge.md +++ b/docs/reference/opendr-ros-bridge.md @@ -1,7 +1,7 @@ -## ROSBridge Package +## opendr_ros_bridge package -This *ROSBridge* package provides an interface to convert OpenDR data types and targets into ROS-compatible ones similar to CvBridge. +This *opendr_ros_bridge* package provides an interface to convert OpenDR data types and targets into ROS-compatible ones similar to CvBridge. The *ROSBridge* class provides two methods for each data type X: 1. *from_ros_X()* : converts the ROS equivalent of X into OpenDR data type 2. 
*to_ros_X()* : converts the OpenDR data type into the ROS equivalent of X From 18fc19c8f596926641722d23bdb36ae898d3ef3b Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Thu, 1 Dec 2022 16:13:55 +0200 Subject: [PATCH 48/57] Updated based on new names and some minor modifications --- projects/opendr_ws/README.md | 44 +++--- .../opendr_ws/src/opendr_perception/README.md | 129 ++++++++++-------- 2 files changed, 93 insertions(+), 80 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index 5f1a884311..ad5608207a 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -2,9 +2,9 @@ ## Description This ROS workspace contains ROS nodes and tools developed by OpenDR project. Currently, ROS nodes are compatible with ROS Noetic. -This workspace contains the `ros_bridge` package, which provides message definitions for ROS-compatible OpenDR data types, +This workspace contains the `opendr_ros_bridge` package, which provides message definitions for ROS-compatible OpenDR data types, as well the `ROSBridge` class which provides an interface to convert OpenDR data types and targets into ROS-compatible -ones similar to CvBridge. You can find more information in the corresponding [documentation](../../docs/reference/rosbridge.md). +ones similar to CvBridge. You can find more information in the corresponding [documentation](../../docs/reference/opendr-ros-bridge.md). ## First time setup @@ -55,7 +55,7 @@ For the initial setup you can follow the instructions below: For running OpenDR nodes after you have completed the initial setup, you can skip steps 2. and 5. from the list above. You can also skip building the workspace (step 6.) granted it's been already built and no changes were made to the code inside the workspace, e.g. you modified the source code of a node. #### More information -After completing the setup you can read more information on the [perception package README](src/perception/README.md), where you can find a concise list of prerequisites and helpful notes to view the output of the nodes or optimize their performance. +After completing the setup you can read more information on the [opendr perception package README](src/opendr_perception/README.md), where you can find a concise list of prerequisites and helpful notes to view the output of the nodes or optimize their performance. #### Node documentation You can also take a look at the list of tools [below](#structure) and click on the links to navigate directly to documentation for specific nodes with instructions on how to run and modify them. @@ -66,29 +66,29 @@ You can also take a look at the list of tools [below](#structure) and click on t Currently, apart from tools, opendr_ws contains the following ROS nodes (categorized according to the input they receive): -### [Perception](src/perception/README.md) +### [Perception](src/opendr_perception/README.md) ## RGB input -1. [Pose Estimation](src/perception/README.md#pose-estimation-ros-node) -2. [Fall Detection](src/perception/README.md#fall-detection-ros-node) -3. [Face Detection](src/perception/README.md#face-detection-ros-node) -4. [Face Recognition](src/perception/README.md#face-recognition-ros-node) -5. [2D Object Detection](src/perception/README.md#2d-object-detection-ros-nodes) -6. [2D Object Tracking](src/perception/README.md#2d-object-tracking-ros-nodes) -8. [Panoptic Segmentation](src/perception/README.md#panoptic-segmentation-ros-node) -9. 
[Semantic Segmentation](src/perception/README.md#semantic-segmentation-ros-node) -10. [Landmark-based Facial Expression Recognition](src/perception/README.md#landmark-based-facial-expression-recognition-ros-node) -11. [Skeleton-based Human Action Recognition](src/perception/README.md#skeleton-based-human-action-recognition-ros-node) -12. [Video Human Activity Recognition](src/perception/README.md#video-human-activity-recognition-ros-node) +1. [Pose Estimation](src/opendr_perception/README.md#pose-estimation-ros-node) +2. [Fall Detection](src/opendr_perception/README.md#fall-detection-ros-node) +3. [Face Detection](src/opendr_perception/README.md#face-detection-ros-node) +4. [Face Recognition](src/opendr_perception/README.md#face-recognition-ros-node) +5. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros-nodes) +6. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros-nodes) +8. [Panoptic Segmentation](src/opendr_perception/README.md#panoptic-segmentation-ros-node) +9. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node) +10. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node) +11. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-node) +12. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node) ## RGB + Infrared input -1. [End-to-End Multi-Modal Object Detection (GEM)](src/perception/README.md#2d-object-detection-gem-ros-node) +1. [End-to-End Multi-Modal Object Detection (GEM)](src/opendr_perception/README.md#2d-object-detection-gem-ros-node) ## RGBD input -1. [RGBD Hand Gesture Recognition](src/perception/README.md#rgbd-hand-gesture-recognition-ros-node) +1. [RGBD Hand Gesture Recognition](src/opendr_perception/README.md#rgbd-hand-gesture-recognition-ros-node) ## RGB + Audio input -1. [Audiovisual Emotion Recognition](src/perception/README.md#audiovisual-emotion-recognition-ros-node) +1. [Audiovisual Emotion Recognition](src/opendr_perception/README.md#audiovisual-emotion-recognition-ros-node) ## Audio input -1. [Speech Command Recognition](src/perception/README.md#speech-command-recognition-ros-node) +1. [Speech Command Recognition](src/opendr_perception/README.md#speech-command-recognition-ros-node) ## Point cloud input -1. [3D Object Detection Voxel](src/perception/README.md#3d-object-detection-voxel-ros-node) -2. [3D Object Tracking AB3DMOT](src/perception/README.md#3d-object-tracking-ab3dmot-ros-node) +1. [3D Object Detection Voxel](src/opendr_perception/README.md#3d-object-detection-voxel-ros-node) +2. [3D Object Tracking AB3DMOT](src/opendr_perception/README.md#3d-object-tracking-ab3dmot-ros-node) ## Biosignal input -1. [Heart Anomaly Detection](src/perception/README.md#heart-anomaly-detection-ros-node) +1. [Heart Anomaly Detection](src/opendr_perception/README.md#heart-anomaly-detection-ros-node) diff --git a/projects/opendr_ws/src/opendr_perception/README.md b/projects/opendr_ws/src/opendr_perception/README.md index 8083685cab..5119e11621 100644 --- a/projects/opendr_ws/src/opendr_perception/README.md +++ b/projects/opendr_ws/src/opendr_perception/README.md @@ -1,4 +1,4 @@ -# Perception Package +# OpenDR Perception Package This package contains ROS nodes related to the perception package of OpenDR. @@ -6,7 +6,7 @@ This package contains ROS nodes related to the perception package of OpenDR. 
## Prerequisites -Before you can run any of the toolkit's ROS nodes, some prerequisites need to be fulfilled: +Before you can run any of the package's ROS nodes, some prerequisites need to be fulfilled: 1. First of all, you need to [set up the required packages and build your workspace.](../../README.md#first-time-setup) 2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`, if you haven't already done so. 3. _(Optional for nodes with [RGB input](#rgb-input-nodes))_ @@ -52,9 +52,9 @@ Before you can run any of the toolkit's ROS nodes, some prerequisites need to be ### Pose Estimation ROS Node -You can find the pose estimation ROS node python script [here](./scripts/pose_estimation.py) to inspect the code and modify it as you wish to fit your needs. +You can find the pose estimation ROS node python script [here](./scripts/pose_estimation_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md). -The node publishes the detected poses in [OpenDR's 2D pose message format](../ros_bridge/msg/OpenDRPose2D.msg), which saves a list of [OpenDR's keypoint message format](../ros_bridge/msg/OpenDRPose2DKeypoint.msg). +The node publishes the detected poses in [OpenDR's 2D pose message format](../opendr_ros_bridge/msg/OpenDRPose2D.msg), which saves a list of [OpenDR's keypoint message format](../opendr_ros_bridge/msg/OpenDRPose2DKeypoint.msg). #### Instructions for basic usage: @@ -62,7 +62,7 @@ The node publishes the detected poses in [OpenDR's 2D pose message format](../ro 2. You are then ready to start the pose detection node: ```shell - rosrun perception pose_estimation.py + rosrun perception pose_estimation_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -80,7 +80,7 @@ The node publishes the detected poses in [OpenDR's 2D pose message format](../ro ### Fall Detection ROS Node -You can find the fall detection ROS node python script [here](./scripts/fall_detection.py) to inspect the code and modify it as you wish to fit your needs. +You can find the fall detection ROS node python script [here](./scripts/fall_detection_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [fall detection tool](../../../../src/opendr/perception/fall_detection/fall_detector_learner.py) whose documentation can be found [here](../../../../docs/reference/fall-detection.md). Fall detection uses the toolkit's pose estimation tool internally. @@ -93,7 +93,7 @@ Fall detection uses the toolkit's pose estimation tool internally. 2. You are then ready to start the fall detection node: ```shell - rosrun perception fall_detection.py + rosrun perception fall_detection_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -113,7 +113,7 @@ Fall detection uses the toolkit's pose estimation tool internally. The face detection ROS node supports both the ResNet and MobileNet versions, the latter of which performs masked face detection as well. -You can find the face detection ROS node python script [here](./scripts/face_detection_retinaface.py) to inspect the code and modify it as you wish to fit your needs. 
+You can find the face detection ROS node python script [here](./scripts/face_detection_retinaface_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [face detection tool](../../../../src/opendr/perception/object_detection_2d/retinaface/retinaface_learner.py) whose documentation can be found [here](../../../../docs/reference/face-detection-2d-retinaface.md). #### Instructions for basic usage: @@ -123,7 +123,7 @@ The node makes use of the toolkit's [face detection tool](../../../../src/opendr 2. You are then ready to start the face detection node ```shell - rosrun perception face_detection_retinaface.py + rosrun perception face_detection_retinaface_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -141,7 +141,7 @@ The node makes use of the toolkit's [face detection tool](../../../../src/opendr ### Face Recognition ROS Node -You can find the face recognition ROS node python script [here](./scripts/face_recognition.py) to inspect the code and modify it as you wish to fit your needs. +You can find the face recognition ROS node python script [here](./scripts/face_recognition_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [face recognition tool](../../../../src/opendr/perception/face_recognition/face_recognition_learner.py) whose documentation can be found [here](../../../../docs/reference/face-recognition.md). #### Instructions for basic usage: @@ -151,7 +151,7 @@ The node makes use of the toolkit's [face recognition tool](../../../../src/open 2. You are then ready to start the face recognition node: ```shell - rosrun perception face_recognition.py + rosrun perception face_recognition_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -188,13 +188,17 @@ under `/opendr/face_recognition_id`. ### 2D Object Detection ROS Nodes -For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic object detectors are SSD, YOLOv3, CenterNet and DETR. +For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic object detectors are SSD, YOLOv3, YOLOv5, CenterNet and DETR. -You can find the 2D object detection ROS node python scripts here: [SSD node](./scripts/object_detection_2d_ssd.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3.py), [CenterNet node](./scripts/object_detection_2d_centernet.py) and [DETR node](./scripts/object_detection_2d_detr.py), +You can find the 2D object detection ROS node python scripts here: +[SSD node](./scripts/object_detection_2d_ssd_node.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3_node.py), [YOLOv5 node](./scripts/object_detection_2d_yolov5_node.py), [CenterNet node](./scripts/object_detection_2d_centernet_node.py) and [DETR node](./scripts/object_detection_2d_detr_node.py), where you can inspect the code and modify it as you wish to fit your needs. 
-The nodes makes use of the toolkit's various 2D object detection tools: [SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), -[CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py), whose documentation can be found here: [SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), -[YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [DETR docs](../../../../docs/reference/detr.md). +The nodes makes use of the toolkit's various 2D object detection tools: +[SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), [YOLOv5 tool](../../../../src/opendr/perception/object_detection_2d/yolov5/yolov5_learner.py), +[CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py), +whose documentation can be found here: +[SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), [YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [YOLOv5 docs](../../../../docs/reference/object-detection-2d-yolov5.md), +[CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [DETR docs](../../../../docs/reference/detr.md). #### Instructions for basic usage: @@ -203,7 +207,7 @@ The nodes makes use of the toolkit's various 2D object detection tools: [SSD too 2. You are then ready to start a 2D object detector node: 1. SSD node ```shell - rosrun perception object_detection_2d_ssd.py + rosrun perception object_detection_2d_ssd_node.py ``` The following optional arguments are available for the SSD node: - `--backbone BACKBONE`: Backbone network (default=`vgg16_atrous`) @@ -211,21 +215,28 @@ The nodes makes use of the toolkit's various 2D object detection tools: [SSD too 2. YOLOv3 node ```shell - rosrun perception object_detection_2d_yolov3.py + rosrun perception object_detection_2d_yolov3_node.py ``` The following optional argument is available for the YOLOv3 node: - `--backbone BACKBONE`: Backbone network (default=`darknet53`) - 3. CenterNet node + 3. YOLOv5 node ```shell - rosrun perception object_detection_2d_centernet.py + rosrun perception object_detection_2d_yolov5_node.py + ``` + The following optional argument is available for the YOLOv5 node: + - `--model_name MODEL_NAME`: Network architecture, options are `yolov5s`, `yolov5n`, `yolov5m`, `yolov5l`, `yolov5x`, `yolov5n6`, `yolov5s6`, `yolov5m6`, `yolov5l6`, `custom` (default=`yolov5s`) + + 4. CenterNet node + ```shell + rosrun perception object_detection_2d_centernet_node.py ``` The following optional argument is available for the YOLOv3 node: - `--backbone BACKBONE`: Backbone network (default=`resnet50_v1b`) - 4. DETR node + 5. 
DETR node ```shell - rosrun perception object_detection_2d_detr.py + rosrun perception object_detection_2d_detr_node.py ``` The following optional arguments are available for all nodes above: @@ -246,7 +257,7 @@ The nodes makes use of the toolkit's various 2D object detection tools: [SSD too For 2D object tracking, there two ROS nodes provided, one using Deep Sort and one using FairMOT which use either pretrained models, or custom trained models. The predicted tracking annotations are split into two topics with detections and tracking IDs. Additionally, an annotated image is generated. -You can find the 2D object detection ROS node python scripts here: [Deep Sort node](./scripts/object_tracking_2d_deep_sort.py) and [FairMOT node](./scripts/object_tracking_2d_fair_mot.py) +You can find the 2D object detection ROS node python scripts here: [Deep Sort node](./scripts/object_tracking_2d_deep_sort_node.py) and [FairMOT node](./scripts/object_tracking_2d_fair_mot_node.py) where you can inspect the code and modify it as you wish to fit your needs. The nodes makes use of the toolkit's [object tracking 2D - Deep Sort tool](../../../../src/opendr/perception/object_tracking_2d/deep_sort/object_tracking_2d_deep_sort_learner.py) and [object tracking 2D - FairMOT tool](../../../../src/opendr/perception/object_tracking_2d/fair_mot/object_tracking_2d_fair_mot_learner.py) @@ -259,13 +270,13 @@ whose documentation can be found here: [Deep Sort docs](../../../../docs/referen 2. You are then ready to start a 2D object tracking node: 1. Deep Sort node ```shell - rosrun perception object_tracking_2d_deep_sort.py + rosrun perception object_tracking_2d_deep_sort_node.py ``` The following optional argument is available for the Deep Sort node: - `-n --model_name MODEL_NAME`: name of the trained model (default=`deep_sort`) 2. FairMOT node ```shell - rosrun perception object_tracking_2d_fair_mot.py + rosrun perception object_tracking_2d_fair_mot_node.py ``` The following optional argument is available for the FairMOT node: - `-n --model_name MODEL_NAME`: name of the trained model (default=`fairmot_dla34`) @@ -293,7 +304,7 @@ Make sure to change the default input topic of the tracking node if you are not ### Panoptic Segmentation ROS Node -You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps.py) to inspect the code and modify it as you wish to fit your needs. +You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md) and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md). @@ -304,7 +315,7 @@ and additional information about Efficient PS [here](../../../../src/opendr/perc 2. 
You are then ready to start the panoptic segmentation node: ```shell - rosrun perception panoptic_segmentation_efficient_ps.py + rosrun perception panoptic_segmentation_efficient_ps_node.py ``` The following optional arguments are available: @@ -323,7 +334,7 @@ and additional information about Efficient PS [here](../../../../src/opendr/perc ### Semantic Segmentation ROS Node -You can find the semantic segmentation ROS node python script [here](./scripts/semantic_segmentation_bisenet.py) to inspect the code and modify it as you wish to fit your needs. +You can find the semantic segmentation ROS node python script [here](./scripts/semantic_segmentation_bisenet_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [semantic segmentation tool](../../../../src/opendr/perception/semantic_segmentation/bisenet/bisenet_learner.py) whose documentation can be found [here](../../../../docs/reference/semantic-segmentation.md). #### Instructions for basic usage: @@ -333,7 +344,7 @@ The node makes use of the toolkit's [semantic segmentation tool](../../../../src 2. You are then ready to start the semantic segmentation node: ```shell - rosrun perception semantic_segmentation_bisenet.py + rosrun perception semantic_segmentation_bisenet_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -360,7 +371,7 @@ On the table below you can find the detectable classes and their corresponding I A ROS node for performing landmark-based facial expression recognition using the pretrained model PST-BLN on AFEW, CK+ or Oulu-CASIA datasets. -You can find the landmark-based facial expression recognition ROS node python script [here](./scripts/landmark_based_facial_expression_recognition.py) to inspect the code and modify it as you wish to fit your needs. +You can find the landmark-based facial expression recognition ROS node python script [here](./scripts/landmark_based_facial_expression_recognition_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's landmark-based facial expression recognition tool which can be found [here](../../../../src/opendr/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/progressive_spatio_temporal_bln_learner.py) whose documentation can be found [here](../../../../docs/reference/landmark-based-facial-expression-recognition.md). @@ -371,7 +382,7 @@ whose documentation can be found [here](../../../../docs/reference/landmark-base 2. You are then ready to start the landmark-based facial expression recognition node: ```shell - rosrun perception landmark_based_facial_expression_recognition.py + rosrun perception landmark_based_facial_expression_recognition_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -392,7 +403,7 @@ whose documentation can be found [here](../../../../docs/reference/landmark-base A ROS node for performing skeleton-based human action recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. The human body poses of the image are first extracted by the lightweight OpenPose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized. 
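The flow described above can be sketched as a small ROS callback: per-frame 2D skeletons are extracted by the pose estimator and buffered into a sliding window, which is then classified as an action. The sketch below is not the node's actual implementation; the `pose_learner`/`action_learner` objects stand in for the OpenDR learners linked just below, the window size is an illustrative placeholder, and the `from_ros_image()` call follows the bridge convention described earlier, so treat the exact names and signatures as assumptions.

```python
#!/usr/bin/env python3
# Rough sketch of the pose -> skeleton-window -> action-classification flow.
# `pose_learner` and `action_learner` are placeholders for the OpenDR learners
# linked below; their constructors and exact infer() signatures are not shown here.
from collections import deque

import numpy as np
import rospy
from sensor_msgs.msg import Image as ROSImage
from opendr_bridge import ROSBridge


def make_callback(pose_learner, action_learner, window_size=150):
    bridge = ROSBridge()
    window = deque(maxlen=window_size)  # keypoints of the last N frames

    def callback(msg):
        image = bridge.from_ros_image(msg, encoding='bgr8')  # ROS Image -> OpenDR Image (assumed signature)
        poses = pose_learner.infer(image)                    # 2D skeletons found in this frame
        if poses:
            window.append(np.asarray(poses[0].data))         # keep one person's keypoints
        if len(window) == window.maxlen:                     # classify once the window is full
            prediction = action_learner.infer(np.stack(window))
            rospy.loginfo("recognized action: %s", prediction)

    return callback


if __name__ == '__main__':
    rospy.init_node('skeleton_action_sketch', anonymous=True)
    # Instantiate the real learners (see the links below), then subscribe, e.g.:
    # rospy.Subscriber('/usb_cam/image_raw', ROSImage, make_callback(pose_learner, action_learner))
    rospy.spin()
```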
-You can find the skeleton-based human action recognition ROS node python script [here](./scripts/skeleton_based_action_recognition.py) to inspect the code and modify it as you wish to fit your needs. +You can find the skeleton-based human action recognition ROS node python script [here](./scripts/skeleton_based_action_recognition_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's skeleton-based human action recognition tool which can be found [here for ST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py) and [here for PST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py) whose documentation can be found [here](../../../../docs/reference/skeleton-based-action-recognition.md). @@ -404,7 +415,7 @@ whose documentation can be found [here](../../../../docs/reference/skeleton-base 2. You are then ready to start the skeleton-based human action recognition node: ```shell - rosrun perception skeleton_based_action_recognition.py + rosrun perception skeleton_based_action_recognition_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -426,7 +437,7 @@ whose documentation can be found [here](../../../../docs/reference/skeleton-base A ROS node for performing human activity recognition using either CoX3D or X3D models pretrained on Kinetics400. -You can find the video human activity recognition ROS node python script [here](./scripts/video_activity_recognition.py) to inspect the code and modify it as you wish to fit your needs. +You can find the video human activity recognition ROS node python script [here](./scripts/video_activity_recognition_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's video human activity recognition tools which can be found [here for CoX3D](../../../../src/opendr/perception/activity_recognition/cox3d/cox3d_learner.py) and [here for X3D](../../../../src/opendr/perception/activity_recognition/x3d/x3d_learner.py) whose documentation can be found [here](../../../../docs/reference/activity-recognition.md). @@ -437,7 +448,7 @@ The node makes use of the toolkit's video human activity recognition tools which 2. You are then ready to start the video human activity recognition node: ```shell - rosrun perception video_activity_recognition.py + rosrun perception video_activity_recognition_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -460,7 +471,7 @@ You can find the corresponding IDs regarding activity recognition [here](https:/ ### 2D Object Detection GEM ROS Node -You can find the object detection 2D GEM ROS node python script [here](./scripts/object_detection_2d_gem.py) to inspect the code and modify it as you wish to fit your needs. +You can find the object detection 2D GEM ROS node python script [here](./scripts/object_detection_2d_gem_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [object detection 2D GEM tool](../../../../src/opendr/perception/object_detection_2d/gem/gem_learner.py) whose documentation can be found [here](../../../../docs/reference/gem.md). @@ -480,7 +491,7 @@ whose documentation can be found [here](../../../../docs/reference/gem.md). 4. 
You are then ready to start the object detection 2d GEM node: ```shell - rosrun perception object_detection_2d_gem.py + rosrun perception object_detection_2d_gem_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -505,8 +516,9 @@ whose documentation can be found [here](../../../../docs/reference/gem.md). A ROS node for performing hand gesture recognition using a MobileNetv2 model trained on HANDS dataset. The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2. -You can find the RGBD hand gesture recognition ROS node python script [here](./scripts/rgbd_hand_gesture_recognition.py) to inspect the code and modify it as you wish to fit your needs. -The node makes use of the toolkit's [hand gesture recognition tool](../../../../src/opendr/perception/multimodal_human_centric/rgbd_hand_gesture_learner/rgbd_hand_gesture_learner.py) whose documentation can be found [here](../../../../docs/reference/rgbd-hand-gesture-learner.md). +You can find the RGBD hand gesture recognition ROS node python script [here](./scripts/rgbd_hand_gesture_recognition_node.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [hand gesture recognition tool](../../../../src/opendr/perception/multimodal_human_centric/rgbd_hand_gesture_learner/rgbd_hand_gesture_learner.py) +whose documentation can be found [here](../../../../docs/reference/rgbd-hand-gesture-learner.md). #### Instructions for basic usage: @@ -514,7 +526,7 @@ The node makes use of the toolkit's [hand gesture recognition tool](../../../../ 2. You are then ready to start the hand gesture recognition node: ```shell - rosrun perception rgbd_hand_gesture_recognition.py + rosrun perception rgbd_hand_gesture_recognition_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -533,8 +545,9 @@ The node makes use of the toolkit's [hand gesture recognition tool](../../../../ ### Audiovisual Emotion Recognition ROS Node -You can find the audiovisual emotion recognition ROS node python script [here](./scripts/audiovisual_emotion_recognition.py) to inspect the code and modify it as you wish to fit your needs. -The node makes use of the toolkit's [audiovisual emotion recognition tool](../../../../src/opendr/perception/multimodal_human_centric/audiovisual_emotion_learner/avlearner.py), whose documentation can be found [here](../../../../docs/reference/audiovisual-emotion-recognition-learner.md). +You can find the audiovisual emotion recognition ROS node python script [here](./scripts/audiovisual_emotion_recognition_node.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's [audiovisual emotion recognition tool](../../../../src/opendr/perception/multimodal_human_centric/audiovisual_emotion_learner/avlearner.py), +whose documentation can be found [here](../../../../docs/reference/audiovisual-emotion-recognition-learner.md). #### Instructions for basic usage: @@ -543,7 +556,7 @@ The node makes use of the toolkit's [audiovisual emotion recognition tool](../.. 3. 
You are then ready to start the face detection node ```shell - rosrun perception speech_command_recognition.py + rosrun perception speech_command_recognition_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -565,8 +578,10 @@ The node makes use of the toolkit's [audiovisual emotion recognition tool](../.. A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset. -You can find the speech command recognition ROS node python script [here](./scripts/speech_command_recognition.py) to inspect the code and modify it as you wish to fit your needs. -The node makes use of the toolkit's speech command recognition tools: [EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py) whose documentation can be found here: +You can find the speech command recognition ROS node python script [here](./scripts/speech_command_recognition_node.py) to inspect the code and modify it as you wish to fit your needs. +The node makes use of the toolkit's speech command recognition tools: +[EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py) +whose documentation can be found here: [EdgeSpeechNet docs](../../../../docs/reference/edgespeechnets.md), [MatchboxNet docs](../../../../docs/reference/matchboxnet.md), [Quadratic SelfONN docs](../../../../docs/reference/quadratic-selfonn.md). #### Instructions for basic usage: @@ -576,7 +591,7 @@ The node makes use of the toolkit's speech command recognition tools: [EdgeSpeec 2. You are then ready to start the face detection node ```shell - rosrun perception speech_command_recognition.py + rosrun perception speech_command_recognition_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -602,7 +617,7 @@ EdgeSpeechNets currently does not have a pretrained model available for download A ROS node for performing 3D object detection Voxel using PointPillars or TANet methods with either pretrained models on KITTI dataset, or custom trained models. -You can find the 3D object detection Voxel ROS node python script [here](./scripts/object_detection_3d_voxel.py) to inspect the code and modify it as you wish to fit your needs. +You can find the 3D object detection Voxel ROS node python script [here](./scripts/object_detection_3d_voxel_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [3D object detection Voxel tool](../../../../src/opendr/perception/object_detection_3d/voxel_object_detection_3d/voxel_object_detection_3d_learner.py) whose documentation can be found [here](../../../../docs/reference/voxel-object-detection-3d.md). @@ -613,7 +628,7 @@ whose documentation can be found [here](../../../../docs/reference/voxel-object- 2. 
You are then ready to start the 3D object detection node: ```shell - rosrun perception object_detection_3d_voxel.py + rosrun perception object_detection_3d_voxel_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -634,7 +649,7 @@ A ROS node for performing 3D object tracking using AB3DMOT stateless method. This is a detection-based method, and therefore the 3D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids. The predicted tracking annotations are split into two topics with detections and tracking IDs. -You can find the 3D object tracking AB3DMOT ROS node python script [here](./scripts/object_tracking_3d_ab3dmot.py) to inspect the code and modify it as you wish to fit your needs. +You can find the 3D object tracking AB3DMOT ROS node python script [here](./scripts/object_tracking_3d_ab3dmot_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's [3D object tracking AB3DMOT tool](../../../../src/opendr/perception/object_tracking_3d/ab3dmot/object_tracking_3d_ab3dmot_learner.py) whose documentation can be found [here](../../../../docs/reference/object-tracking-3d-ab3dmot.md). @@ -645,7 +660,7 @@ whose documentation can be found [here](../../../../docs/reference/object-tracki 2. You are then ready to start the 3D object tracking node: ```shell - rosrun perception object_tracking_3d_ab3dmot.py + rosrun perception object_tracking_3d_ab3dmot_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -669,7 +684,7 @@ whose documentation can be found [here](../../../../docs/reference/object-tracki A ROS node for performing heart anomaly (atrial fibrillation) detection from ECG data using GRU or ANBOF models trained on AF dataset. -You can find the heart anomaly detection ROS node python script [here](./scripts/heart_anomaly_detection.py) to inspect the code and modify it as you wish to fit your needs. +You can find the heart anomaly detection ROS node python script [here](./scripts/heart_anomaly_detection_node.py) to inspect the code and modify it as you wish to fit your needs. The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool](../../../../src/opendr/perception/heart_anomaly_detection/attention_neural_bag_of_feature/attention_neural_bag_of_feature_learner.py) and [GRU tool](../../../../src/opendr/perception/heart_anomaly_detection/gated_recurrent_unit/gated_recurrent_unit_learner.py), whose documentation can be found here: [ANBOF docs](../../../../docs/reference/attention-neural-bag-of-feature-learner.md) and [GRU docs](../../../../docs/reference/gated-recurrent-unit-learner.md). @@ -681,7 +696,7 @@ The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool]( 2. You are then ready to start the heart anomaly detection node: ```shell - rosrun perception heart_anomaly_detection.py + rosrun perception heart_anomaly_detection_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -710,10 +725,11 @@ which is intended to be used with the [2D object tracking nodes](#2d-object-trac You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements, to use alongside other nodes and datasets. +You can inspect [the node](./scripts/image_dataset_node.py) and modify it to your needs for other image datasets. 
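At its core, such a dataset node is a timed loop over a `DatasetIterator`: each element's `Image` is converted with the bridge and published, while the `Target` part is ignored. Below is a minimal sketch, assuming only the `to_ros_image()` bridge convention, an iterator that supports `len()` and integer indexing, and illustrative values for the topic name and rate (they are not the node's actual defaults).

```python
#!/usr/bin/env python3
# Minimal sketch of a dataset publisher: iterate over a DatasetIterator of
# (Image, Target) pairs and publish the images at a fixed rate. Only rospy and
# the to_ros_image() convention of opendr_bridge are assumed; the topic name
# and fps below are illustrative placeholders.
import rospy
from sensor_msgs.msg import Image as ROSImage
from opendr_bridge import ROSBridge


def publish_dataset(dataset, topic='/opendr/dataset_image', fps=10):
    bridge = ROSBridge()
    pub = rospy.Publisher(topic, ROSImage, queue_size=10)
    rate = rospy.Rate(fps)
    for i in range(len(dataset)):          # assumes the iterator supports len() and indexing
        if rospy.is_shutdown():
            break
        image, _ = dataset[i]              # (Image, Target) pair; the target is not published here
        pub.publish(bridge.to_ros_image(image, encoding='bgr8'))
        rate.sleep()


if __name__ == '__main__':
    rospy.init_node('image_dataset_sketch', anonymous=True)
    # dataset = <any DatasetIterator returning (Image, Target) elements>
    # publish_dataset(dataset)
```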
To get an image from a dataset on the disk, you can start a `image_dataset.py` node as: ```shell -rosrun perception image_dataset.py +rosrun perception image_dataset_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -721,8 +737,6 @@ The following optional arguments are available: - `-f or --fps FPS`: data fps (default=`10`) - `-d or --dataset_path DATASET_PATH`: path to a dataset (default=`/MOT`) - `-ks or --mot20_subsets_path MOT20_SUBSETS_PATH`: path to MOT20 subsets (default=`../../src/opendr/perception/object_tracking_2d/datasets/splits/nano_mot20.train`) - -You can inspect [the node](./scripts/image_dataset.py) and modify it to your needs for other image datasets. ### Point Cloud Dataset ROS Node @@ -732,10 +746,11 @@ as well as the [3D object tracking node](#3d-object-tracking-ab3dmot-ros-node). You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements, to use alongside other nodes and datasets. +You can inspect [the node](./scripts/point_cloud_dataset_node.py) and modify it to your needs for other point cloud datasets. To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as: ```shell -rosrun perception point_cloud_dataset.py +rosrun perception point_cloud_dataset_node.py ``` The following optional arguments are available: - `-h or --help`: show a help message and exit @@ -743,5 +758,3 @@ The following optional arguments are available: - `-f or --fps FPS`: data fps (default=`10`) - `-d or --dataset_path DATASET_PATH`: path to a dataset, if it does not exist, nano KITTI dataset will be downloaded there (default=`/KITTI/opendr_nano_kitti`) - `-ks or --kitti_subsets_path KITTI_SUBSETS_PATH`: path to KITTI subsets, used only if a KITTI dataset is downloaded (default=`../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets`) - -You can inspect [the node](./scripts/point_cloud_dataset.py) and modify it to your needs for other point cloud datasets. From e2abdaa4bb86a553c06d5579f5fd9d479209e71e Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 7 Dec 2022 11:01:05 +0200 Subject: [PATCH 49/57] Fixed list numbers --- projects/opendr_ws/README.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index ad5608207a..465931b8e5 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -74,11 +74,11 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor 4. [Face Recognition](src/opendr_perception/README.md#face-recognition-ros-node) 5. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros-nodes) 6. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros-nodes) -8. [Panoptic Segmentation](src/opendr_perception/README.md#panoptic-segmentation-ros-node) -9. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node) -10. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node) -11. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-node) -12. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node) +7. 
[Panoptic Segmentation](src/opendr_perception/README.md#panoptic-segmentation-ros-node) +8. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node) +9. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node) +10. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-node) +11. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node) ## RGB + Infrared input 1. [End-to-End Multi-Modal Object Detection (GEM)](src/opendr_perception/README.md#2d-object-detection-gem-ros-node) ## RGBD input From 4df42a6f1e82575f98d9c6cfebc1da61c3cc487d Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 7 Dec 2022 12:07:36 +0200 Subject: [PATCH 50/57] Merge clean-up --- .../src/opendr_ros_bridge/include/opendr_ros_bridge/.keep | 0 projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2D.msg | 0 .../opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2DKeypoint.msg | 0 3 files changed, 0 insertions(+), 0 deletions(-) delete mode 100644 projects/opendr_ws/src/opendr_ros_bridge/include/opendr_ros_bridge/.keep delete mode 100644 projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2D.msg delete mode 100644 projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2DKeypoint.msg diff --git a/projects/opendr_ws/src/opendr_ros_bridge/include/opendr_ros_bridge/.keep b/projects/opendr_ws/src/opendr_ros_bridge/include/opendr_ros_bridge/.keep deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2D.msg b/projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2D.msg deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2DKeypoint.msg b/projects/opendr_ws/src/opendr_ros_bridge/msg/OpenDRPose2DKeypoint.msg deleted file mode 100644 index e69de29bb2..0000000000 From 7f94fc5905d79c791d3204664b2bd2a7cb9e534c Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Wed, 7 Dec 2022 14:08:45 +0200 Subject: [PATCH 51/57] Added a new notes item with a node diagram and explanation --- .../opendr_ws/images/opendr_node_diagram.png | Bin 0 -> 47278 bytes .../opendr_ws/src/opendr_perception/README.md | 12 +++++++++++- 2 files changed, 11 insertions(+), 1 deletion(-) create mode 100644 projects/opendr_ws/images/opendr_node_diagram.png diff --git a/projects/opendr_ws/images/opendr_node_diagram.png b/projects/opendr_ws/images/opendr_node_diagram.png new file mode 100644 index 0000000000000000000000000000000000000000..6948a1f1b9dadb65e1dd43a65d58f591fcd6ccd4 GIT binary patch literal 47278 zcmeEucQ}{r|M#b*rKBhkN-B}0j8Z6~K23XNBqNcPy`_|-l!h{rl}$qSC?QEU30c`P zviI{ktNXs6=kMpA=Qy5UN5}Vj3!m${&htH9<9%LtWX_#fvx<2Yg+f_FJ$YP?LRrp0 zp)7j2f(HL`p& zGqEr*wH#cMD@LJgrBII_Q?L!_Z?dsf(3@XasQVnYe#P~2pLJV8$de{-J3 zzNcO78J(9pAI~&)o$0)3q-ocjTFRue6!-P_kJRf;Hp~C{fx=f7W2vN&Y^`;->O>m2`mD`5r0qJO`#m3q12-!Jlb7EAs61x53JF7?l1{LcdZ zvl5j535);9t3Sc;|6jHE6p?oL&$G`zi`UGKj!{n7Xd_H%dAefNDxXq)o&ggI#Z3OI z7yqlY!+%QbyzOKNYu;eJ=tN^^!QD-p@Y9sE-I0H5bHV$%I%U4BAvgQ6Ud2X@MXvtQ z@)75v<-$*y+bp5D`rrCn-Wl$;v|Ox7(9#;~tK%Dpr{tZj$*?kd!X+D$mbnOD>F^D# z{Vzc$$NukXe+$iyUAv6NdTYL=nrfNbETd3bHc9^N60TbAQSuR=^pkqxGY7Z_;%&#i z871`GKRh5aH$8SYlLvQ^%k)nZndQzJtG(b8HkdOL|IDE+u&U=Vr(xh-du%VLS z-T&V8uJbRd=jUd2P^(^@eZ(1f&|D%$DXtP%s(yKFNt0byAh&GDP0lM*zdA%LI=wDk 
z$zZe>oiQFdb+l{`l~MyCIQ6T-dSPlVJmGqk4H-6U&;T*dLhaBHg-cTd70ax)|K4-( zn9-vJ^q{I+{K}{3a6NX1%N7NkkX8>O@Tg@>DHu$4p)+`iXw+kf$a%pRPSm+rVZBpc z0%?1)IOg8ZQqRqa;LyU?CFqX}+j@AjZ*e}jAiDguZ*7Rw+ce^R^f0Y6e=Q*EDcvVJ zpe9bFeKDfcB@j2nXJ2Bz93+0K?a)t9FQO!yZ37Ka>f~ds_?upd^79I7h1-bKa*6Gz zWGMxPqzfwEy4|$=;~nIRo6or{9)pO3{p>^POq9%a`9jf=5?znwo5z+W_LQn>sq3X| zOFEj><|6eraxH{oO{%{Ywt6J@vp1+&~cqv7~tzV|5HBIoH*Bj662{{IInoZri^$p#T zI?xm6vTWwd{9?goT>rH{WY@)>Xv$~)BPJM(wjg2$b3FMm2xjW}pv5jfZ?0Sc(ab?@ z*nM!HUw#=JJ9|c;A}^A7H7)0nl$zrV;f&8vmz(=%ZjZ|Mj>x=qbMuy8ryBKW68wN#)eWyN;9Xlh+kH9pr6Qs;3`qu=7 zZ7L(Q?CtE<*_AoHh&Z!rdMPn`4W zil&sDM)ncD|IztYwa5f3FQo_+q{H)r)cQorCP?B@?R}ilHGH1+3bM0YpkpQrs%nof z(trD9-ZZ&Z-s+dQfkNW2XUUz_7us48p-!!vT()06-S;3lx?Iknkt3)33N=9O24n2* z&%L`c&2&2nJEd| zU^x~;*m(!ZCeG2O!ap~j8vaW6bE(>AQZ|&yGz)lal05NzHR>&yf^fDJT0H$?7D~1M zhEV1}CfZm2$pwx^$h+IlF6rU*xE*x4m9~)W=dcVsd9Pe?yfIV81dL84Dj34ncb;qcp&kOU)#=^=q-9#1iTtWV#*-Lwvz zIF;3ppdty2+F=x--`1b;5-O^+%!#zJVxok$5mj^ZtDiebET1?vEA^0&;Yh+dZ{t!Z z3G>?3JQ73BbqE4d8gsVJSsI*elIQT$a<0c$Hok~>lJ}%-*U%SBtHX<5Keq(2y<|6d z7iy~wZBy#&>TY7#0c+k<0Q3x^vUIv2^&WdB$K0(7yY%&k(#;P)b$MkyIr8su&hVI` zeO%Z)&^WPgm*V%PgxtE0<@cwYa_uUa#o9Ta_+KqR-@VM*h$aWTkbeDJ!X(jf0Oce7 z4=kL!b%aX>*mA9+(bZrmE3@ior5o=+vTp7BW-!|NHFB!$Q`%06ft+W^& z(Fj45{uDF(_RhCS^*^UpwR(O0w&s??c$L*%y7lU|%&ptw1J~T#v{ZsechM-HJ~7rK z=u%z%5%(Dp`F*bE>#JpJEHzS{(Q#(FD2R#%nTF-D7~V7nzXbkE<;HNo6|Q`L4B-49e@a3%m7e`nkT=Q0SlS6 z37x}Jl3&y|yGW(xuq?ba?k^X}iF_MjCZ9Y{S4(TWx|sI-$FbobZrsjEoz73yHM@Wh z=ipoyMB+`;^-??w&I_|HsuNT1GuT}vw}0KGslc|36t2ZO&!gYM52W3Nmu)Rc%gofr zupb+j&wY2jzIhpW_mF`DgOodJy+-0w56E2Z(nQ(w^~PB`(5{g_XV#X2o=ovd4F;VR zHd&dDVGa9wWA;pPCeu1Vs(jzq*aA@lf9CES77tzQtZ09WJW_p9ayHw^n#`C{J}P>H zJC6~L(kqjzBa<4aXcRgZ;LNA<-uYtZ&lTR7X89lNJovbxzWeCq0WJrBR~JJqa=(>) z^Sx+*$`q-`)TM4Q~>ZS!5tPs%5J%KSrD=`i%__Klpjz-^WF_gT-1Q8Mj@&8;tP zWVp1Yk7cYb8Zon-Z#0+uQ=9($`JR^a){dPFS*v8g%}Hs-7oSm_$*lE*y%ZW(Wtz|l zD<+6lbBZgU^;8Jq^%q`IsBesjuuzDVCi(8@fz4mA!lf%{wG zVVB53;9tPd&MKrAQp2X5iIm+{7#ERSg6372S2rie$FBz^4c9en^U3yxf}E*7y*UE^ z{td}LQ)Xgx_M6s(5~>+Dk6OwKO7sVBKCS#^YI<`O6JIrMFAqbsd^7}A!ep1z2 z`BpI?Ksdn)PY&1%JC(g%SBlX(6RwwSK1`{z@WUBDF%^TMcm_1%-7aa^RY7Z?o|e`n zYUOKa(^}ZkWwm>^npfdx+|d~2s_wlr>Fa#ceU%}ZbuLQ+Np~*sypkW;HaAT`5Vr{b zb>Zd4!zPkQU)ua>HRc!#rL@7)M5)bY1kV;j85aSHJ@{eGR@C%kTqmQf|2ygL70U$j zBLbvkJN5}JFmRL9!J~ee8k+hso1W)NqpS4oiK#^1F&iKL>Ec^>1YCVaKMkPm1RZfWKI%gBVa@y0XNLR%$ zhCz$Y)+pzx4N}&naOKv;Ad&Kl3K?t`gzFV;@!uONs-fe4Tx9SR#P_? 
z2gu^MLc1Axa-h(7SL|nE$aM8d!+K$Ty4EbpfY+K>6&6nI6cQeps6mSWppj8h6+P!L&OM(C4p-(-bsYT_W!`=PM4GO&8o1=KiV0k8@DXsZdwUId1Gbn z3V~!Jw~Jz_eU7;EE~n4TyO#*2Zd<0mw?`?j@A~ucI2`db?$LytNcdiBy{|JG-NG4Y zw+K#Q;YC`p)zx0^rG)Lb@*MY(WX5O|c;?965U+ zFxd=G6n37zB8)J_SquOrCLtamU%?{kB^hX!ANzqfujA}KQY#M6v;^OZah*Sp47&Qx zK6gzbS1lZ2EB2P!z>Sh0X+Lm!;M#Z8;K601L(j~|?6qQjDZ#h^g|IywU^O)5o%-{2 z2SY_*SXUuLWo{o|6GjfSe!U}xd=Uedy(VYwX(HQABFR*)n5!kc=JW7>6mw8m8w<~R zhP`AQ*5vDD4{}|_D8Nzo5IzW>L~-qS7|wq0W{auazE6G@a{+;3t#bcBfoxZE@cV|N zW+o;#xQHg<=_!guix!U>K@jVZXEu^QRf^;SKWE|AT{}BQGF1q947hNq zTiPSih|ffuYzWHC?*W|!NLMHMxafy;%@`Nda5ROiC1ThHm*~}o*t=csY^zWfZjf9n zSr=~i)yKw>me@I|5mUHTMfeyiu+q0!m@^?OSR>G+rFrdn_C9KY5D443<5!=#!aC^D z%>!jp9;cwcM8(^#=ga0N8=5~yUF+AcA2V8m#UwK^2cS9gqAYAWt0@MvOyS)*)@ua^PcFCeL06To z4GnJW-oe(wlC+G$81QnSZ0BGh{mO@>_wo0?fi0CVn-r$0c6ZjQ(Ir-C69vW)v7x=n z;5@3Pb%qVgX{)CiP!BUrihJi=F(!0ivapdAvmGd?#r!#8&?(0I3HpLdfK6p!%0)8s z1}eI6t{-eI366O&Va^hDk?^s-;;pq?C`mYE$oE1xwt6a=e%NYYvgdC*gLdkUc{__1 z3JX<{Fvj8xFVaN6Bc8)`P2M})`jjhi=T;ZLuHxU&G@C7evr06SLMrm)6b z?ZjJFL~;SQ*g2g`9vFc-FS9}@hNdlAv}nAFuSU(8^$ZCTBMk+m1X3x+f$}cdy+)Ij z78gPfg`<1bLMZzZ{(;FJKZwbsN@5m@d{L*OtgNhkS~~!l6|smaKpUOGbQ3T3ZZ7F<1R<*; zG8<%+X0MfJO>B?ujQ`LB5AxL@?oTwx67v>5C}i6BNOa_p|D zn0SaRMVl&BBzN@V*o*tcKt5LFVXVLIJ$uT`o2SRxi5Cw#n7PG_UDe?U2$P!D5Y{#*MQ4XOj z5=eJ-*!EXjycS<&M=GRVkuUF)~{0Ku8={~9aQVbbd2Yv}kB847U4YR+k&eVa3}CkcZ+ zK^wXjp$x-e{DtGpI?X0bw^p@$`#Z+Mn2;n4S;aP#dYoim86;9)l6Eo26y;La%m93P zE{msR4q`K3K=GXVZ(-KOm}%Hf{i@A5WGM^+6t&;Vf@GqoQ)#Rh4!_zvoi$u5 z1o5+&F+diJSD(@NQA}fAKDE;;l<>~?%%*rR>wY|l#J#+IPBq0kQFKI3tB;UHL z)k^RC00S|Idg!1*!XMS{zzbw>0TxwWqZVXd@$ONwc9NW2WLlw?i`YMnMv3i{Toskd z9j4p~)=C_;aQ8-)Q65vAI_4|OA1(ljq_2Ud10(a(3TBCdlcV0^-I=0$yS&?5mnhQ_H zBiS&m*uz`~{rIu%g@5JjP(Ragx7gRv#ANiWHrgtB{cO*>VOW1A@I%7;nSYIPZDuQ( zGwdT8CaoprEKV+#81?>xNbc2LvnW#FH$1;DG%Vd@rRfzqj?0rl_Go2&!`}EI#Gzxt z*HdP;&nM}dnApI;IJMtvE;KbEeX6#Gr2QCEuAew2h#KsiG`ePa`97~}0*aaowUh$d zLzDZZFf^~d#V@PkHfj=|!m*khz%vvTlODAn$*SmMY5lFNMz`qgo%&tI>7&e|%oeAo zrQHHkJUhZ5aq(37+q8|H-kRK!7nLDqKg;U?@Jwq5PtuU{-q#?P$28F^EZmt+w=Ppa)vblbpal@VdW{DWy zpB9nFFRsBF>_o@>U5MBIR9uhE+rZS)7hjEB7kYwi7WK#$eQjQ2duzc%x`VpDZLN1A z8TX}=%iLTfY5z2V`xG_3-_;auzW{-}ZId31NdKRi=l$qkH)L<7XJEXVXSFNuYf>@Y zYl;Wk;vH&=n$LVI$oTT_4rnfnmy57h3=TNoTHW`8rK`+QjPP`lS%o z6}~-Z)~`A3rb)K0%c&LS%G>Q&mGiOoHICXDUy^tH^eqrMt)^?Gqig$PPF`zDe%li- ziVkLlWHn`I#5s5D*pK|tzqzF_`Nv)*sq{*GC;hs|iD7q5etvKIEhu{5w-EIk^AZw$ z%MlaCap?t(rMr^$pp1#Er`=We@4&q0Uw`_TRft9i^#28peHk>X9eO;yF3mR}V24PW ztlw~yR2fC0iHi4KA)x z73X$(L{!){8}aFoHgOr>S=Xm@aaJQ~h$#9KIcx0=ih zGa2Xd{gB59brmUOFuKCVl(s8KJ@W#xz2^ zISoJ2v}RYoxg8q2v-mBo2-_)VZu3`fnH7wFev8la0yIB6=tV#6iT+;u}fe z>i6Bt2SCBb_a^_WQBxkAEXRiNiOBFU5}i z#JZyk5d`ovzW)ASGCG4i<>o{cRL{+qYrpene#RH_w7F3Q)`XyW^0$e|7YeE!G|z#C z?BXrlLaHnH7C%3MP>O2mSfBC0R=l$LQ!dZ%8X#I`M zXtx!Iqhs!x1qvmnk>I?YS75S*#T;`0@TJ=^2JUP}c~`TK2Kx z1#~t63->(AkR7juXGFZ13Bw1N3qOr6pK2)%-+)&Hv{$GNGu=vRa+9%AuSCLP&Cn#3 zkQRu6Ep!KSlFgNuuWxptLDb6-J-Arx*tJ-ZH|A5UiYF*WZiyu_W$q%MoI{&IaKx&l z{1!H+cJD42KHs&IrR{t2xPH?qyMlM{S#GzH1Y zhd)7F>HoDTAvZ%qE{*oni|uu1lVj*Hc}+ECg-4vXKr#x`z^M#L1p;2)xpe7)T+)|0 z*2Rgj2eAAyH;-e~5K8|52O=(YgCzvMBjYy=@w)vGxq8*e#pSLznMw1un@<{YSs)UoG>HO8^dOdl z_(o9ZiThwM{q^1VguF*=jQ!|({5OdT#LoZo24#@5TVkexh(oMJlJ+sP0*O$D%-py^ z2|k|$SX?{nyHKF586U-|2R331XhPUDcLbP!%gb;`%ZiI(Lm1I?W9m|y;z)96v6ra{ z#d}i6h!C<$T|K(9Ik2;LuR$*|4z^cs5Mt3Ucaqz8B-Fpd#tcpU6Kp$n3Q4D5E%leU z?W8jARX7>!6XfYAh6*t~(*ITwHx?`2fLub^Ku+HyxqNRX12jL}tr4UgrMVGPJw@Q0 zy!PUk!+XumIcIJGM2aKXy4(!XtQe0`@ph}BKH6)&Tjagmuy z;%SR;D+W~1+P~*Vh`Bl0`wxaMKgVcqE8`7(nWW8MMbjK~-cbEn5eiSYrYhIukb+L| zh#OR-L(I-3U>lfJ>LKI*Rgi9`8<1v7ui|7C4SyqkoK0>^VHL_U9Rv2v!l~H7;DJ*G 
zAw6Hvnn)13roE-XsiJgNo02fH=Xq~KhZjq@S4=GsMYTvtexRqnm_6H!)l;u{;^@)q zuacH`!3=~!Etzif2OgHXu>}yP8D-~+i0P#hAJ>x zxzZi3e&VGzGJ;qb4I^=76DK~xDrOg~rIXyONDet3M}`=JMH!QMa3{6WzsIbHhq$r* z2(Z6!VmrQ=SFoN(9q?@zPyVYwD*Sw0l8O4;nLv4mQ_~v$_wOS|jK!g4ni%t_QO~s3 zU@3{R#n{`wTnx4My@8t{HMKx?>Q`ok%>l)vHCLZ#Re;nu>`MFJeAx6$wGBBuHv)3d zoE_uZULoe(jwW4HqXI~Cds-~x4S7IfgAM%wspE&whO#@DN>u>WXglA-WExeMHqsD0 zZz9lwU92x}E}XZhdG>{a^v&f0#$-7wvYR~ z_}@`Qv|fq9nqm-%+KLt7Bm+{#oTfid8+|)DmJWhq%nYhtM+XClZ>X#kJ&23Z;*uBI z6@St@Lc+;}@A>a*4~2X!NiMM}`62*wK_YJ3##oI4uKY`jqL-(qV$|4C2bU<6c;$f_ zMN+~goY;lKU+nxTT86?PKxKdG6l;hlycNQ4s!EE^`H_JeAHbwVN(d;JSRO_KDMmht zdB_0!COZ|aEv7T15WHmZHsfz7bP|?y{M-V{R(4}Xt4;5I{q#0%x=dePBrd>#KkUi; zn;(A7ytro2ycs_=FBJ8BGh zpSP-f$YP0PcsF&`rMjn2H^RN?X;(@l3-5%LtqkFw1m8OsA??@VBmeyE7HH z(&g5%hr_NYxwhm6HW0wCYxp|c%9#?4^1`u&D47cMq8KH?iD21oTXa`JU;rkdL=M^L?-W|*CJ) zaEU405HVe3Oq$Nx>O)Kx{0O$cKQ3G}FNWH75(Ssn)y(>(Q&iR%D(<}uAau&sNvvhLEQ4@a!2=%04ba#V^%p#WBCecx{A z$Rvf_4J~)0IK%frU944T=keHSrZp}HzgMXEV<(fxZy*8<_0WJ+*xeHLB`1`2MKL@@ zP)MSWOqiB>96NR+EXCkDNq@Cai7i-(Z@nILY1n05J%N1)Af6@K2>vTj);!ZI<6?PY z>N*Kd`7JU7T{RBvP^+Tbo55?_vH5v>+rkS+362)!V%QR&^Gl{qUAZJM%fa;5;j`Y` z3>y)Gwske=My|*!Hz_D6Nck9Y7UZHZ6KoqnN=ysePI~DlIvd$`;ZL3ZvfBi#L$9W7 z-2MtrxCQm{UDkGXc0=gBVSI`JM`@A`%emG3t;|Wrn9mE3QBs}^4nWL}GAS?xGzq`2 z72;oYn=omuH>~oQJs32Js_lVsz=aY2=twCaFu%$Xza33wxg!W1h@}|9Dqe=T8B7dN zVn!g8NK-S_`+W7n&Kv*;()XKHa%s-N>~AA#hmPhf*X?|nzzF4FJCBc3zOR$JUumqL zU$yfFvezMF$C`z#Z8;yg8dl!|ud0Q8NL8DI?c}RmvzaH2p0Z86v)a$KPTL!_96lijN*N3;K0YJAs1+TN@vx zcOxwfsQr*(rdG-QF#;!IgrME@sY9t!yJI!_r zyaIA905}Ex=?98p>?$-5&~e~>1;V^yVI|49E)}Zan$fNT+3MFPJz&h6Ua{K3nf*|t z@g9~0TNRI8x1pH5;X1I@kvVqt`sGL00$evhu2nO#Am`H3zeJ-^`-lFnT5wl-!@64> z@~eEYphOIxg)ff5kEj0Dvf&TB{sLa6%E;r-=X}gq&D-5JJwq5Wi^Si`OBxSHKy_VG z?$VDR1-IjtjLQLHn~y;~**x!N%j0fgNvGvoRM_yoraL>a)RFcX-E`()*m8D@f1gNh z<7u!4?^5DFFHh$}6?`>;)e_OM5WLd-=C6`Q(D`bG`AU7{&I&SSOWwwk_(8ScS?Pd*MsApo_%_a7&UME z6|lW95H`Wwi0#N8alZF1>|i+X>6E7#uwdbYPSH=m&A+7wgiK=HDiI~!u!E+802Mj3F8dg$dX3#@5^aMaf z+uB+B0vei&c%I_s@6>Y1HAf(6aC&`QSgWmpvtuEX+}>wza2A4NYxc>a-S6Fmj>C%g z=kJ!?dB@(hGTEk&l;meKt;qR21uNa5A?*0;G<&x{`WydI) zZ`)Q$6sFg;|220dB)C95{*}v@FE@Ss^z;$a-4DC-xK8kUy5agW#_s7Md7#ca^g$28 zFdKk?hBH!^lj$u~+_;1SP44!%NWd^;dgJDz*CT>XCszUG5X9M{CHU0~= z#o%cIk(i3gGEsvE)CLl?&(kxQhP(n8^kZd% zos+cWy^9VM!r?<5D&Or}S1HGtc#}eE*PQM`p=J^|5wqJhqte9X@Aj^rqa_=GTts9e zjYpK*mZDrv5aB~;ERxaU&w zklTq=E#d`I{~I{XmU=%WB8e6^G-8C|hZ2vc+ReX6lxLb><ISMa?^$*%;FE*KZ z!V0pUMW^X3wFVEsc@bm8-Tkyu_|{cTnV>_`84l|uIBEVh6zQR{;IsiB&zw4S8HndI zQYbyn4x#7(FOZQgTTwqWD}hs6yeeASUQ+E|V|OwhvNBq+$x`8-}t`l3}!NJoyE%@LbcaiA%siZ z>YsSc-(k%#KGcR1o1BJnP;UJsyRk*+DNVPfUrfwkQhv!XS!9QG6bs_J%B!vBG>PC ziFKN-a%VW-^w%Fc@a6Sv(9p9K%)ew(E~`EcL<{S7G5&?5SG)FswE-GA?0U_*2l zw<@%sH`q8+*}9eVG^|%oDWT7O>w9Er zF_U_UcdWcr`K?OrR^%!p5f{>eE~73UB=zZKHmWs&LP%5_DPpFxfFR2><9UGBAOrzz zRQWYASlH6ia$fRf7Y6lR%2Qo?%_z$OWYL9`A! zOMj94d?<9H$QbzO+lxx5Y2_0=pLV)dzO2*rsVgtm`&d!CS98Roi%RH}R|7yqnyP)2 zQJmq^pi=JVM;?e4PHU4BA{Y6wuC}xGhJDqf+b_Xs?^){bwFxh-gHH@9UgKO{!DqRRwK6vVJ8`hG8#_$5C3`7pEPjq+d~v$4n#jk*Ws*9} zMh}>Za6$#qN2E3M)QZ!P6f~=4A1?VObzs85HJOuKUuyZKb*9~A7Uo*kvtq=_N1f++ z-@3zh`T}W~OtxY6AwMhx0?Z>wXb&CfdbJ(wM042S!MayF%aWD1o-17FI=-}aZ$5v071^?~(G^ z(GiKVtVpB3Ys!)2_5p84D=Zc@F`gyiNy8sFi@GJxw4207OpOJd0g}^o($4g&tGT2& z$|*IWyOYK5cNGIZ`1Bi}we6X+Tura?_`wgFHnai94kRQz(NjvI;`6(f z(sKN=g;EQH8HZuz7U3KoRVB%;;%6ngf2Mxi^GHS_(KH~xA|uf&P7$x|U|f9J@PFT* z_&+z9E^7bhKO~U?_4)5V@e{Qu4gcq#_Z}(({`>F$x4-nu^_nh{a=&@jzL;QyapPH! 
zP)LO(iAeL)Poz9T5X6~PCHITOUr&s$CMVztiNv^@$HJ&r5yl4W1S7_C-8I|gFs$cK zLrZE8P9{E0e8>pV7)h_(Wdz3|iJn~~7J6jn;v^F{`2I%iIX^w83IFF}CH{ey%m1HU z`~RyO{@-8v&!hf-w_N`B>HY81`~M*7{O>aQ-(~c_%jkd3y#HT4h!)p_qgM>=t-g4l QxO#OpP1VR{8~^;j0HQc<00000 literal 0 HcmV?d00001 diff --git a/projects/opendr_ws/src/opendr_perception/README.md b/projects/opendr_ws/src/opendr_perception/README.md index 1da6f00613..6b0ef1dd23 100644 --- a/projects/opendr_ws/src/opendr_perception/README.md +++ b/projects/opendr_ws/src/opendr_perception/README.md @@ -44,7 +44,17 @@ Before you can run any of the package's ROS nodes, some prerequisites need to be When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic. This disables publishing on that topic, forgoing some operations in the node, which might increase its performance. - _An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._ + _An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._ + +- ### An example diagram of OpenDR nodes running + ![Pose Estimation ROS node running diagram](../../images/opendr_node_diagram.png) + On the left, the `usb_cam` node can be seen, which is using a system camera to publish images on the `/usb_cam/image_raw` topic. + In the middle, OpenDR's pose estimation node is running taking as input the published image. By default, the node has its input topic set to `/usb_cam/image_raw`. + To the right the two output topics of the pose estimation node can be seen. + The bottom topic`opendr/image_pose_annotated` is the annotated image which can be easily viewed with `rqt_image_view` as explained earlier. + The other topic `/opendr/poses` is the detection message which contains the detected poses' detailed information. + This message can be easily viewed by running `rostopic echo /opendr/poses` in a terminal with the OpenDR ROS workspace sourced. + ---- From 6ef83b9410df3a4744a281babbc1833a576bb531 Mon Sep 17 00:00:00 2001 From: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com> Date: Wed, 7 Dec 2022 14:15:04 +0200 Subject: [PATCH 52/57] Minor formatting fixes in the diagram section --- projects/opendr_ws/src/opendr_perception/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/projects/opendr_ws/src/opendr_perception/README.md b/projects/opendr_ws/src/opendr_perception/README.md index 6b0ef1dd23..63dc03d050 100644 --- a/projects/opendr_ws/src/opendr_perception/README.md +++ b/projects/opendr_ws/src/opendr_perception/README.md @@ -48,10 +48,10 @@ Before you can run any of the package's ROS nodes, some prerequisites need to be - ### An example diagram of OpenDR nodes running ![Pose Estimation ROS node running diagram](../../images/opendr_node_diagram.png) - On the left, the `usb_cam` node can be seen, which is using a system camera to publish images on the `/usb_cam/image_raw` topic. - In the middle, OpenDR's pose estimation node is running taking as input the published image. By default, the node has its input topic set to `/usb_cam/image_raw`. - To the right the two output topics of the pose estimation node can be seen. 
- The bottom topic`opendr/image_pose_annotated` is the annotated image which can be easily viewed with `rqt_image_view` as explained earlier. + - On the left, the `usb_cam` node can be seen, which is using a system camera to publish images on the `/usb_cam/image_raw` topic. + - In the middle, OpenDR's pose estimation node is running taking as input the published image. By default, the node has its input topic set to `/usb_cam/image_raw`. + - To the right the two output topics of the pose estimation node can be seen. + The bottom topic `opendr/image_pose_annotated` is the annotated image which can be easily viewed with `rqt_image_view` as explained earlier. The other topic `/opendr/poses` is the detection message which contains the detected poses' detailed information. This message can be easily viewed by running `rostopic echo /opendr/poses` in a terminal with the OpenDR ROS workspace sourced. From a75279e3de4355c451b43112d7d3704991f3da23 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Thu, 8 Dec 2022 14:08:06 +0200 Subject: [PATCH 53/57] Fixed step X dot consistency --- projects/opendr_ws/src/opendr_perception/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/projects/opendr_ws/src/opendr_perception/README.md b/projects/opendr_ws/src/opendr_perception/README.md index 63dc03d050..6deb31393d 100644 --- a/projects/opendr_ws/src/opendr_perception/README.md +++ b/projects/opendr_ws/src/opendr_perception/README.md @@ -11,7 +11,7 @@ Before you can run any of the package's ROS nodes, some prerequisites need to be 2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`, if you haven't already done so. 3. _(Optional for nodes with [RGB input](#rgb-input-nodes))_ - For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install, step 5.](../../README.md#first-time-setup)). + For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install, step 5](../../README.md#first-time-setup)). You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: ```shell rosrun usb_cam usb_cam_node @@ -532,7 +532,7 @@ whose documentation can be found [here](../../../../docs/reference/rgbd-hand-ges #### Instructions for basic usage: -1. Start the node responsible for publishing images from an RGBD camera. Remember to modify the input topics using the arguments in step 2. if needed. +1. Start the node responsible for publishing images from an RGBD camera. Remember to modify the input topics using the arguments in step 2 if needed. 2. You are then ready to start the hand gesture recognition node: ```shell @@ -562,7 +562,7 @@ whose documentation can be found [here](../../../../docs/reference/audiovisual-e #### Instructions for basic usage: 1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites). -2. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 2. if needed. +2. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 2 if needed. 3. 
You are then ready to start the face detection node ```shell From 863e1a1ed4bc7815c14db2ed28e3b2f3468a374a Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Thu, 8 Dec 2022 14:08:33 +0200 Subject: [PATCH 54/57] Fixed step X dot consistency --- projects/opendr_ws/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index abb55d5e6a..05d0389202 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -47,15 +47,15 @@ For the initial setup you can follow the instructions below: ``` You are now ready to run an OpenDR ROS node, in this terminal but first the ROS master node needs to be running -8. In a new terminal repeat step 1. and then run: +8. In a new terminal repeat step 1 and then run: ```shell roscore ``` - You can now return to the original terminal from step 7. and run an OpenDR ROS node. More information below. + You can now return to the original terminal from step 7 and run an OpenDR ROS node. More information below. #### After first time setup -For running OpenDR nodes after you have completed the initial setup, you can skip steps 2. and 5. from the list above. -You can also skip building the workspace (step 6.) granted it's been already built and no changes were made to the code inside the workspace, e.g. you modified the source code of a node. +For running OpenDR nodes after you have completed the initial setup, you can skip steps 2 and 5 from the list above. +You can also skip building the workspace (step 6) granted it's been already built and no changes were made to the code inside the workspace, e.g. you modified the source code of a node. #### More information After completing the setup you can read more information on the [opendr perception package README](src/opendr_perception/README.md), where you can find a concise list of prerequisites and helpful notes to view the output of the nodes or optimize their performance. From d2cc4493a8dc96d48eadd328d391a488468ffd00 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Thu, 8 Dec 2022 14:42:32 +0200 Subject: [PATCH 55/57] Added detached & in instructions for running stuff --- projects/opendr_ws/README.md | 6 +++--- projects/opendr_ws/src/opendr_perception/README.md | 12 ++++++------ 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index 05d0389202..cb9db053fc 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -47,11 +47,11 @@ For the initial setup you can follow the instructions below: ``` You are now ready to run an OpenDR ROS node, in this terminal but first the ROS master node needs to be running -8. In a new terminal repeat step 1 and then run: +8. Before continuing, you need to start the ROS master node by running: ```shell - roscore + roscore & ``` - You can now return to the original terminal from step 7 and run an OpenDR ROS node. More information below. + You can now run an OpenDR ROS node. More information below. #### After first time setup For running OpenDR nodes after you have completed the initial setup, you can skip steps 2 and 5 from the list above. 
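Taken together with the second file touched by this patch (which adds the same detached `&` to the webcam and image viewer commands, shown next), the revised instructions boil down to a single-terminal session roughly like the sketch below. This is only a sketch: the commented-out last line is a hypothetical placeholder for whichever OpenDR node is actually being launched, since the exact `rosrun` command differs per node.

```shell
# Rough single-terminal session under the revised instructions
# (assumes the first-time setup has already been completed and the workspace is built).
source /opt/ros/noetic/setup.bash        # ROS tools
cd ~/opendr && source bin/activate.sh    # OpenDR environment
cd projects/opendr_ws
source devel/setup.bash                  # the built workspace

roscore &                                # ROS master, detached
rosrun usb_cam usb_cam_node &            # webcam publisher on /usb_cam/image_raw, detached
rosrun rqt_image_view rqt_image_view &   # viewer for annotated output images, detached

# rosrun <package> <node_script>.py      # placeholder: the OpenDR node to run (see the per-node sections)
```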
diff --git a/projects/opendr_ws/src/opendr_perception/README.md b/projects/opendr_ws/src/opendr_perception/README.md index 6deb31393d..2fc7248a2f 100644 --- a/projects/opendr_ws/src/opendr_perception/README.md +++ b/projects/opendr_ws/src/opendr_perception/README.md @@ -8,13 +8,13 @@ This package contains ROS nodes related to the perception package of OpenDR. Before you can run any of the package's ROS nodes, some prerequisites need to be fulfilled: 1. First of all, you need to [set up the required packages, build and source your workspace.](../../README.md#first-time-setup) -2. Start roscore by opening a new terminal where ROS is sourced properly (`source /opt/ros/noetic/setup.bash`) and run `roscore`, if you haven't already done so. +2. Start roscore by running `roscore &`, if you haven't already done so. 3. _(Optional for nodes with [RGB input](#rgb-input-nodes))_ For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` ([instructions to install, step 5](../../README.md#first-time-setup)). - You can run the webcam node in a new terminal inside `opendr_ws` and with the workspace sourced using: + You can run the webcam node in the terminal with the workspace sourced using: ```shell - rosrun usb_cam usb_cam_node + rosrun usb_cam usb_cam_node & ``` By default, the USB cam node publishes images on `/usb_cam/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument. As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.** @@ -24,15 +24,15 @@ Before you can run any of the package's ROS nodes, some prerequisites need to be ## Notes - ### Display output images with rqt_image_view - For any node that outputs images, `rqt_image_view` can be used to display them by running the following command in a new terminal: + For any node that outputs images, `rqt_image_view` can be used to display them by running the following command: ```shell - rosrun rqt_image_view rqt_image_view + rosrun rqt_image_view rqt_image_view & ``` A window will appear, where the topic that you want to view can be selected from the drop-down menu on the top-left area of the window. Refer to each node's documentation below to find out the default output image topic, where applicable, and select it on the drop-down menu of rqt_image_view. - ### Echo node output - All OpenDR nodes publish some kind of detection message, which can be echoed by running the following command in a new terminal: + All OpenDR nodes publish some kind of detection message, which can be echoed by running the following command: ```shell rostopic echo /opendr/topic_name ``` From 366a7580255f59bf1bad48764f873601b55e128e Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Thu, 8 Dec 2022 14:49:11 +0200 Subject: [PATCH 56/57] Removed apt ros package installation instructions --- projects/opendr_ws/README.md | 20 +++++++++----------- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md index cb9db053fc..ac84fc785b 100755 --- a/projects/opendr_ws/README.md +++ b/projects/opendr_ws/README.md @@ -1,7 +1,9 @@ # opendr_ws ## Description -This ROS workspace contains ROS nodes and tools developed by OpenDR project. Currently, ROS nodes are compatible with ROS Noetic. 
+This ROS workspace contains ROS nodes and tools developed by OpenDR project. +Currently, ROS nodes are compatible with **ROS Melodic for Ubuntu 18.04** and **ROS Noetic for Ubuntu 20.04**. +The instructions that follow target ROS Noetic, but can easily be modified for ROS Melodic by swapping out the version name. This workspace contains the `opendr_bridge` package, which provides message definitions for ROS-compatible OpenDR data types, as well the `ROSBridge` class which provides an interface to convert OpenDR data types and targets into ROS-compatible ones similar to CvBridge. You can find more information in the corresponding [documentation](../../docs/reference/opendr-ros-bridge.md). @@ -17,37 +19,33 @@ For the initial setup you can follow the instructions below: source /opt/ros/noetic/setup.bash ``` _For convenience, you can add this line to your `.bashrc` so you don't have to source the tools each time you open a terminal window._ -2. Install the following dependencies, required in order to use the OpenDR ROS tools: - ```shell - sudo apt-get install ros-noetic-vision-msgs ros-noetic-geometry-msgs ros-noetic-sensor-msgs ros-noetic-audio-common-msgs - ``` -3. Navigate to your OpenDR home directory (`~/opendr`) and activate the OpenDR environment using: +2. Navigate to your OpenDR home directory (`~/opendr`) and activate the OpenDR environment using: ```shell source bin/activate.sh ``` You need to do this step every time before running an OpenDR node. -4. Navigate into the OpenDR ROS workspace:: +3. Navigate into the OpenDR ROS workspace:: ```shell cd projects/opendr_ws ``` -5. (Optional) Most nodes with visual input are set up to run with a default USB camera. If you want to use it install the corresponding package and its dependencies: +4. (Optional) Most nodes with visual input are set up to run with a default USB camera. If you want to use it install the corresponding package and its dependencies: ```shell cd src git clone https://github.com/ros-drivers/usb_cam cd .. rosdep install --from-paths src/ --ignore-src ``` -6. Build the packages inside the workspace: +5. Build the packages inside the workspace: ```shell catkin_make ``` -7. Source the workspace: +6. Source the workspace: ```shell source devel/setup.bash ``` You are now ready to run an OpenDR ROS node, in this terminal but first the ROS master node needs to be running -8. Before continuing, you need to start the ROS master node by running: +7. Before continuing, you need to start the ROS master node by running: ```shell roscore & ``` From efcd727b4795c656f03062bc7f082e0f8a998e70 Mon Sep 17 00:00:00 2001 From: tsampazk <27914645+tsampazk@users.noreply.github.com> Date: Thu, 8 Dec 2022 14:49:37 +0200 Subject: [PATCH 57/57] Added some additional required ros packages and made ROS version a variable --- bin/install.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/bin/install.sh b/bin/install.sh index d6a75fe65a..ae14e6d335 100755 --- a/bin/install.sh +++ b/bin/install.sh @@ -40,7 +40,7 @@ make install_compilation_dependencies make install_runtime_dependencies # Install additional ROS packages -sudo apt-get install ros-noetic-vision-msgs ros-noetic-audio-common-msgs +sudo apt-get install ros-$ROS_DISTRO-vision-msgs ros-$ROS_DISTRO-geometry-msgs ros-$ROS_DISTRO-sensor-msgs ros-$ROS_DISTRO-audio-common-msgss # If working on GPU install GPU dependencies as needed if [[ "${OPENDR_DEVICE}" == "gpu" ]]; then
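The point of replacing the hard-coded `ros-noetic-*` package names with `ros-$ROS_DISTRO-*` is that `ROS_DISTRO` is set automatically when the ROS setup script is sourced, so the same install line works on both supported distributions. A minimal sketch, assuming ROS has been sourced first and reusing the package list from the workspace setup instructions:

```shell
# On Ubuntu 20.04 this sources Noetic; on Ubuntu 18.04 the path would be /opt/ros/melodic/setup.bash.
source /opt/ros/noetic/setup.bash
echo "$ROS_DISTRO"    # sanity check: prints "noetic" (or "melodic")

# The same install line now resolves to the right packages for the active distribution.
sudo apt-get install ros-$ROS_DISTRO-vision-msgs ros-$ROS_DISTRO-geometry-msgs \
    ros-$ROS_DISTRO-sensor-msgs ros-$ROS_DISTRO-audio-common-msgs
```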