ROS2 Node for EfficientLPS (opendr-eu#404)
* feat(efficientLPS): Re-implemented the ROS2 nodes for EfficientLPS and PCL2 publisher

* fix(efficientLPS): Small argument variable naming fix

* docs(efficientLPS): Doc update for ROS2 node list

* feat(efficientLPS): ROS2 Node added test data support and updated docs

* docs(efficientLPS): Fix small mistakes in the docs

* docs(efficientLPS): Fix small typos in the docs

* style(efficientLPS): Fix pep8

* style(efficientLPS): Fix pep8

* docs(efficientLPS): Fix license dates

* refactor(efficientLPS): Rename the output topic of EfficientLPS ROS2 node

* feat(efficientLPS): Small function/library call fixes on ROS2 nodes

* Added missing dependency for new efficient lps node

* feat(Continual-SLAM): Make pointcloud2 publisher publish a proper error message

---------

Co-authored-by: Niclas <49001036+vniclas@users.noreply.github.com>
Co-authored-by: tsampazk <27914645+tsampazk@users.noreply.github.com>
Co-authored-by: ad-daniel <44834743+ad-daniel@users.noreply.github.com>
4 people authored and Luca Marchionni committed Feb 16, 2023
1 parent 3182a65 commit e6359ed
Showing 7 changed files with 427 additions and 12 deletions.
2 changes: 1 addition & 1 deletion bin/install.sh
@@ -48,7 +48,7 @@ fi
# ROS2 package dependencies
if [[ ${ROS_DISTRO} == "foxy" || ${ROS_DISTRO} == "humble" ]]; then
echo "Installing ROS2 dependencies"
sudo apt-get -y install python3-lark ros-$ROS_DISTRO-usb-cam ros-$ROS_DISTRO-webots-ros2 python3-colcon-common-extensions ros-$ROS_DISTRO-vision-msgs
sudo apt-get -y install python3-lark ros-$ROS_DISTRO-usb-cam ros-$ROS_DISTRO-webots-ros2 python3-colcon-common-extensions ros-$ROS_DISTRO-vision-msgs ros-$ROS_DISTRO-sensor-msgs-py
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ros/$ROS_DISTRO/lib/controller
cd $OPENDR_HOME/projects/opendr_ws_2/
git clone --depth 1 --branch ros2 https://github.com/ros-drivers/audio_common src/audio_common
3 changes: 2 additions & 1 deletion projects/opendr_ws_2/README.md
@@ -67,7 +67,7 @@ Currently, apart from tools, opendr_ws_2 contains the following ROS2 nodes (cate
6. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros2-nodes)
7. [2D Single Object Tracking](src/opendr_perception/README.md#2d-single-object-tracking-ros2-node)
8. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros2-nodes)
9. [Panoptic Segmentation](src/opendr_perception/README.md#panoptic-segmentation-ros2-node)
9. [Vision Based Panoptic Segmentation](src/opendr_perception/README.md#vision-based-panoptic-segmentation-ros2-node)
10. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros2-node)
11. [Binary High Resolution](src/opendr_perception/README.md#binary-high-resolution-ros2-node)
12. [Image-based Facial Emotion Estimation](src/opendr_perception/README.md#image-based-facial-emotion-estimation-ros2-node)
@@ -86,5 +86,6 @@ Currently, apart from tools, opendr_ws_2 contains the following ROS2 nodes (cate
## Point cloud input
1. [3D Object Detection Voxel](src/opendr_perception/README.md#3d-object-detection-voxel-ros2-node)
2. [3D Object Tracking AB3DMOT](src/opendr_perception/README.md#3d-object-tracking-ab3dmot-ros2-node)
3. [LiDAR Based Panoptic Segmentation](src/opendr_perception/README.md#lidar-based-panoptic-segmentation-ros2-node)
## Biosignal input
1. [Heart Anomaly Detection](src/opendr_perception/README.md#heart-anomaly-detection-ros2-node)
78 changes: 77 additions & 1 deletion projects/opendr_ws_2/src/opendr_bridge/opendr_bridge/bridge.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import struct
import numpy as np
from opendr.engine.data import Image, PointCloud, Timeseries
from opendr.engine.target import (
@@ -20,7 +21,10 @@
)
from cv_bridge import CvBridge
from std_msgs.msg import String, ColorRGBA, Header
from sensor_msgs.msg import Image as ImageMsg, PointCloud as PointCloudMsg, ChannelFloat32 as ChannelFloat32Msg
from sensor_msgs.msg import (
    Image as ImageMsg, PointCloud as PointCloudMsg, ChannelFloat32 as ChannelFloat32Msg,
    PointField as PointFieldMsg, PointCloud2 as PointCloud2Msg
)
from vision_msgs.msg import (
    Detection2DArray, Detection2D, BoundingBox2D, ObjectHypothesisWithPose,
    Detection3D, Detection3DArray, BoundingBox3D as BoundingBox3DMsg,
@@ -33,6 +37,7 @@
    Point
)
from opendr_interface.msg import OpenDRPose2D, OpenDRPose2DKeypoint, OpenDRPose3D, OpenDRPose3DKeypoint
from sensor_msgs_py import point_cloud2 as pc2


class ROS2Bridge:
@@ -362,6 +367,77 @@ def to_ros_point_cloud(self, point_cloud, time_stamp):

        return ros_point_cloud

    def from_ros_point_cloud2(self, point_cloud: PointCloud2Msg):
        """
        Converts a ROS PointCloud2 message into an OpenDR PointCloud
        :param point_cloud: ROS PointCloud2 to be converted
        :type point_cloud: sensor_msgs.msg.PointCloud2
        :return: OpenDR PointCloud
        :rtype: engine.data.PointCloud
        """
        # Read every field present in the message (x, y, z plus any extra channels)
        points = pc2.read_points_list(point_cloud, field_names=[f.name for f in point_cloud.fields])
        result = PointCloud(points)

        return result

    def to_ros_point_cloud2(self, point_cloud: PointCloud, timestamp, channels: str = None):
        """
        Converts an OpenDR PointCloud into a ROS PointCloud2 message
        :param point_cloud: OpenDR PointCloud to be converted
        :type point_cloud: engine.data.PointCloud
        :param timestamp: timestamp used for the message header
        :type timestamp: builtin_interfaces.msg.Time
        :param channels: channels to be included in the PointCloud2 message besides x, y, z.
            Available channel names are ["rgb", "rgba"]; any other value packs the remaining
            columns as generic FLOAT32 channels
        :type channels: str
        :return: ROS PointCloud2
        :rtype: sensor_msgs.msg.PointCloud2
        """
        header = Header()
        header.stamp = timestamp
        header.frame_id = "base_link"

        # Number of channels beyond the x, y, z coordinates
        channel_count = point_cloud.data.shape[-1] - 3

        fields = [PointFieldMsg(name="x", offset=0, datatype=PointFieldMsg.FLOAT32, count=1),
                  PointFieldMsg(name="y", offset=4, datatype=PointFieldMsg.FLOAT32, count=1),
                  PointFieldMsg(name="z", offset=8, datatype=PointFieldMsg.FLOAT32, count=1)]
        if channels == 'rgb' or channels == 'rgba':
            fields.append(PointFieldMsg(name="rgba", offset=12, datatype=PointFieldMsg.UINT32, count=1))
        else:
            for i in range(channel_count):
                fields.append(PointFieldMsg(name="channel_" + str(i),
                                            offset=12 + i * 4,
                                            datatype=PointFieldMsg.FLOAT32,
                                            count=1))

        points = []

        for point in point_cloud.data:
            pt = [point[0], point[1], point[2]]
            for channel in range(channel_count):
                if channels == 'rgb' or channels == 'rgba':
                    r = int(point[3])
                    g = int(point[4])
                    b = int(point[5])

                    if channels == 'rgb':
                        a = 255
                    else:
                        a = int(point[6])

                    # Pack the four 8-bit color values into a single UINT32 field
                    rgba = struct.unpack('I', struct.pack('BBBB', b, g, r, a))[0]
                    pt.append(rgba)
                    break
                else:
                    pt.append(point[3 + channel])
            points.append(pt)

        ros_point_cloud2 = pc2.create_cloud(header, fields, points)

        return ros_point_cloud2

    def from_ros_boxes_3d(self, ros_boxes_3d):
        """
        Converts a ROS2 Detection3DArray message into an OpenDR BoundingBox3D object.
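To illustrate how the two new conversion methods fit together, here is a minimal round-trip sketch; the node name, topic name, and random test data are illustrative assumptions, not part of this commit:

```python
# Hypothetical sketch: publish an OpenDR PointCloud as PointCloud2 and convert it back.
# Node and topic names are illustrative only.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from opendr.engine.data import PointCloud
from opendr_bridge import ROS2Bridge


class BridgeDemoNode(Node):
    def __init__(self):
        super().__init__('bridge_demo')
        self.bridge = ROS2Bridge()
        self.pub = self.create_publisher(PointCloud2, '/demo/point_cloud2', 10)
        self.sub = self.create_subscription(PointCloud2, '/demo/point_cloud2', self.callback, 10)
        self.timer = self.create_timer(1.0, self.publish_cloud)

    def publish_cloud(self):
        # Four columns per point: x, y, z and one extra scalar channel
        data = np.random.rand(1024, 4).astype(np.float32)
        msg = self.bridge.to_ros_point_cloud2(PointCloud(data), self.get_clock().now().to_msg())
        self.pub.publish(msg)

    def callback(self, msg):
        cloud = self.bridge.from_ros_point_cloud2(msg)  # back to an OpenDR PointCloud
        self.get_logger().info('Received cloud with %d points' % len(cloud.data))


def main():
    rclpy.init()
    rclpy.spin(BridgeDemoNode())


if __name__ == '__main__':
    main()
```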
68 changes: 60 additions & 8 deletions projects/opendr_ws_2/src/opendr_perception/README.md
@@ -390,11 +390,13 @@ whose documentation can be found here: [Deep Sort docs](../../../../docs/referen
An [image dataset node](#image-dataset-ros2-node) is also provided to be used along these nodes.
Make sure to change the default input topic of the tracking node if you are not using the USB cam node.

### Panoptic Segmentation ROS2 Node
### Vision Based Panoptic Segmentation ROS2 Node

You can find the panoptic segmentation ROS2 node python script [here](./opendr_perception/panoptic_segmentation_efficient_ps_node.py) to inspect the code and modify it as you wish to fit your needs.
A ROS node for performing panoptic segmentation on a specified RGB image stream using the [EfficientPS](../../../../src/opendr/perception/panoptic_segmentation/README.md#efficientps-efficient-panoptic-segmentation) network.

You can find the vision based panoptic segmentation (EfficientPS) ROS node python script [here](./opendr_perception/panoptic_segmentation_efficient_ps_node.py) to inspect the code and modify it as you wish to fit your needs.
The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md)
and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md).
and additional information about EfficientPS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md).
#### Instructions for basic usage:
@@ -407,12 +409,12 @@ and additional information about Efficient PS [here](../../../../src/opendr/perc
```
The following optional arguments are available:
- `-h or --help`: show a help message and exit
- `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC` : listen to RGB images on this topic (default=`/image_raw`)
- `-oh --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: publish the semantic and instance maps on this topic as `OUTPUT_HEATMAP_TOPIC/semantic` and `OUTPUT_HEATMAP_TOPIC/instance`, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic`)
- `-ov --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: publish the panoptic segmentation map as an RGB image on this topic or a more detailed overview if using the `--detailed_visualization` flag, `None` to stop the node from publishing on this topic (default=`opendr/panoptic/rgb_visualization`)
- `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated)
- `-h, --help`: show a help message and exit
- `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC` : listen to RGB images on this topic (default=`/usb_cam/image_raw`)
- `--checkpoint CHECKPOINT` : download pretrained models [cityscapes, kitti] or load from the provided path (default=`cityscapes`)
- `-oh or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: publish the semantic and instance maps on this topic as `OUTPUT_HEATMAP_TOPIC/semantic` and `OUTPUT_HEATMAP_TOPIC/instance` (default=`/opendr/panoptic`)
- `-ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: publish the panoptic segmentation map as an RGB image on `OUTPUT_RGB_IMAGE_TOPIC` or a more detailed overview if using the `--detailed_visualization` flag (default=`/opendr/panoptic/rgb_visualization`)
- `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated)
3. Default output topics:
- Output images: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`, `/opendr/panoptic/rgb_visualization`
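For reference, a combined invocation might look as follows; the executable name is hidden in the collapsed part of this diff and is assumed here by analogy with the other perception nodes:
```shell
# Assumed executable name; flags as documented above
ros2 run opendr_perception panoptic_segmentation_efficient_ps -i /usb_cam/image_raw --checkpoint cityscapes --detailed_visualization
```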
@@ -855,6 +857,37 @@ whose documentation can be found [here](../../../../docs/reference/object-tracki

For viewing the output, refer to the [notes above.](#notes)


### LiDAR Based Panoptic Segmentation ROS2 Node
A ROS node for performing panoptic segmentation on a specified pointcloud stream using the [EfficientLPS](../../../../src/opendr/perception/panoptic_segmentation/README.md#efficientlps-efficient-lidar-panoptic-segmentation) network.

You can find the LiDAR based panoptic segmentation ROS node python script [here](./opendr_perception/panoptic_segmentation_efficient_lps_node.py). You can also find the point cloud 2 publisher ROS node python script [here](./opendr_perception/point_cloud_2_publisher_node.py), with more explanation [here](#point-cloud-2-publisher-ros2-node). You can inspect the code and modify it as you wish to fit your needs.
The EfficientLPS node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_lps/efficient_lps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-lps.md)
and additional information about EfficientLPS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md).
#### Instructions for basic usage:
1. First, one needs to download the SemanticKITTI dataset into POINTCLOUD_LOCATION, as described in the [Panoptic Segmentation Datasets](../../../../src/opendr/perception/panoptic_segmentation/datasets/README.md). Then, once the SPLIT type is specified (train, valid or test; default: valid), the **Point Cloud 2 Publisher** can be started using the following line:
- ```shell
ros2 run opendr_perception point_cloud_2_publisher -d POINTCLOUD_LOCATION -s SPLIT
```
2. After starting the **Point Cloud 2 Publisher**, one can start the **EfficientLPS node** using the following line:
- ```shell
ros2 run opendr_perception panoptic_segmentation_efficient_lps /opendr/dataset_point_cloud2
```
The following optional arguments are available:
- `-h, --help`: show a help message and exit
- `-i or --input_point_cloud_2_topic INPUT_POINTCLOUD2_TOPIC`: listen to PointCloud2 messages on this topic, provided by either the point cloud 2 publisher node or any other node that publishes PointCloud2 messages (default=`/opendr/dataset_point_cloud2`)
- `-c or --checkpoint CHECKPOINT` : download pretrained models [semantickitti] or load from the provided path (default=`semantickitti`)
- `-o or --output_heatmap_pointcloud_topic OUTPUT_HEATMAP_POINTCLOUD_TOPIC`: publish the 3D heatmap pointcloud on `OUTPUT_HEATMAP_POINTCLOUD_TOPIC` (default=`/opendr/panoptic`)
3. Default output topics:
- Detection messages: `/opendr/panoptic`
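As a quick sanity check (generic ROS2 tooling, not specific to this commit), one can confirm that the segmented point cloud is being published:
```shell
ros2 topic hz /opendr/panoptic              # measure the publishing rate
ros2 topic echo /opendr/panoptic --no-arr   # print message metadata without the data arrays
```
The `/opendr/panoptic` output can also be visualized by adding a PointCloud2 display in RViz2.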
----
## Biosignal input
@@ -936,3 +969,22 @@ The following optional arguments are available:
- `-f or --fps FPS`: data fps (default=`10`)
- `-d or --dataset_path DATASET_PATH`: path to a dataset, if it does not exist, nano KITTI dataset will be downloaded there (default=`/KITTI/opendr_nano_kitti`)
- `-ks or --kitti_subsets_path KITTI_SUBSETS_PATH`: path to KITTI subsets, used only if a KITTI dataset is downloaded (default=`../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets`)

### Point Cloud 2 Publisher ROS2 Node

The point cloud 2 dataset publisher publishes PointCloud2 messages from the pre-downloaded SemanticKITTI dataset. It is currently used by the [LiDAR Based Panoptic Segmentation ROS2 Node](#lidar-based-panoptic-segmentation-ros2-node).

You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements,
to use alongside other nodes and datasets.
You can inspect [the node](./opendr_perception/point_cloud_2_publisher_node.py) and modify it to your needs for other point cloud datasets.

To get a point cloud from a dataset on disk, you can start a `point_cloud_2_publisher_node.py` node as:
```shell
ros2 run opendr_perception point_cloud_2_publisher
```
The following optional arguments are available:
- `-h or --help`: show a help message and exit
- `-d or --dataset_path DATASET_PATH`: path of the SemanticKITTI dataset from which to publish the point cloud 2 messages (default=`./datasets/semantickitti`)
- `-s or --split SPLIT`: split of the dataset to use; only `train`, `valid` and `test` are available (default=`valid`)
- `-o or --output_point_cloud_2_topic OUTPUT_POINT_CLOUD_2_TOPIC`: topic name to publish the data (default=`/opendr/dataset_point_cloud2`)
- `-t or --test_data`: add this argument if you want to test this node only with the test data available on our server
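Since the node accepts any `DatasetIterator` yielding `(PointCloud, Target)` elements, a custom dataset can be sketched as below; the publisher's class name and constructor signature are assumptions for illustration, so check the node script for the actual interface:

```python
# Hypothetical sketch of a custom dataset for the point cloud 2 publisher.
import numpy as np
from opendr.engine.data import PointCloud
from opendr.engine.datasets import DatasetIterator


class RandomCloudDataset(DatasetIterator):
    """Yields (PointCloud, None) pairs of random points, standing in for a real dataset."""

    def __init__(self, size=100):
        super().__init__()
        self.size = size

    def __len__(self):
        return self.size

    def __getitem__(self, idx):
        points = np.random.rand(2048, 4).astype(np.float32)  # x, y, z, intensity
        return PointCloud(points), None  # no ground-truth target in this sketch


# Assumed usage inside a modified point_cloud_2_publisher_node.py
# (class and argument names are hypothetical):
# node = PointCloud2DatasetNode(dataset=RandomCloudDataset(),
#                               output_point_cloud_2_topic='/opendr/dataset_point_cloud2')
```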