## Table of contents
- * [ROS1 and ROS2 legacy](#legacy)
+ * [ROS1 and ROS2 legacy](#ros1-and-ros2-legacy)
* [Installation](#installation)
* [Usage](#usage)
- * [Starting the camera node](#start-camera-node)
+ * [Starting the camera node](#start-the-camera-node)
+ * [Camera name and namespace](#camera-name-and-camera-namespace)
* [Parameters](#parameters)
- * [ROS2-vs-Optical Coordination Systems](#coordination)
- * [TF from coordinate A to coordinate B](#tfs)
- * [Extrinsics from sensor A to sensor B](#extrinsics)
- * [Topics](#topics)
- * [RGBD Topic](#rgbd)
- * [Metadata Topic](#metadata)
- * [Post-Processing Filters](#filters)
- * [Available Services](#services)
- * [Efficient intra-process communication](#intra-process)
+ * [ROS2-vs-Optical Coordination Systems](#ros2robot-vs-opticalcamera-coordination-systems)
+ * [TF from coordinate A to coordinate B](#tf-from-coordinate-a-to-coordinate-b)
+ * [Extrinsics from sensor A to sensor B](#extrinsics-from-sensor-a-to-sensor-b)
+ * [Topics](#published-topics)
+ * [RGBD Topic](#rgbd-topic)
+ * [Metadata Topic](#metadata-topic)
+ * [Post-Processing Filters](#post-processing-filters)
+ * [Available Services](#available-services)
+ * [Efficient intra-process communication](#efficient-intra-process-communication)
* [Contributing](CONTRIBUTING.md)
* [License](LICENSE)
-
- Legacy
-
+# ROS1 and ROS2 Legacy
@@ -79,9 +78,7 @@
-
- Installation
-
+# Installation
@@ -142,8 +139,8 @@
- Install dependencies
```bash
sudo apt-get install python3-rosdep -y
- sudo rosdep init # "sudo rosdep init --include-eol-distros" for Eloquent and earlier
- rosdep update # "sudo rosdep update --include-eol-distros" for Eloquent and earlier
+ sudo rosdep init # "sudo rosdep init --include-eol-distros" for Foxy and earlier
+ rosdep update # "sudo rosdep update --include-eol-distros" for Foxy and earlier
rosdep install -i --from-path src --rosdistro $ROS_DISTRO --skip-keys=librealsense2 -y
```
@@ -164,13 +161,9 @@
-
- Usage
-
+# Usage
-
- Start the camera node
-
+## Start the camera node
#### with ros2 run:
ros2 run realsense2_camera realsense2_camera_node
@@ -183,16 +176,77 @@
-
- Parameters
-
+## Camera Name And Camera Namespace
+
+Users can set the camera name and camera namespace to distinguish between cameras and platforms, which helps identify the right nodes and topics to work with.
+
+### Example
+- If a user has multiple cameras (possibly of the same model) and multiple robots, the nodes can be launched/run in the following way.
+- For the first robot and first camera, run/launch with these parameters:
+ - camera_namespace:
+ - robot1
+ - camera_name
+ - D455_1
+
+ - With ros2 launch (via command line or by editing these two parameters in the launch file):
+
+ ```ros2 launch realsense2_camera rs_launch.py camera_namespace:=robot1 camera_name:=D455_1```
+
+ - With ros2 run (using the remapping mechanism [Reference](https://docs.ros.org/en/humble/How-To-Guides/Node-arguments.html)):
+
+ ```ros2 run realsense2_camera realsense2_camera_node --ros-args -r __node:=D455_1 -r __ns:=robot1```
+
+ - Result
+ ```
+ > ros2 node list
+ /robot1/D455_1
+
+ > ros2 topic list
+ /robot1/D455_1/color/camera_info
+ /robot1/D455_1/color/image_raw
+ /robot1/D455_1/color/metadata
+ /robot1/D455_1/depth/camera_info
+ /robot1/D455_1/depth/image_rect_raw
+ /robot1/D455_1/depth/metadata
+ /robot1/D455_1/extrinsics/depth_to_color
+ /robot1/D455_1/imu
+
+ > ros2 service list
+ /robot1/D455_1/device_info
+ ```
+
+### Default behavior if none of these parameters is given:
+ - camera_namespace:=camera
+ - camera_name:=camera
+
+```
+> ros2 node list
+/camera/camera
+
+> ros2 topic list
+/camera/camera/color/camera_info
+/camera/camera/color/image_raw
+/camera/camera/color/metadata
+/camera/camera/depth/camera_info
+/camera/camera/depth/image_rect_raw
+/camera/camera/depth/metadata
+/camera/camera/extrinsics/depth_to_color
+/camera/camera/imu
+
+> ros2 service list
+/camera/camera/device_info
+```
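The naming pattern above is simple to reason about: every topic and service is prefixed by `/<camera_namespace>/<camera_name>`. A minimal sketch (illustrative only, not wrapper code):

```python
# Illustrative sketch only: shows how the wrapper's fully-qualified names
# are composed from the camera_namespace and camera_name parameters.
def topic_name(namespace: str, name: str, suffix: str) -> str:
    """Build a name such as /robot1/D455_1/color/image_raw."""
    return "/" + "/".join([namespace, name, suffix])

# Defaults (camera_namespace:=camera, camera_name:=camera):
print(topic_name("camera", "camera", "color/image_raw"))  # /camera/camera/color/image_raw
# The robot1/D455_1 example above:
print(topic_name("robot1", "D455_1", "imu"))              # /robot1/D455_1/imu
```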
+
+
+
+## Parameters
### Available Parameters:
- For the entire list of parameters type `ros2 param list`.
- For reading a parameter value use `ros2 param get <node_name> <param_name>`
- - For example: `ros2 param get /camera/camera depth_module.emitter_on_off`
+ - For example: `ros2 param get /camera/camera depth_module.emitter_enabled`
- For setting a new value for a parameter use `ros2 param set <node_name> <param_name> <value>`
- - For example: `ros2 param set /camera/camera depth_module.emitter_on_off true`
+ - For example: `ros2 param set /camera/camera depth_module.emitter_enabled 1`
#### Parameters that can be modified during runtime:
- All of the filters and sensors inner parameters.
@@ -200,13 +254,13 @@
- They have, at least, the **profile** parameter.
- The profile parameter is a string of the following format: `<width>X<height>X<fps>` (The dividing character can be X, x or ",". Spaces are ignored.)
- For example: ```depth_module.profile:=640x480x30 rgb_camera.profile:=1280x720x30```
- - Since infra, infra1, infra2, fisheye, fisheye1, fisheye2 and depth are all streams of the depth_module, their width, height and fps are defined by the same param **depth_module.profile**
+ - Since infra, infra1, infra2 and depth are all streams of the depth_module, their width, height and fps are defined by the same param **depth_module.profile**
- If the specified combination of parameters is not available by the device, the default or previously set configuration will be used.
- Run ```ros2 param describe <node_name> <param_name>``` to get the list of supported profiles.
- Note: the stream must be re-enabled for the change to take effect.
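The profile format described above can be validated with a small sketch; this is an illustrative parser, not the wrapper's actual implementation:

```python
import re

def parse_profile(profile: str) -> tuple:
    """Parse a profile string such as '640x480x30' into (width, height, fps).
    The divider may be 'x', 'X' or ','; spaces are ignored, mirroring the
    documented format. Illustrative only, not wrapper code."""
    parts = re.split(r"[xX,]", profile.replace(" ", ""))
    if len(parts) != 3:
        raise ValueError(f"bad profile: {profile!r}")
    width, height, fps = (int(p) for p in parts)
    return (width, height, fps)

print(parse_profile("640 x 480 x 30"))  # (640, 480, 30)
print(parse_profile("1280,720,30"))     # (1280, 720, 30)
```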
- **<stream_name>_format**
- This parameter is a string used to select the stream format.
- - can be any of *infra, infra1, infra2, color, depth, fisheye, fisheye1, fisheye2*.
+ - can be any of *infra, infra1, infra2, color, depth*.
- For example: ```depth_module.depth_format:=Z16 depth_module.infra1_format:=y8 rgb_camera.color_format:=RGB8```
- This parameter supports both lower case and upper case letters.
- If the specified parameter is not available by the stream, the default or previously set configuration will be used.
@@ -217,14 +271,14 @@
- Run ```rs-enumerate-devices``` command to know the list of profiles supported by the connected sensors.
- **enable_<stream_name>**:
- Choose whether to enable a specified stream or not. Default is true for images and false for orientation streams.
- - can be any of *infra, infra1, infra2, color, depth, fisheye, fisheye1, fisheye2, gyro, accel, pose*.
+ - can be any of *infra, infra1, infra2, color, depth, gyro, accel*.
- For example: ```enable_infra1:=true enable_color:=false```
- **enable_sync**:
- gathers the closest frames from the different sensors (infrared, color and depth) so they are sent with the same time tag.
- This happens automatically when such filters as pointcloud are enabled.
- **<stream_name>_qos**:
- Sets the QoS by which the topic is published.
- - can be any of *infra, infra1, infra2, color, depth, fisheye, fisheye1, fisheye2, gyro, accel, pose*.
+ - can be any of *infra, infra1, infra2, color, depth, gyro, accel*.
- Available values are the following strings: `SYSTEM_DEFAULT`, `DEFAULT`, `PARAMETER_EVENTS`, `SERVICES_DEFAULT`, `PARAMETERS`, `SENSOR_DATA`.
- For example: ```depth_qos:=SENSOR_DATA```
- Reference: [ROS2 QoS profiles formal documentation](https://docs.ros.org/en/rolling/Concepts/About-Quality-of-Service-Settings.html#qos-profiles)
@@ -235,6 +289,22 @@
- This param also depends on **publish_tf** param
- If **publish_tf:=false**, then no TFs will be published, even if **tf_publish_rate** is >0.0 Hz
- If **publish_tf:=true** and **tf_publish_rate** set to >0.0 Hz, then dynamic TFs will be published at the specified rate
+- **unite_imu_method**:
+  - For D400 cameras with built-in IMU components, the following two independent streams (each with its own frequency) are created:
+    - *gyro* - which shows angular velocity
+    - *accel* - which shows linear acceleration.
+  - Both streams publish data to their corresponding topics:
+    - '/camera/camera/gyro/sample' & '/camera/camera/accel/sample'
+    - Though both topics use the same message type 'sensor_msgs::Imu', only their relevant fields are filled out.
+  - A new topic called **imu** is created when both the *accel* and *gyro* streams are enabled and the param *unite_imu_method* is set to > 0.
+    - Data from both accel and gyro is combined and published to this topic.
+    - All the fields of the Imu message are filled out.
+    - It is published at the rate of the gyro.
+  - The `unite_imu_method` param supports the following values:
+    - 0 -> **none**: no imu topic
+    - 1 -> **copy**: every gyro message is attached to the most recent accel message.
+    - 2 -> **linear_interpolation**: every gyro message is attached to an accel message interpolated to the gyro's timestamp.
+  - Note: when the param *unite_imu_method* is dynamically updated, re-enable either the gyro or accel stream for the change to take effect.
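The two non-trivial `unite_imu_method` policies can be sketched in a few lines; the functions below are an illustrative model of the behavior, not the driver's code:

```python
def interpolate_accel(accel_prev, accel_next, t_gyro):
    """Model of the 'linear_interpolation' policy: estimate an accel value at
    the gyro timestamp from the two surrounding accel samples.
    Samples are (timestamp, value) pairs; a real driver interpolates per axis."""
    (t0, a0), (t1, a1) = accel_prev, accel_next
    alpha = (t_gyro - t0) / (t1 - t0)
    return a0 + alpha * (a1 - a0)

def copy_accel(last_accel, t_gyro):
    """Model of the 'copy' policy: simply reuse the most recent accel value."""
    _, a = last_accel
    return a

# A gyro sample at t=15 falls between accel samples at t=10 and t=20:
print(interpolate_accel((10.0, 0.0), (20.0, 1.0), 15.0))  # 0.5
print(copy_accel((10.0, 0.0), 15.0))                      # 0.0
```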
#### Parameters that cannot be changed in runtime:
- **serial_no**:
@@ -268,21 +338,7 @@
- On occasions the device was not closed properly and due to firmware issues needs to reset.
- If set to true, the device will reset prior to usage.
- For example: `initial_reset:=true`
-- ****_frame_id**, ****_optical_frame_id**, **aligned_depth_to_**_frame_id**: Specify the different frame_id for the different frames. Especially important when using multiple cameras.
- **base_frame_id**: defines the frame_id all static transformations refers to.
-- **odom_frame_id**: defines the origin coordinate system in ROS convention (X-Forward, Y-Left, Z-Up). pose topic defines the pose relative to that system.
-
-- **unite_imu_method**:
- - D400 cameras have built in IMU components which produce 2 unrelated streams, each with it's own frequency:
- - *gyro* - which shows angular velocity
- - *accel* which shows linear acceleration.
- - By default, 2 corresponding topics are available, each with only the relevant fields of the message sensor_msgs::Imu are filled out.
- - Setting *unite_imu_method* creates a new topic, *imu*, that replaces the default *gyro* and *accel* topics.
- - The *imu* topic is published at the rate of the gyro.
- - All the fields of the Imu message under the *imu* topic are filled out.
- - `unite_imu_method` parameter supported values are [0-2] meaning: [0 -> None, 1 -> Copy, 2 -> Linear_ interpolation] when:
- - **linear_interpolation**: Every gyro message is attached by the an accel message interpolated to the gyro's timestamp.
- - **copy**: Every gyro message is attached by the last accel message.
- **clip_distance**:
- Remove from the depth image all values above a given value (meters). Disable by giving negative value (default)
- For example: `clip_distance:=1.5`
@@ -299,13 +355,9 @@
- 0 or negative values mean no diagnostics topic is published. Defaults to 0.
The `/diagnostics` topic includes information regarding the device temperatures and actual frequency of the enabled streams.
-- **publish_odom_tf**: If True (default) publish TF from odom_frame to pose_frame.
-
-
- ROS2(Robot) vs Optical(Camera) Coordination Systems:
-
+## ROS2(Robot) vs Optical(Camera) Coordination Systems:
- Point Of View:
- Imagine we are standing behind the camera, looking forward.
@@ -321,9 +373,7 @@ The `/diagnostics` topic includes information regarding the device temperatures
-
- TF from coordinate A to coordinate B:
-
+## TF from coordinate A to coordinate B:
- TF msg expresses a transform from coordinate frame "header.frame_id" (source) to the coordinate frame child_frame_id (destination) [Reference](http://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/Transform.html)
- In RealSense cameras, the origin point (0,0,0) is taken from the left IR (infra1) position and named as "camera_link" frame
@@ -335,10 +385,7 @@ The `/diagnostics` topic includes information regarding the device temperatures
-
- Extrinsics from sensor A to sensor B:
-
-
+## Extrinsics from sensor A to sensor B:
- Extrinsic from sensor A to sensor B means the position and orientation of sensor A relative to sensor B.
- Imagine that B is the origin (0,0,0); then Extrinsics(A->B) describes where sensor A is relative to sensor B.
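A minimal sketch of applying Extrinsics(A->B) to a 3D point. The row-major layout of the 9 rotation values is an assumption here; check the Extrinsics message definition for the actual ordering:

```python
def apply_extrinsics(rotation, translation, point):
    """Transform a 3D point from sensor A's frame to sensor B's frame using
    Extrinsics(A->B). 'rotation' is assumed to be a row-major 3x3 matrix given
    as 9 floats; verify the ordering against the message definition."""
    x, y, z = point
    return tuple(
        rotation[3 * i] * x + rotation[3 * i + 1] * y + rotation[3 * i + 2] * z
        + translation[i]
        for i in range(3)
    )

# Identity rotation plus a small X offset (similar in spirit to depth_to_color):
print(apply_extrinsics([1, 0, 0, 0, 1, 0, 0, 0, 1],
                       [0.015, 0.0, 0.0],
                       (0.0, 0.0, 1.0)))  # (0.015, 0.0, 1.0)
```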
@@ -349,7 +396,7 @@ The `/diagnostics` topic includes information regarding the device temperatures
![d435i](https://user-images.githubusercontent.com/99127997/230220297-e392f0fc-63bf-4bab-8001-af1ddf0ed00e.png)
```
-administrator@perclnx466 ~/ros2_humble $ ros2 topic echo /camera/extrinsics/depth_to_color
+administrator@perclnx466 ~/ros2_humble $ ros2 topic echo /camera/camera/extrinsics/depth_to_color
rotation:
- 0.9999583959579468
- 0.008895332925021648
@@ -373,23 +420,21 @@ translation:
-
- Published Topics
-
+## Published Topics
The published topics differ according to the device and parameters.
After running the above command with a D435i attached, the following topics will be available (partial list; for the full one type `ros2 topic list`):
-- /camera/aligned_depth_to_color/camera_info
-- /camera/aligned_depth_to_color/image_raw
-- /camera/color/camera_info
-- /camera/color/image_raw
-- /camera/color/metadata
-- /camera/depth/camera_info
-- /camera/depth/color/points
-- /camera/depth/image_rect_raw
-- /camera/depth/metadata
-- /camera/extrinsics/depth_to_color
-- /camera/imu
+- /camera/camera/aligned_depth_to_color/camera_info
+- /camera/camera/aligned_depth_to_color/image_raw
+- /camera/camera/color/camera_info
+- /camera/camera/color/image_raw
+- /camera/camera/color/metadata
+- /camera/camera/depth/camera_info
+- /camera/camera/depth/color/points
+- /camera/camera/depth/image_rect_raw
+- /camera/camera/depth/metadata
+- /camera/camera/extrinsics/depth_to_color
+- /camera/camera/imu
- /diagnostics
- /parameter_events
- /rosout
@@ -406,20 +451,18 @@ ros2 param set /camera/camera enable_gyro true
```
Enabling a stream adds its matching topics. For instance, enabling the gyro and accel streams adds the following topics:
-- /camera/accel/imu_info
-- /camera/accel/metadata
-- /camera/accel/sample
-- /camera/extrinsics/depth_to_accel
-- /camera/extrinsics/depth_to_gyro
-- /camera/gyro/imu_info
-- /camera/gyro/metadata
-- /camera/gyro/sample
+- /camera/camera/accel/imu_info
+- /camera/camera/accel/metadata
+- /camera/camera/accel/sample
+- /camera/camera/extrinsics/depth_to_accel
+- /camera/camera/extrinsics/depth_to_gyro
+- /camera/camera/gyro/imu_info
+- /camera/camera/gyro/metadata
+- /camera/camera/gyro/sample
-
- RGBD Topic
-
+## RGBD Topic
RGBD is a new topic that publishes [RGB + Depth] in the same message (see RGBD.msg for reference). For now, it works only with depth aligned to color images, as color and depth images are synchronized by frame time tag.
@@ -439,32 +482,31 @@ ros2 launch realsense2_camera rs_launch.py enable_rgbd:=true enable_sync:=true a
-
- Metadata topic
-
+## Metadata topic
The metadata messages store the camera's available metadata in a *json* format. A dedicated script for echoing a metadata topic at runtime is provided. For instance, use the following command to echo the camera/depth/metadata topic:
```
-python3 src/realsense-ros/realsense2_camera/scripts/echo_metadada.py /camera/depth/metadata
+python3 src/realsense-ros/realsense2_camera/scripts/echo_metadada.py /camera/camera/depth/metadata
```
-
- Post-Processing Filters
-
-
+## Post-Processing Filters
+
The following post processing filters are available:
- - ```align_depth```: If enabled, will publish the depth image aligned to the color image on the topic `/camera/aligned_depth_to_color/image_raw`.
+ - ```align_depth```: If enabled, will publish the depth image aligned to the color image on the topic `/camera/camera/aligned_depth_to_color/image_raw`.
- The pointcloud, if created, will be based on the aligned depth image.
 - ```colorizer```: will color the depth image. On the depth topic an RGB image will be published instead of the 16-bit depth values.
- - ```pointcloud```: will add a pointcloud topic `/camera/depth/color/points`.
+ - ```pointcloud```: will add a pointcloud topic `/camera/camera/depth/color/points`.
* The texture of the pointcloud can be modified using the `pointcloud.stream_filter` parameter.
   * The depth FOV and the texture FOV are not identical. By default, the pointcloud is limited to the section of depth containing the texture. To get the full depth to pointcloud, coloring the regions beyond the texture with zeros, set `pointcloud.allow_no_texture_points` to true.
* pointcloud is of an unordered format by default. This can be changed by setting `pointcloud.ordered_pc` to true.
- ```hdr_merge```: Allows depth image to be created by merging the information from 2 consecutive frames, taken with different exposure and gain values.
+ - `depth_module.hdr_enabled`: to enable/disable HDR
- The way to set exposure and gain values for each sequence in runtime is by first selecting the sequence id, using the `depth_module.sequence_id` parameter and then modifying the `depth_module.gain`, and `depth_module.exposure`.
- - To view the effect on the infrared image for each sequence id use the `sequence_id_filter.sequence_id` parameter.
+ - From FW versions 5.14.x.x and above, if HDR is enabled, the preset configs (like exposure, gain, etc.) cannot be updated.
+ - Disable the HDR first using `depth_module.hdr_enabled` parameter and then, update the required presets.
+ - To view the effect on the infrared image for each sequence id use the `filter_by_sequence_id.sequence_id` parameter.
- To initialize these parameters in start time use the following parameters:
- `depth_module.exposure.1`
- `depth_module.gain.1`
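As a side note on the `pointcloud.ordered_pc` option above: in an ordered cloud the points keep the image layout, so a pixel maps directly to a flat point index. A sketch (row-major layout is an assumption here; verify against the PointCloud2 width/height fields):

```python
def pixel_to_index(u: int, v: int, width: int) -> int:
    """In an ordered pointcloud (pointcloud.ordered_pc:=true) points keep the
    image layout, so pixel (u, v) maps to a flat index. Row-major layout is
    assumed; check the PointCloud2 width/height fields."""
    return v * width + u

# In a 640x480 ordered cloud, pixel (u=10, v=2) is:
print(pixel_to_index(10, 2, 640))  # 1290
```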
@@ -483,17 +525,13 @@ Each of the above filters have it's own parameters, following the naming convent
-
- Available services
-
+## Available services
-- device_info : retrieve information about the device - serial_number, firmware_version etc. Type `ros2 interface show realsense2_camera_msgs/srv/DeviceInfo` for the full list. Call example: `ros2 service call /camera/device_info realsense2_camera_msgs/srv/DeviceInfo`
+- device_info : retrieve information about the device - serial_number, firmware_version etc. Type `ros2 interface show realsense2_camera_msgs/srv/DeviceInfo` for the full list. Call example: `ros2 service call /camera/camera/device_info realsense2_camera_msgs/srv/DeviceInfo`
-
- Efficient intra-process communication:
-
+## Efficient intra-process communication:
Our ROS2 Wrapper node supports zero-copy communications if loaded in the same process as a subscriber node. This can reduce copy times on image/pointcloud topics, especially with big frame resolutions and high FPS.
diff --git a/realsense2_camera/CMakeLists.txt b/realsense2_camera/CMakeLists.txt
index bcaaa74e72..fb267f590d 100644
--- a/realsense2_camera/CMakeLists.txt
+++ b/realsense2_camera/CMakeLists.txt
@@ -143,34 +143,18 @@ set(SOURCES
if(NOT DEFINED ENV{ROS_DISTRO})
message(FATAL_ERROR "ROS_DISTRO is not defined." )
endif()
-if("$ENV{ROS_DISTRO}" STREQUAL "dashing")
- message(STATUS "Build for ROS2 Dashing")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DDASHING")
- set(SOURCES "${SOURCES}" src/ros_param_backend_dashing.cpp)
-elseif("$ENV{ROS_DISTRO}" STREQUAL "eloquent")
- message(STATUS "Build for ROS2 eloquent")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DELOQUENT")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
-elseif("$ENV{ROS_DISTRO}" STREQUAL "foxy")
- message(STATUS "Build for ROS2 Foxy")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DFOXY")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
-elseif("$ENV{ROS_DISTRO}" STREQUAL "galactic")
- message(STATUS "Build for ROS2 Galactic")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DGALACTIC")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
-elseif("$ENV{ROS_DISTRO}" STREQUAL "humble")
+if("$ENV{ROS_DISTRO}" STREQUAL "humble")
message(STATUS "Build for ROS2 Humble")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DHUMBLE")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
+ set(SOURCES "${SOURCES}" src/ros_param_backend.cpp)
elseif("$ENV{ROS_DISTRO}" STREQUAL "iron")
message(STATUS "Build for ROS2 Iron")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DIRON")
- set(SOURCES "${SOURCES}" src/ros_param_backend_foxy.cpp)
+ set(SOURCES "${SOURCES}" src/ros_param_backend.cpp)
elseif("$ENV{ROS_DISTRO}" STREQUAL "rolling")
message(STATUS "Build for ROS2 Rolling")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DROLLING")
- set(SOURCES "${SOURCES}" src/ros_param_backend_rolling.cpp)
+ set(SOURCES "${SOURCES}" src/ros_param_backend.cpp)
else()
message(FATAL_ERROR "Unsupported ROS Distribution: " "$ENV{ROS_DISTRO}")
endif()
@@ -271,6 +255,12 @@ install(DIRECTORY
DESTINATION share/${PROJECT_NAME}
)
+# Install example files
+install(DIRECTORY
+ examples
+ DESTINATION share/${PROJECT_NAME}
+)
+
# Test
if(BUILD_TESTING)
find_package(ament_cmake_gtest REQUIRED)
@@ -313,6 +303,36 @@ if(BUILD_TESTING)
)
endforeach()
endforeach()
+
+ unset(_pytest_folders)
+
+ set(rs_query_cmd "rs-enumerate-devices -s")
+ execute_process(COMMAND bash -c ${rs_query_cmd}
+ WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
+ RESULT_VARIABLE rs_result
+ OUTPUT_VARIABLE RS_DEVICE_INFO)
+ message(STATUS "rs_device_info:")
+ message(STATUS "${RS_DEVICE_INFO}")
+ if((RS_DEVICE_INFO MATCHES "D455") OR (RS_DEVICE_INFO MATCHES "D415") OR (RS_DEVICE_INFO MATCHES "D435"))
+    message(STATUS "Supported device found, enabling live camera tests")
+ set(_pytest_live_folders
+ test/live_camera
+ )
+ endif()
+
+ foreach(test_folder ${_pytest_live_folders})
+ file(GLOB files "${test_folder}/test_*.py")
+ foreach(file ${files})
+
+ get_filename_component(_test_name ${file} NAME_WE)
+ ament_add_pytest_test(${_test_name} ${file}
+ APPEND_ENV PYTHONPATH=${CMAKE_CURRENT_BINARY_DIR}:${CMAKE_SOURCE_DIR}/test/utils:${CMAKE_SOURCE_DIR}/launch:${CMAKE_SOURCE_DIR}/scripts
+ TIMEOUT 500
+ WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+ )
+ endforeach()
+ endforeach()
+
endif()
# Ament exports
diff --git a/realsense2_camera/examples/align_depth/README.md b/realsense2_camera/examples/align_depth/README.md
new file mode 100644
index 0000000000..3b8f13b826
--- /dev/null
+++ b/realsense2_camera/examples/align_depth/README.md
@@ -0,0 +1,12 @@
+# Align Depth to Color
+This example shows how to start the camera node and align depth stream to color stream.
+```
+ros2 launch realsense2_camera rs_align_depth_launch.py
+```
+
+The aligned image will be published to the topic "/aligned_depth_to_color/image_raw"
+
+Alternatively, align depth to color can be enabled with the following command:
+```
+ros2 launch realsense2_camera rs_launch.py align_depth.enable:=true
+```
diff --git a/realsense2_camera/examples/align_depth/rs_align_depth_launch.py b/realsense2_camera/examples/align_depth/rs_align_depth_launch.py
new file mode 100644
index 0000000000..cc579eada8
--- /dev/null
+++ b/realsense2_camera/examples/align_depth/rs_align_depth_launch.py
@@ -0,0 +1,56 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# DESCRIPTION #
+# ----------- #
+# Use this launch file to launch a device and align depth to color.
+# The Parameters available for definition in the command line for the camera are described in rs_launch.configurable_parameters
+# command line example:
+# ros2 launch realsense2_camera rs_align_depth_launch.py
+
+"""Launch realsense2_camera node."""
+from launch import LaunchDescription
+import launch_ros.actions
+from launch.actions import OpaqueFunction
+from launch.substitutions import LaunchConfiguration, ThisLaunchFileDir
+import sys
+import pathlib
+sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
+import os
+from ament_index_python.packages import get_package_share_directory
+sys.path.append(os.path.join(get_package_share_directory('realsense2_camera'), 'launch'))
+import rs_launch
+
+local_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
+ {'name': 'camera_namespace', 'default': 'camera', 'description': 'camera namespace'},
+ {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'align_depth.enable', 'default': 'true', 'description': 'enable align depth filter'},
+ {'name': 'enable_sync', 'default': 'true', 'description': 'enable sync mode'},
+ ]
+
+def set_configurable_parameters(local_params):
+ return dict([(param['name'], LaunchConfiguration(param['name'])) for param in local_params])
+
+
+def generate_launch_description():
+ params = rs_launch.configurable_parameters
+ return LaunchDescription(
+ rs_launch.declare_configurable_parameters(local_parameters) +
+ rs_launch.declare_configurable_parameters(params) +
+ [
+ OpaqueFunction(function=rs_launch.launch_setup,
+ kwargs = {'params' : set_configurable_parameters(params)}
+ )
+ ])
diff --git a/realsense2_camera/examples/dual_camera/README.md b/realsense2_camera/examples/dual_camera/README.md
new file mode 100644
index 0000000000..03a7cfa08f
--- /dev/null
+++ b/realsense2_camera/examples/dual_camera/README.md
@@ -0,0 +1,123 @@
+# Launching Dual RS ROS2 nodes
+The following example launches two RS ROS2 nodes.
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=<serial number of camera 1> serial_no2:=<serial number of camera 2>
+```
+
+## Example:
+Let's say the serial numbers of two RS cameras are 207322251310 and 234422060144.
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:="'207322251310'" serial_no2:="'234422060144'"
+```
+or
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=_207322251310 serial_no2:=_234422060144
+```
+
+## How to know the serial number?
+Method 1: Using the rs-enumerate-devices tool
+```
+rs-enumerate-devices | grep "Serial Number"
+```
+
+Method 2: Connect single camera and run
+```
+ros2 launch realsense2_camera rs_launch.py
+```
+and look for the serial number in the log printed to screen under "[INFO][...] Device Serial No:".
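If many devices are connected, the output of `rs-enumerate-devices` can also be filtered programmatically; a hedged sketch (the exact output format may vary between librealsense versions):

```python
import re

def parse_serials(rs_enumerate_output: str) -> list:
    """Pull serial numbers out of `rs-enumerate-devices` text output.
    Note this loose pattern would also match 'Asic Serial Number' lines;
    the exact output format may vary between librealsense versions."""
    return re.findall(r"Serial Number\s*:\s*(\S+)", rs_enumerate_output)

sample = """Device info:
    Name          : Intel RealSense D455
    Serial Number : 207322251310
"""
print(parse_serials(sample))  # ['207322251310']
```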
+
+# Using multiple RS cameras by launching each in a different terminal
+Make sure you set a different name and namespace for each camera.
+
+Terminal 1:
+```
+ros2 launch realsense2_camera rs_launch.py serial_no:="'207322251310'" camera_name:='camera1' camera_namespace:='camera1'
+```
+Terminal 2:
+```
+ros2 launch realsense2_camera rs_launch.py serial_no:="'234422060144'" camera_name:='camera2' camera_namespace:='camera2'
+```
+
+# Multiple cameras showing a semi-unified pointcloud
+The D430 series of RealSense cameras uses a stereo-based algorithm to calculate depth, so multiple cameras can operate on the same scene. For this demonstration, let's say two cameras are coupled to look at the same scene from two different points of view. See image:
+
+![multi_cameras](https://user-images.githubusercontent.com/127019120/268692789-1b3d5d8b-a41f-4a97-995d-81d44b4bcacb.jpg)
+
+The schematic settings could be described as:
+X--------------------------------->cam_2
+| (70 cm)
+|
+|
+| (60 cm)
+|
+|
+/
+cam_1
+
+The cameras have no data regarding their relative position; that's up to a third-party program to set. To simplify things, the coordinate system of cam_1 can be considered the reference coordinate system for the whole scene.
+
+The estimated translation of cam_2 from cam_1 is 70 cm on the X-axis and 60 cm on the Y-axis, and the estimated yaw angle of cam_2 relative to cam_1 is 90 degrees clockwise. These initial values set the transformation between the 2 cameras as follows:
+
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=_207322251310 serial_no2:=_234422060144 tf.translation.x:=0.7 tf.translation.y:=0.6 tf.translation.z:=0.0 tf.rotation.yaw:=-90.0 tf.rotation.pitch:=0.0 tf.rotation.roll:=0.0
+```
+
+If the unified pointcloud result is not good, follow the steps below to fine-tune the calibration.
+
+## Visualizing the pointclouds and fine-tuning the camera calibration
+Launch 2 cameras in separate terminals:
+
+**Terminal 1:**
+```
+ros2 launch realsense2_camera rs_launch.py serial_no:="'207322251310'" camera_name:='camera1' camera_namespace:='camera1'
+```
+**Terminal 2:**
+```
+ros2 launch realsense2_camera rs_launch.py serial_no:="'234422060144'" camera_name:='camera2' camera_namespace:='camera2'
+```
+**Terminal 3:**
+```
+rviz2
+```
+Open rviz and set 'Fixed Frame' to camera1_link
+Add PointCloud2 -> By topic -> /camera1/camera1/depth/color/points
+Add PointCloud2 -> By topic -> /camera2/camera2/depth/color/points
+
+**Terminal 4:**
+Run the 'set_cams_transforms.py' tool. It can be used to fine-tune the calibration.
+```
+python src/realsense-ros/realsense2_camera/scripts/set_cams_transforms.py camera1_link camera2_link 0.7 0.6 0 -90 0 0
+```
+
+**Instructions printed by the tool:**
+```
+Using default file /home/user_name/ros2_ws/src/realsense-ros/realsense2_camera/scripts/_set_cams_info_file.txt
+
+Use given initial values.
+
+Press the following keys to change mode: x, y, z, (a)zimuth, (p)itch, (r)oll
+
+For each mode, press 6 to increase by step and 4 to decrease
+
+Press + to multiply step by 2 or - to divide
+
+Press Q to quit
+```
+
+Note that the tool prints the path of the current configuration file. It continuously saves its last configuration so it can be reused on the next run.
+
+After a lot of fiddling around, the unified pointcloud looked better with the following calibration:
+```
+x = 0.75
+y = 0.575
+z = 0
+azimuth = -91.25
+pitch = 0.75
+roll = 0
+```
+
+Now, use the above results in the launch file:
+```
+ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=_207322251310 serial_no2:=_234422060144 tf.translation.x:=0.75 tf.translation.y:=0.575 tf.translation.z:=0.0 tf.rotation.yaw:=-91.25 tf.rotation.pitch:=0.75 tf.rotation.roll:=0.0
+```
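Assuming `tf.rotation.yaw` is a rotation about the vertical axis given in degrees (as the example values above suggest), the calibrated transform can be sanity-checked with a short sketch:

```python
import math

def cam2_to_cam1(x, y, z, yaw_deg):
    """Build a mapping from cam_2's frame into cam_1's frame from the
    fine-tuned values: translation in meters plus a yaw in degrees that is
    assumed to be a rotation about the vertical axis (an assumption, not
    taken from the wrapper's source)."""
    yaw = math.radians(yaw_deg)
    c, s = math.cos(yaw), math.sin(yaw)
    def transform(px, py, pz):
        return (c * px - s * py + x, s * px + c * py + y, pz + z)
    return transform

to_cam1 = cam2_to_cam1(0.75, 0.575, 0.0, -91.25)
# cam_2's own origin lands at its translation offset in cam_1's frame:
print(to_cam1(0.0, 0.0, 0.0))  # (0.75, 0.575, 0.0)
```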
+
diff --git a/realsense2_camera/examples/dual_camera/rs_dual_camera_launch.py b/realsense2_camera/examples/dual_camera/rs_dual_camera_launch.py
new file mode 100644
index 0000000000..81643d2c7e
--- /dev/null
+++ b/realsense2_camera/examples/dual_camera/rs_dual_camera_launch.py
@@ -0,0 +1,109 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# DESCRIPTION #
+# ----------- #
+# Use this launch file to launch 2 devices.
+# The Parameters available for definition in the command line for each camera are described in rs_launch.configurable_parameters
+# For each device, the parameter name was changed to include an index.
+# For example: to set camera_name for device1 set parameter camera_name1.
+# command line example:
+# ros2 launch realsense2_camera rs_dual_camera_launch.py serial_no1:=<serial number of camera 1> serial_no2:=<serial number of camera 2>
+
+"""Launch realsense2_camera node."""
+import copy
+from launch import LaunchDescription, LaunchContext
+import launch_ros.actions
+from launch.actions import OpaqueFunction
+from launch.substitutions import LaunchConfiguration, ThisLaunchFileDir
+import sys
+import pathlib
+sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
+import os
+from ament_index_python.packages import get_package_share_directory
+sys.path.append(os.path.join(get_package_share_directory('realsense2_camera'), 'launch'))
+import rs_launch
+
+local_parameters = [{'name': 'camera_name1', 'default': 'camera1', 'description': 'camera unique name'},
+ {'name': 'camera_name2', 'default': 'camera2', 'description': 'camera unique name'},
+ {'name': 'camera_namespace1', 'default': 'camera1', 'description': 'camera1 namespace'},
+ {'name': 'camera_namespace2', 'default': 'camera2', 'description': 'camera2 namespace'},
+ {'name': 'enable_color1', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_color2', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_depth1', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'enable_depth2', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'pointcloud.enable1', 'default': 'true', 'description': 'enable pointcloud'},
+ {'name': 'pointcloud.enable2', 'default': 'true', 'description': 'enable pointcloud'},
+ {'name': 'spatial_filter.enable1', 'default': 'true', 'description': 'enable_spatial_filter'},
+ {'name': 'spatial_filter.enable2', 'default': 'true', 'description': 'enable_spatial_filter'},
+ {'name': 'temporal_filter.enable1', 'default': 'true', 'description': 'enable_temporal_filter'},
+ {'name': 'temporal_filter.enable2', 'default': 'true', 'description': 'enable_temporal_filter'},
+ {'name': 'tf.translation.x', 'default': '0.0', 'description': 'x'},
+ {'name': 'tf.translation.y', 'default': '0.0', 'description': 'y'},
+ {'name': 'tf.translation.z', 'default': '0.0', 'description': 'z'},
+ {'name': 'tf.rotation.yaw', 'default': '0.0', 'description': 'yaw'},
+ {'name': 'tf.rotation.pitch', 'default': '0.0', 'description': 'pitch'},
+ {'name': 'tf.rotation.roll', 'default': '0.0', 'description': 'roll'},
+ ]
+
+def set_configurable_parameters(local_params):
+ return dict([(param['original_name'], LaunchConfiguration(param['name'])) for param in local_params])
+
+def duplicate_params(general_params, suffix):
+    local_params = copy.deepcopy(general_params)
+    for param in local_params:
+        param['original_name'] = param['name']
+        param['name'] += suffix
+    return local_params
+
+def launch_static_transform_publisher_node(context : LaunchContext):
+ # Static transformation from camera1 to camera2
+ node = launch_ros.actions.Node(
+ name = "my_static_transform_publisher",
+ package = "tf2_ros",
+ executable = "static_transform_publisher",
+ arguments = [context.launch_configurations['tf.translation.x'],
+ context.launch_configurations['tf.translation.y'],
+ context.launch_configurations['tf.translation.z'],
+ context.launch_configurations['tf.rotation.yaw'],
+ context.launch_configurations['tf.rotation.pitch'],
+ context.launch_configurations['tf.rotation.roll'],
+ context.launch_configurations['camera_name1'] + "_link",
+ context.launch_configurations['camera_name2'] + "_link"]
+ )
+ return [node]
+
+def generate_launch_description():
+ params1 = duplicate_params(rs_launch.configurable_parameters, '1')
+ params2 = duplicate_params(rs_launch.configurable_parameters, '2')
+ return LaunchDescription(
+ rs_launch.declare_configurable_parameters(local_parameters) +
+ rs_launch.declare_configurable_parameters(params1) +
+ rs_launch.declare_configurable_parameters(params2) +
+ [
+ OpaqueFunction(function=rs_launch.launch_setup,
+ kwargs = {'params' : set_configurable_parameters(params1),
+ 'param_name_suffix': '1'}),
+ OpaqueFunction(function=rs_launch.launch_setup,
+ kwargs = {'params' : set_configurable_parameters(params2),
+ 'param_name_suffix': '2'}),
+ OpaqueFunction(function=launch_static_transform_publisher_node),
+ launch_ros.actions.Node(
+ package='rviz2',
+ namespace='',
+ executable='rviz2',
+ name='rviz2',
+ arguments=['-d', [ThisLaunchFileDir(), '/rviz/dual_camera_pointcloud.rviz']]
+ )
+ ])
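The per-camera suffixing scheme used by `rs_dual_camera_launch.py` above can be sketched standalone. A minimal illustration (the parameter dictionaries below are stand-ins for `rs_launch.configurable_parameters`, not the real list):

```python
import copy

def duplicate_params(general_params, suffix):
    """Return a deep copy of the parameter list with the suffix appended
    to each name, remembering the original name for later lookup."""
    local_params = copy.deepcopy(general_params)
    for param in local_params:
        param['original_name'] = param['name']
        param['name'] += suffix
    return local_params

# Illustrative stand-in for rs_launch.configurable_parameters
base = [{'name': 'serial_no', 'default': "''", 'description': 'device serial number'},
        {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'}]

params1 = duplicate_params(base, '1')
params2 = duplicate_params(base, '2')

# Each camera declares its own launch arguments (serial_no1, serial_no2, ...)
print([p['name'] for p in params1])  # ['serial_no1', 'enable_depth1']
# The original name survives, so each node still receives the same parameter key
print(params1[0]['original_name'])   # serial_no
```

Because the copy is deep, the shared base list is untouched, which is what lets the same list be duplicated once per device.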
diff --git a/realsense2_camera/examples/dual_camera/rviz/dual_camera_pointcloud.rviz b/realsense2_camera/examples/dual_camera/rviz/dual_camera_pointcloud.rviz
new file mode 100644
index 0000000000..1a4c0caa11
--- /dev/null
+++ b/realsense2_camera/examples/dual_camera/rviz/dual_camera_pointcloud.rviz
@@ -0,0 +1,194 @@
+Panels:
+ - Class: rviz_common/Displays
+ Help Height: 0
+ Name: Displays
+ Property Tree Widget:
+ Expanded:
+ - /Global Options1
+ - /Status1
+ - /Grid1
+ - /PointCloud21
+ - /PointCloud22
+ Splitter Ratio: 0.5
+ Tree Height: 865
+ - Class: rviz_common/Selection
+ Name: Selection
+ - Class: rviz_common/Tool Properties
+ Expanded:
+ - /Publish Point1
+ Name: Tool Properties
+ Splitter Ratio: 0.5886790156364441
+ - Class: rviz_common/Views
+ Expanded:
+ - /Current View1
+ Name: Views
+ Splitter Ratio: 0.5
+Visualization Manager:
+ Class: ""
+ Displays:
+ - Alpha: 0.5
+ Cell Size: 1
+ Class: rviz_default_plugins/Grid
+ Color: 160; 160; 164
+ Enabled: true
+ Line Style:
+ Line Width: 0.029999999329447746
+ Value: Lines
+ Name: Grid
+ Normal Cell Count: 0
+ Offset:
+ X: 0
+ Y: 0
+ Z: 0
+ Plane: XY
+ Plane Cell Count: 10
+ Reference Frame:
+ Value: true
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz_default_plugins/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: RGB8
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 4096
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Selectable: true
+ Size (Pixels): 3
+ Size (m): 0.009999999776482582
+ Style: Flat Squares
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ Filter size: 10
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera1/camera1/depth/color/points
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz_default_plugins/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: RGB8
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 4096
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Selectable: true
+ Size (Pixels): 3
+ Size (m): 0.009999999776482582
+ Style: Flat Squares
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ Filter size: 10
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera2/camera2/depth/color/points
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ Enabled: true
+ Global Options:
+ Background Color: 48; 48; 48
+ Fixed Frame: camera1_link
+ Frame Rate: 30
+ Name: root
+ Tools:
+ - Class: rviz_default_plugins/MoveCamera
+ - Class: rviz_default_plugins/Select
+ - Class: rviz_default_plugins/FocusCamera
+ - Class: rviz_default_plugins/Measure
+ Line color: 128; 128; 0
+ - Class: rviz_default_plugins/SetInitialPose
+ Covariance x: 0.25
+ Covariance y: 0.25
+ Covariance yaw: 0.06853891909122467
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /initialpose
+ - Class: rviz_default_plugins/SetGoal
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /move_base_simple/goal
+ - Class: rviz_default_plugins/PublishPoint
+ Single click: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /clicked_point
+ Transformation:
+ Current:
+ Class: rviz_default_plugins/TF
+ Value: true
+ Views:
+ Current:
+ Class: rviz_default_plugins/Orbit
+ Distance: 8.93685531616211
+ Enable Stereo Rendering:
+ Stereo Eye Separation: 0.05999999865889549
+ Stereo Focal Distance: 1
+ Swap Stereo Eyes: false
+ Value: false
+ Focal Point:
+ X: -0.18814913928508759
+ Y: -0.17941315472126007
+ Z: 0.14549313485622406
+ Focal Shape Fixed Size: true
+ Focal Shape Size: 0.05000000074505806
+ Invert Z Axis: false
+ Name: Current View
+ Near Clip Distance: 0.009999999776482582
+ Pitch: -1.5697963237762451
+ Target Frame:
+ Value: Orbit (rviz_default_plugins)
+ Yaw: 4.730405330657959
+ Saved: ~
+Window Geometry:
+ Displays:
+ collapsed: false
+ Height: 1016
+ Hide Left Dock: false
+ Hide Right Dock: true
+  QMainWindow State: 000000ff00000000fd0000000400000000000001560000039efc0200000010fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d0000039e000000c900fffffffb0000000a0049006d006100670065000000015b0000009a0000000000000000fb0000000a0049006d0061006700650000000197000000d60000000000000000fb0000000a0049006d0061006700650000000203000000f20000000000000000fb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d00610067006500000002a8000001330000000000000000fb0000000a0049006d00610067006501000001940000005d0000000000000000fb0000000a0049006d00610067006501000001f70000007a0000000000000000fb0000000a0049006d00610067006501000002770000009d0000000000000000fb0000000a0049006d006100670065010000031a000000ca0000000000000000000000010000010f000001effc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000001ef000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b20000000000000000000000020000073d000000a9fc0100000002fb0000000a0049006d00610067006503000001c5000000bb000001f8000001b0fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000005da0000039e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ Selection:
+ collapsed: false
+ Tool Properties:
+ collapsed: false
+ Views:
+ collapsed: true
+ Width: 1846
+ X: 74
+ Y: 27
diff --git a/realsense2_camera/examples/launch_from_rosbag/README.md b/realsense2_camera/examples/launch_from_rosbag/README.md
new file mode 100644
index 0000000000..5bfbb017f5
--- /dev/null
+++ b/realsense2_camera/examples/launch_from_rosbag/README.md
@@ -0,0 +1,17 @@
+# Launching RS ROS2 node from rosbag File
+The following example streams from a rosbag file recorded with the Intel RealSense Viewer instead of from a live camera. It is useful for testing and for repeating the same sequence.
+```
+ros2 launch realsense2_camera rs_launch_from_rosbag.py
+```
+By default, the 'rs_launch_from_rosbag.py' launch file uses the "/rosbag/D435i_Depth_and_IMU_Stands_still.bag" rosbag file.
+
+Users can also provide a different rosbag file via the command line:
+```
+ros2 launch realsense2_camera rs_launch_from_rosbag.py rosbag_filename:="/full/path/to/rosbag/file"
+```
+or
+```
+ros2 launch realsense2_camera rs_launch.py rosbag_filename:="/full/path/to/rosbag/file"
+```
+
+Check out [sample recordings](https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md) for a few recorded samples.
\ No newline at end of file
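A wrong bag path typically surfaces only once the node has started, so a small pre-flight check can save a confusing error. A hypothetical helper (`resolve_rosbag` is not part of the package):

```python
import pathlib

def resolve_rosbag(filename):
    """Return the absolute path to a bag file, failing fast if it is missing.
    Hypothetical pre-flight helper; not part of realsense2_camera."""
    path = pathlib.Path(filename).expanduser().resolve()
    if not path.is_file():
        raise FileNotFoundError(f"rosbag file not found: {path}")
    return path
```

Running such a check before `ros2 launch` turns a late node-startup failure into an immediate, readable error.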
diff --git a/realsense2_camera/examples/launch_from_rosbag/rs_launch_from_rosbag.py b/realsense2_camera/examples/launch_from_rosbag/rs_launch_from_rosbag.py
new file mode 100644
index 0000000000..a0d9620cad
--- /dev/null
+++ b/realsense2_camera/examples/launch_from_rosbag/rs_launch_from_rosbag.py
@@ -0,0 +1,56 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# DESCRIPTION #
+# ----------- #
+# Use this launch file to launch a device from a rosbag file.
+# The parameters available on the command line are described in rs_launch.configurable_parameters.
+# command line example:
+# ros2 launch realsense2_camera rs_launch_from_rosbag.py
+
+"""Launch realsense2_camera node."""
+from launch import LaunchDescription
+import launch_ros.actions
+from launch.actions import OpaqueFunction
+from launch.substitutions import LaunchConfiguration, ThisLaunchFileDir
+import sys
+import pathlib
+sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
+import os
+from ament_index_python.packages import get_package_share_directory
+sys.path.append(os.path.join(get_package_share_directory('realsense2_camera'), 'launch'))
+import rs_launch
+
+local_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
+ {'name': 'camera_namespace', 'default': 'camera', 'description': 'camera namespace'},
+ {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
+                    {'name': 'enable_gyro', 'default': 'true', 'description': 'enable gyro stream'},
+                    {'name': 'enable_accel', 'default': 'true', 'description': 'enable accel stream'},
+                    {'name': 'rosbag_filename', 'default': [ThisLaunchFileDir(), "/rosbag/D435i_Depth_and_IMU_Stands_still.bag"], 'description': 'A RealSense bag file to stream from as if it were a device'},
+ ]
+
+def set_configurable_parameters(local_params):
+ return dict([(param['name'], LaunchConfiguration(param['name'])) for param in local_params])
+
+
+def generate_launch_description():
+ params = rs_launch.configurable_parameters
+ return LaunchDescription(
+ rs_launch.declare_configurable_parameters(local_parameters) +
+ rs_launch.declare_configurable_parameters(params) +
+ [
+ OpaqueFunction(function=rs_launch.launch_setup,
+ kwargs = {'params' : set_configurable_parameters(params)}
+ )
+ ])
diff --git a/realsense2_camera/examples/launch_params_from_file/README.md b/realsense2_camera/examples/launch_params_from_file/README.md
new file mode 100644
index 0000000000..ce9980aa5b
--- /dev/null
+++ b/realsense2_camera/examples/launch_params_from_file/README.md
@@ -0,0 +1,33 @@
+# Get RS ROS2 node params from YAML file
+The following example gets the RS ROS2 node params from a YAML file.
+```
+ros2 launch realsense2_camera rs_launch_get_params_from_yaml.py
+```
+
+By default, the 'rs_launch_get_params_from_yaml.py' launch file uses the "/config/config.yaml" YAML file.
+
+Users can provide a different YAML file via the command line:
+```
+ros2 launch realsense2_camera rs_launch_get_params_from_yaml.py config_file:="/full/path/to/config/file"
+```
+or
+```
+ros2 launch realsense2_camera rs_launch.py config_file:="/full/path/to/config/file"
+```
+
+## Syntax for defining params in YAML file
+```
+param1: value
+param2: value
+```
+
+Example:
+```
+enable_color: true
+rgb_camera.profile: 1280x720x15
+enable_depth: true
+align_depth.enable: true
+enable_sync: true
+publish_tf: true
+tf_publish_rate: 1.0
+```
\ No newline at end of file
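The flat `param: value` syntax above maps one-to-one onto node parameters. As an illustration of that mapping only (the node itself relies on ROS 2 parameter loading, and real YAML should be parsed with a YAML library), a minimal hand parser:

```python
def parse_flat_yaml(text):
    """Parse a flat 'param: value' YAML fragment into a dict.
    Illustration only: values stay strings; type conversion is left
    to the real parameter machinery."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, _, value = line.partition(':')
        params[key.strip()] = value.strip()
    return params

config = """\
enable_color: true
rgb_camera.profile: 1280x720x15
align_depth.enable: true
tf_publish_rate: 1.0
"""

print(parse_flat_yaml(config)['rgb_camera.profile'])  # 1280x720x15
```

Note that dotted keys such as `rgb_camera.profile` are single parameter names, not nested YAML mappings.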
diff --git a/realsense2_camera/examples/launch_params_from_file/config/config.yaml b/realsense2_camera/examples/launch_params_from_file/config/config.yaml
new file mode 100644
index 0000000000..d15a19a0fe
--- /dev/null
+++ b/realsense2_camera/examples/launch_params_from_file/config/config.yaml
@@ -0,0 +1,7 @@
+enable_color: true
+rgb_camera.profile: 1280x720x15
+enable_depth: true
+align_depth.enable: true
+enable_sync: true
+publish_tf: true
+tf_publish_rate: 1.0
diff --git a/realsense2_camera/examples/launch_params_from_file/rs_launch_get_params_from_yaml.py b/realsense2_camera/examples/launch_params_from_file/rs_launch_get_params_from_yaml.py
new file mode 100644
index 0000000000..c0ebd7bfbd
--- /dev/null
+++ b/realsense2_camera/examples/launch_params_from_file/rs_launch_get_params_from_yaml.py
@@ -0,0 +1,53 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# DESCRIPTION #
+# ----------- #
+# Use this launch file to launch a device and get the params from a YAML file.
+# The parameters available on the command line are described in rs_launch.configurable_parameters.
+# command line example:
+# ros2 launch realsense2_camera rs_launch_get_params_from_yaml.py
+
+"""Launch realsense2_camera node."""
+from launch import LaunchDescription
+import launch_ros.actions
+from launch.actions import OpaqueFunction
+from launch.substitutions import LaunchConfiguration, ThisLaunchFileDir
+import sys
+import pathlib
+sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
+import os
+from ament_index_python.packages import get_package_share_directory
+sys.path.append(os.path.join(get_package_share_directory('realsense2_camera'), 'launch'))
+import rs_launch
+
+local_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
+ {'name': 'camera_namespace', 'default': 'camera', 'description': 'camera namespace'},
+ {'name': 'config_file', 'default': [ThisLaunchFileDir(), "/config/config.yaml"], 'description': 'yaml config file'},
+ ]
+
+def set_configurable_parameters(local_params):
+ return dict([(param['name'], LaunchConfiguration(param['name'])) for param in local_params])
+
+
+def generate_launch_description():
+ params = rs_launch.configurable_parameters
+ return LaunchDescription(
+ rs_launch.declare_configurable_parameters(local_parameters) +
+ rs_launch.declare_configurable_parameters(params) +
+ [
+ OpaqueFunction(function=rs_launch.launch_setup,
+ kwargs = {'params' : set_configurable_parameters(params)}
+ )
+ ])
diff --git a/realsense2_camera/examples/pointcloud/README.md b/realsense2_camera/examples/pointcloud/README.md
new file mode 100644
index 0000000000..1285e0a843
--- /dev/null
+++ b/realsense2_camera/examples/pointcloud/README.md
@@ -0,0 +1,17 @@
+# PointCloud Visualization
+The following example starts the camera and simultaneously opens the RViz GUI to visualize the published pointcloud.
+```
+ros2 launch realsense2_camera rs_pointcloud_launch.py
+```
+
+Alternatively, start the camera in terminal 1:
+```
+ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true
+```
+and in terminal 2 open RViz to visualize the pointcloud.
+
+# PointCloud with different coordinate systems
+This example opens RViz and shows the camera model together with its coordinate systems and the published pointcloud.
+```
+ros2 launch realsense2_camera rs_d455_pointcloud_launch.py
+```
\ No newline at end of file
diff --git a/realsense2_camera/examples/pointcloud/rs_d405_pointcloud_launch.py b/realsense2_camera/examples/pointcloud/rs_d405_pointcloud_launch.py
new file mode 100644
index 0000000000..e4db15f0bc
--- /dev/null
+++ b/realsense2_camera/examples/pointcloud/rs_d405_pointcloud_launch.py
@@ -0,0 +1,92 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# DESCRIPTION #
+# ----------- #
+# Use this launch file to launch a device and visualize the pointcloud along with the D405 camera model.
+# The parameters available on the command line are described in rs_launch.configurable_parameters.
+# command line example:
+# ros2 launch realsense2_camera rs_d405_pointcloud_launch.py
+
+"""Launch realsense2_camera node."""
+from launch import LaunchDescription
+import launch_ros.actions
+from launch.actions import OpaqueFunction
+from launch.substitutions import LaunchConfiguration, ThisLaunchFileDir
+import os
+import sys
+import pathlib
+import xacro
+import tempfile
+from ament_index_python.packages import get_package_share_directory
+sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
+sys.path.append(os.path.join(get_package_share_directory('realsense2_camera'), 'launch'))
+import rs_launch
+
+local_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
+ {'name': 'camera_namespace', 'default': 'camera', 'description': 'camera namespace'},
+ {'name': 'device_type', 'default': "d405", 'description': 'choose device by type'},
+ {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'pointcloud.enable', 'default': 'true', 'description': 'enable pointcloud'},
+ ]
+
+def to_urdf(xacro_path, parameters=None):
+ """Convert the given xacro file to URDF file.
+ * xacro_path -- the path to the xacro file
+ * parameters -- to be used when xacro file is parsed.
+ """
+ urdf_path = tempfile.mktemp(prefix="%s_" % os.path.basename(xacro_path))
+
+ # open and process file
+ doc = xacro.process_file(xacro_path, mappings=parameters)
+ # open the output file
+ out = xacro.open_output(urdf_path)
+ out.write(doc.toprettyxml(indent=' '))
+
+ return urdf_path
+
+def set_configurable_parameters(local_params):
+ return dict([(param['name'], LaunchConfiguration(param['name'])) for param in local_params])
+
+
+def generate_launch_description():
+ params = rs_launch.configurable_parameters
+ xacro_path = os.path.join(get_package_share_directory('realsense2_description'), 'urdf', 'test_d405_camera.urdf.xacro')
+ urdf = to_urdf(xacro_path, {'use_nominal_extrinsics': 'true', 'add_plug': 'true'})
+ return LaunchDescription(
+ rs_launch.declare_configurable_parameters(local_parameters) +
+ rs_launch.declare_configurable_parameters(params) +
+ [
+ OpaqueFunction(function=rs_launch.launch_setup,
+ kwargs = {'params' : set_configurable_parameters(params)}
+ ),
+ launch_ros.actions.Node(
+ package='rviz2',
+ namespace='',
+ executable='rviz2',
+ name='rviz2',
+ arguments=['-d', [ThisLaunchFileDir(), '/rviz/urdf_pointcloud.rviz']],
+ output='screen',
+ parameters=[{'use_sim_time': False}]
+ ),
+ launch_ros.actions.Node(
+ name='model_node',
+ package='robot_state_publisher',
+ executable='robot_state_publisher',
+ namespace='',
+ output='screen',
+ arguments=[urdf]
+ )
+ ])
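The `to_urdf` helper above uses `tempfile.mktemp`, which is deprecated because the returned name can race with another process. The same write-rendered-output-to-a-named-temp-file pattern with the safer `tempfile.mkstemp` (a sketch; the `render` callable stands in for `xacro.process_file` plus serialization):

```python
import os
import tempfile

def to_temp_file(source_path, render):
    """Render source content into a uniquely named temp file and return its path.
    'render' stands in for the xacro processing step; mkstemp creates the
    file atomically, avoiding the race inherent in mktemp."""
    fd, out_path = tempfile.mkstemp(prefix="%s_" % os.path.basename(source_path))
    with os.fdopen(fd, 'w') as out:
        out.write(render(source_path))
    return out_path
```

The prefix keeps the original filename visible in the temp name, which helps when several converted URDFs accumulate in the temp directory.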
diff --git a/realsense2_camera/examples/pointcloud/rs_d455_pointcloud_launch.py b/realsense2_camera/examples/pointcloud/rs_d455_pointcloud_launch.py
new file mode 100644
index 0000000000..85ee0ff48d
--- /dev/null
+++ b/realsense2_camera/examples/pointcloud/rs_d455_pointcloud_launch.py
@@ -0,0 +1,92 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# DESCRIPTION #
+# ----------- #
+# Use this launch file to launch a device and visualize the pointcloud along with the D455 camera model.
+# The parameters available on the command line are described in rs_launch.configurable_parameters.
+# command line example:
+# ros2 launch realsense2_camera rs_d455_pointcloud_launch.py
+
+"""Launch realsense2_camera node."""
+from launch import LaunchDescription
+import launch_ros.actions
+from launch.actions import OpaqueFunction
+from launch.substitutions import LaunchConfiguration, ThisLaunchFileDir
+import os
+import sys
+import pathlib
+import xacro
+import tempfile
+from ament_index_python.packages import get_package_share_directory
+sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
+sys.path.append(os.path.join(get_package_share_directory('realsense2_camera'), 'launch'))
+import rs_launch
+
+local_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
+ {'name': 'camera_namespace', 'default': 'camera', 'description': 'camera namespace'},
+ {'name': 'device_type', 'default': "d455", 'description': 'choose device by type'},
+ {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'pointcloud.enable', 'default': 'true', 'description': 'enable pointcloud'},
+ ]
+
+def to_urdf(xacro_path, parameters=None):
+ """Convert the given xacro file to URDF file.
+ * xacro_path -- the path to the xacro file
+ * parameters -- to be used when xacro file is parsed.
+ """
+ urdf_path = tempfile.mktemp(prefix="%s_" % os.path.basename(xacro_path))
+
+ # open and process file
+ doc = xacro.process_file(xacro_path, mappings=parameters)
+ # open the output file
+ out = xacro.open_output(urdf_path)
+ out.write(doc.toprettyxml(indent=' '))
+
+ return urdf_path
+
+def set_configurable_parameters(local_params):
+ return dict([(param['name'], LaunchConfiguration(param['name'])) for param in local_params])
+
+
+def generate_launch_description():
+ params = rs_launch.configurable_parameters
+ xacro_path = os.path.join(get_package_share_directory('realsense2_description'), 'urdf', 'test_d455_camera.urdf.xacro')
+ urdf = to_urdf(xacro_path, {'use_nominal_extrinsics': 'true', 'add_plug': 'true'})
+ return LaunchDescription(
+ rs_launch.declare_configurable_parameters(local_parameters) +
+ rs_launch.declare_configurable_parameters(params) +
+ [
+ OpaqueFunction(function=rs_launch.launch_setup,
+ kwargs = {'params' : set_configurable_parameters(params)}
+ ),
+ launch_ros.actions.Node(
+ package='rviz2',
+ namespace='',
+ executable='rviz2',
+ name='rviz2',
+ arguments=['-d', [ThisLaunchFileDir(), '/rviz/urdf_pointcloud.rviz']],
+ output='screen',
+ parameters=[{'use_sim_time': False}]
+ ),
+ launch_ros.actions.Node(
+ name='model_node',
+ package='robot_state_publisher',
+ executable='robot_state_publisher',
+ namespace='',
+ output='screen',
+ arguments=[urdf]
+ )
+ ])
diff --git a/realsense2_camera/examples/pointcloud/rs_pointcloud_launch.py b/realsense2_camera/examples/pointcloud/rs_pointcloud_launch.py
new file mode 100644
index 0000000000..f0acfe80b2
--- /dev/null
+++ b/realsense2_camera/examples/pointcloud/rs_pointcloud_launch.py
@@ -0,0 +1,62 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# DESCRIPTION #
+# ----------- #
+# Use this launch file to launch a device and visualize the pointcloud.
+# The parameters available on the command line are described in rs_launch.configurable_parameters.
+# command line example:
+# ros2 launch realsense2_camera rs_pointcloud_launch.py
+
+"""Launch realsense2_camera node."""
+from launch import LaunchDescription
+import launch_ros.actions
+from launch.actions import OpaqueFunction
+from launch.substitutions import LaunchConfiguration, ThisLaunchFileDir
+import sys
+import pathlib
+sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
+import os
+from ament_index_python.packages import get_package_share_directory
+sys.path.append(os.path.join(get_package_share_directory('realsense2_camera'), 'launch'))
+import rs_launch
+
+local_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
+ {'name': 'camera_namespace', 'default': 'camera', 'description': 'camera namespace'},
+ {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'pointcloud.enable', 'default': 'true', 'description': 'enable pointcloud'},
+ ]
+
+def set_configurable_parameters(local_params):
+ return dict([(param['name'], LaunchConfiguration(param['name'])) for param in local_params])
+
+
+def generate_launch_description():
+ params = rs_launch.configurable_parameters
+ return LaunchDescription(
+ rs_launch.declare_configurable_parameters(local_parameters) +
+ rs_launch.declare_configurable_parameters(params) +
+ [
+ OpaqueFunction(function=rs_launch.launch_setup,
+ kwargs = {'params' : set_configurable_parameters(params)}
+ ),
+ launch_ros.actions.Node(
+ package='rviz2',
+ namespace='',
+ executable='rviz2',
+ name='rviz2',
+ arguments=['-d', [ThisLaunchFileDir(), '/rviz/pointcloud.rviz']]
+ )
+ ])
diff --git a/realsense2_camera/examples/pointcloud/rviz/pointcloud.rviz b/realsense2_camera/examples/pointcloud/rviz/pointcloud.rviz
new file mode 100644
index 0000000000..bb8955146f
--- /dev/null
+++ b/realsense2_camera/examples/pointcloud/rviz/pointcloud.rviz
@@ -0,0 +1,189 @@
+Panels:
+ - Class: rviz_common/Displays
+ Help Height: 0
+ Name: Displays
+ Property Tree Widget:
+ Expanded:
+ - /Global Options1
+ - /Status1
+ - /Grid1
+ - /PointCloud21
+ Splitter Ratio: 0.5
+ Tree Height: 382
+ - Class: rviz_common/Selection
+ Name: Selection
+ - Class: rviz_common/Tool Properties
+ Expanded:
+ - /Publish Point1
+ Name: Tool Properties
+ Splitter Ratio: 0.5886790156364441
+ - Class: rviz_common/Views
+ Expanded:
+ - /Current View1
+ Name: Views
+ Splitter Ratio: 0.5
+Visualization Manager:
+ Class: ""
+ Displays:
+ - Alpha: 0.5
+ Cell Size: 1
+ Class: rviz_default_plugins/Grid
+ Color: 160; 160; 164
+ Enabled: true
+ Line Style:
+ Line Width: 0.029999999329447746
+ Value: Lines
+ Name: Grid
+ Normal Cell Count: 0
+ Offset:
+ X: 0
+ Y: 0
+ Z: 0
+ Plane: XY
+ Plane Cell Count: 10
+ Reference Frame:
+ Value: true
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz_default_plugins/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: RGB8
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 4096
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Selectable: true
+ Size (Pixels): 3
+ Size (m): 0.009999999776482582
+ Style: Flat Squares
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ Filter size: 10
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/color/points
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ - Class: rviz_default_plugins/Image
+ Enabled: true
+ Max Value: 1
+ Median window: 5
+ Min Value: 0
+ Name: Image
+ Normalize Range: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/color/image_raw
+ Value: true
+ - Class: rviz_default_plugins/Image
+ Enabled: true
+ Max Value: 1
+ Median window: 5
+ Min Value: 0
+ Name: Image
+ Normalize Range: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/image_rect_raw
+ Value: true
+ Enabled: true
+ Global Options:
+ Background Color: 48; 48; 48
+ Fixed Frame: camera_depth_optical_frame
+ Frame Rate: 30
+ Name: root
+ Tools:
+ - Class: rviz_default_plugins/MoveCamera
+ - Class: rviz_default_plugins/Select
+ - Class: rviz_default_plugins/FocusCamera
+ - Class: rviz_default_plugins/Measure
+ Line color: 128; 128; 0
+ - Class: rviz_default_plugins/SetInitialPose
+ Covariance x: 0.25
+ Covariance y: 0.25
+ Covariance yaw: 0.06853891909122467
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /initialpose
+ - Class: rviz_default_plugins/SetGoal
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /move_base_simple/goal
+ - Class: rviz_default_plugins/PublishPoint
+ Single click: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /clicked_point
+ Transformation:
+ Current:
+ Class: rviz_default_plugins/TF
+ Value: true
+ Views:
+ Current:
+ Class: rviz_default_plugins/Orbit
+ Distance: 1.0510121583938599
+ Enable Stereo Rendering:
+ Stereo Eye Separation: 0.05999999865889549
+ Stereo Focal Distance: 1
+ Swap Stereo Eyes: false
+ Value: false
+ Focal Point:
+ X: -0.18814913928508759
+ Y: -0.17941315472126007
+ Z: 0.14549313485622406
+ Focal Shape Fixed Size: true
+ Focal Shape Size: 0.05000000074505806
+ Invert Z Axis: false
+ Name: Current View
+ Near Clip Distance: 0.009999999776482582
+ Pitch: -1.5697963237762451
+ Target Frame:
+ Value: Orbit (rviz_default_plugins)
+ Yaw: 4.730405330657959
+ Saved: ~
+Window Geometry:
+ Displays:
+ collapsed: false
+ Height: 1016
+ Hide Left Dock: false
+ Hide Right Dock: true
+ Image:
+ collapsed: false
+ QMainWindow State: 000000ff00000000fd0000000400000000000001560000039efc0200000010fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d000001bb000000c900fffffffb0000000a0049006d006100670065010000010c0000001c0000000000000000fb0000000a0049006d006100670065010000012e0000001f0000000000000000fb0000000a0049006d00610067006501000001530000001c0000000000000000fb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d0061006700650100000175000000190000000000000000fb0000000a0049006d00610067006501000001fe000000cd0000002800fffffffb0000000a0049006d00610067006501000002d10000010a0000002800fffffffb0000000a0049006d0061006700650000000233000000b70000000000000000fb0000000a0049006d00610067006501000002b2000001290000000000000000000000010000010f000001effc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000001ef000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b20000000000000000000000020000073d000000a9fc0100000002fb0000000a0049006d00610067006503000001c5000000bb000001f8000001b0fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000005da0000039e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ Selection:
+ collapsed: false
+ Tool Properties:
+ collapsed: false
+ Views:
+ collapsed: true
+ Width: 1846
+ X: 74
+ Y: 27
diff --git a/realsense2_camera/examples/pointcloud/rviz/urdf_pointcloud.rviz b/realsense2_camera/examples/pointcloud/rviz/urdf_pointcloud.rviz
new file mode 100644
index 0000000000..294917ad48
--- /dev/null
+++ b/realsense2_camera/examples/pointcloud/rviz/urdf_pointcloud.rviz
@@ -0,0 +1,363 @@
+Panels:
+ - Class: rviz_common/Displays
+ Help Height: 78
+ Name: Displays
+ Property Tree Widget:
+ Expanded:
+ - /Global Options1
+ - /Status1
+ - /RobotModel1
+ - /RobotModel1/Description Topic1
+ - /TF1/Frames1
+ - /PointCloud21
+ - /Image1
+ - /Image2
+ Splitter Ratio: 0.36195287108421326
+ Tree Height: 308
+ - Class: rviz_common/Selection
+ Name: Selection
+ - Class: rviz_common/Tool Properties
+ Expanded:
+ - /2D Pose Estimate1
+ - /Publish Point1
+ Name: Tool Properties
+ Splitter Ratio: 0.5886790156364441
+ - Class: rviz_common/Views
+ Expanded:
+ - /Current View1
+ Name: Views
+ Splitter Ratio: 0.5
+Visualization Manager:
+ Class: ""
+ Displays:
+ - Alpha: 0.5
+ Cell Size: 1
+ Class: rviz_default_plugins/Grid
+ Color: 160; 160; 164
+ Enabled: true
+ Line Style:
+ Line Width: 0.029999999329447746
+ Value: Lines
+ Name: Grid
+ Normal Cell Count: 0
+ Offset:
+ X: 0
+ Y: 0
+ Z: 0
+ Plane: XY
+ Plane Cell Count: 10
+ Reference Frame:
+ Value: true
+ - Alpha: 1
+ Class: rviz_default_plugins/RobotModel
+ Collision Enabled: false
+ Description File: ""
+ Description Source: Topic
+ Description Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /robot_description
+ Enabled: true
+ Links:
+ All Links Enabled: true
+ Expand Joint Details: false
+ Expand Link Details: false
+ Expand Tree: false
+ Link Tree Style: Links in Alphabetic Order
+ base_link:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_accel_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_accel_optical_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_bottom_screw_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_color_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_color_optical_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_depth_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_depth_optical_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_gyro_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_gyro_optical_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_imu_optical_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_infra1_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_infra1_optical_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_infra2_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_infra2_optical_frame:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ camera_link:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ Value: true
+ camera_usb_plug_link:
+ Alpha: 1
+ Show Axes: false
+ Show Trail: false
+ Value: true
+ Mass Properties:
+ Inertia: false
+ Mass: false
+ Name: RobotModel
+ TF Prefix: ""
+ Update Interval: 0
+ Value: true
+ Visual Enabled: true
+ - Class: rviz_default_plugins/TF
+ Enabled: true
+ Frame Timeout: 15
+ Frames:
+ All Enabled: true
+ base_link:
+ Value: true
+ camera_accel_frame:
+ Value: true
+ camera_accel_optical_frame:
+ Value: true
+ camera_bottom_screw_frame:
+ Value: true
+ camera_color_frame:
+ Value: true
+ camera_color_optical_frame:
+ Value: true
+ camera_depth_frame:
+ Value: true
+ camera_depth_optical_frame:
+ Value: true
+ camera_gyro_frame:
+ Value: true
+ camera_gyro_optical_frame:
+ Value: true
+ camera_imu_optical_frame:
+ Value: true
+ camera_infra1_frame:
+ Value: true
+ camera_infra1_optical_frame:
+ Value: true
+ camera_infra2_frame:
+ Value: true
+ camera_infra2_optical_frame:
+ Value: true
+ camera_link:
+ Value: true
+ camera_usb_plug_link:
+ Value: true
+ Marker Scale: 0.10000000149011612
+ Name: TF
+ Show Arrows: true
+ Show Axes: true
+ Show Names: false
+ Tree:
+ base_link:
+ camera_bottom_screw_frame:
+ camera_link:
+ camera_accel_frame:
+ camera_accel_optical_frame:
+ {}
+ camera_color_frame:
+ camera_color_optical_frame:
+ {}
+ camera_depth_frame:
+ camera_depth_optical_frame:
+ {}
+ camera_gyro_frame:
+ camera_gyro_optical_frame:
+ camera_imu_optical_frame:
+ {}
+ camera_infra1_frame:
+ camera_infra1_optical_frame:
+ {}
+ camera_infra2_frame:
+ camera_infra2_optical_frame:
+ {}
+ camera_usb_plug_link:
+ {}
+ Update Interval: 0
+ Value: true
+ - Alpha: 1
+ Autocompute Intensity Bounds: true
+ Autocompute Value Bounds:
+ Max Value: 10
+ Min Value: -10
+ Value: true
+ Axis: Z
+ Channel Name: intensity
+ Class: rviz_default_plugins/PointCloud2
+ Color: 255; 255; 255
+ Color Transformer: RGB8
+ Decay Time: 0
+ Enabled: true
+ Invert Rainbow: false
+ Max Color: 255; 255; 255
+ Max Intensity: 4096
+ Min Color: 0; 0; 0
+ Min Intensity: 0
+ Name: PointCloud2
+ Position Transformer: XYZ
+ Selectable: true
+ Size (Pixels): 3
+ Size (m): 0.009999999776482582
+ Style: Flat Squares
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ Filter size: 10
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/color/points
+ Use Fixed Frame: true
+ Use rainbow: true
+ Value: true
+ - Class: rviz_default_plugins/Image
+ Enabled: true
+ Max Value: 1
+ Median window: 5
+ Min Value: 0
+ Name: Image
+ Normalize Range: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/color/image_raw
+ Value: true
+ - Class: rviz_default_plugins/Image
+ Enabled: true
+ Max Value: 1
+ Median window: 5
+ Min Value: 0
+ Name: Image
+ Normalize Range: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/image_rect_raw
+ Value: true
+ Enabled: true
+ Global Options:
+ Background Color: 48; 48; 48
+ Fixed Frame: base_link
+ Frame Rate: 30
+ Name: root
+ Tools:
+ - Class: rviz_default_plugins/Interact
+ Hide Inactive Objects: true
+ - Class: rviz_default_plugins/MoveCamera
+ - Class: rviz_default_plugins/Select
+ - Class: rviz_default_plugins/FocusCamera
+ - Class: rviz_default_plugins/Measure
+ Line color: 128; 128; 0
+ - Class: rviz_default_plugins/SetInitialPose
+ Covariance x: 0.25
+ Covariance y: 0.25
+ Covariance yaw: 0.06853891909122467
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /initialpose
+ - Class: rviz_default_plugins/SetGoal
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /goal_pose
+ - Class: rviz_default_plugins/PublishPoint
+ Single click: true
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /clicked_point
+ Transformation:
+ Current:
+ Class: rviz_default_plugins/TF
+ Value: true
+ Views:
+ Current:
+ Class: rviz_default_plugins/Orbit
+ Distance: 4.639365196228027
+ Enable Stereo Rendering:
+ Stereo Eye Separation: 0.05999999865889549
+ Stereo Focal Distance: 1
+ Swap Stereo Eyes: false
+ Value: false
+ Focal Point:
+ X: 0.0008243769989348948
+ Y: -0.00015415334200952202
+ Z: 0.0003343875869177282
+ Focal Shape Fixed Size: true
+ Focal Shape Size: 0.05000000074505806
+ Invert Z Axis: false
+ Name: Current View
+ Near Clip Distance: 0.009999999776482582
+ Pitch: 1.2247962951660156
+ Target Frame:
+ Value: Orbit (rviz)
+ Yaw: 3.343583583831787
+ Saved: ~
+Window Geometry:
+ Displays:
+ collapsed: false
+ Height: 1016
+ Hide Left Dock: false
+ Hide Right Dock: false
+ Image:
+ collapsed: false
+ QMainWindow State: 000000ff00000000fd0000000400000000000002540000039efc020000000cfb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d000001bf000000c900fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d0061006700650100000132000000b00000000000000000fb0000000a0049006d00610067006501000001e8000000ed0000000000000000fb0000000a0049006d0061006700650100000202000000e90000002800fffffffb0000000a0049006d00610067006501000002f1000000ea0000002800ffffff000000010000010f0000039efc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073010000003d0000039e000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000003c70000039e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ Selection:
+ collapsed: false
+ Tool Properties:
+ collapsed: false
+ Views:
+ collapsed: false
+ Width: 1846
+ X: 74
+ Y: 27
diff --git a/realsense2_camera/include/base_realsense_node.h b/realsense2_camera/include/base_realsense_node.h
index 32a29fe4ad..dd38c5a0e4 100755
--- a/realsense2_camera/include/base_realsense_node.h
+++ b/realsense2_camera/include/base_realsense_node.h
@@ -39,8 +39,6 @@
#include
#include
#include
-#include
-#include
#include
#include
@@ -251,14 +249,12 @@ namespace realsense2_camera
void FillImuData_LinearInterpolation(const CimuData imu_data, std::deque& imu_msgs);
void imu_callback(rs2::frame frame);
void imu_callback_sync(rs2::frame frame, imu_sync_method sync_method=imu_sync_method::COPY);
- void pose_callback(rs2::frame frame);
void multiple_message_callback(rs2::frame frame, imu_sync_method sync_method);
void frame_callback(rs2::frame frame);
void startDiagnosticsUpdater();
void monitoringProfileChanges();
void publish_temperature();
- void setupFiltersPublishers();
void setAvailableSensors();
void setCallbackFunctions();
void updateSensors();
@@ -297,7 +293,6 @@ namespace realsense2_camera
std::map> _image_publishers;
rclcpp::Publisher::SharedPtr _labeled_pointcloud_publisher;
std::map::SharedPtr> _imu_publishers;
- std::shared_ptr> _odom_publisher;
std::shared_ptr _synced_imu_publisher;
std::map::SharedPtr> _info_publishers;
std::map::SharedPtr> _metadata_publishers;
@@ -317,8 +312,9 @@ namespace realsense2_camera
bool _enable_rgbd;
bool _is_color_enabled;
bool _is_depth_enabled;
+ bool _is_accel_enabled;
+ bool _is_gyro_enabled;
bool _pointcloud;
- bool _publish_odom_tf;
imu_sync_method _imu_sync_method;
stream_index_pair _pointcloud_texture;
PipelineSyncer _syncer;
diff --git a/realsense2_camera/include/constants.h b/realsense2_camera/include/constants.h
index 567f8d9537..a3c39e1844 100644
--- a/realsense2_camera/include/constants.h
+++ b/realsense2_camera/include/constants.h
@@ -31,19 +31,7 @@
#define ROS_WARN(...) RCLCPP_WARN(_logger, __VA_ARGS__)
#define ROS_ERROR(...) RCLCPP_ERROR(_logger, __VA_ARGS__)
-#ifdef DASHING
-// Based on: https://docs.ros2.org/dashing/api/rclcpp/logging_8hpp.html
-#define MSG(msg) (static_cast(std::ostringstream() << msg)).str()
-#define ROS_DEBUG_STREAM(msg) RCLCPP_DEBUG(_logger, MSG(msg))
-#define ROS_INFO_STREAM(msg) RCLCPP_INFO(_logger, MSG(msg))
-#define ROS_WARN_STREAM(msg) RCLCPP_WARN(_logger, MSG(msg))
-#define ROS_ERROR_STREAM(msg) RCLCPP_ERROR(_logger, MSG(msg))
-#define ROS_FATAL_STREAM(msg) RCLCPP_FATAL(_logger, MSG(msg))
-#define ROS_DEBUG_STREAM_ONCE(msg) RCLCPP_DEBUG_ONCE(_logger, MSG(msg))
-#define ROS_INFO_STREAM_ONCE(msg) RCLCPP_INFO_ONCE(_logger, MSG(msg))
-#define ROS_WARN_STREAM_COND(cond, msg) RCLCPP_WARN_EXPRESSION(_logger, cond, MSG(msg))
-#else
-// Based on: https://docs.ros2.org/foxy/api/rclcpp/logging_8hpp.html
+// Based on: https://docs.ros2.org/latest/api/rclcpp/logging_8hpp.html
#define ROS_DEBUG_STREAM(msg) RCLCPP_DEBUG_STREAM(_logger, msg)
#define ROS_INFO_STREAM(msg) RCLCPP_INFO_STREAM(_logger, msg)
#define ROS_WARN_STREAM(msg) RCLCPP_WARN_STREAM(_logger, msg)
@@ -52,15 +40,12 @@
#define ROS_DEBUG_STREAM_ONCE(msg) RCLCPP_DEBUG_STREAM_ONCE(_logger, msg)
#define ROS_INFO_STREAM_ONCE(msg) RCLCPP_INFO_STREAM_ONCE(_logger, msg)
#define ROS_WARN_STREAM_COND(cond, msg) RCLCPP_WARN_STREAM_EXPRESSION(_logger, cond, msg)
-#endif
#define ROS_WARN_ONCE(msg) RCLCPP_WARN_ONCE(_logger, msg)
#define ROS_WARN_COND(cond, ...) RCLCPP_WARN_EXPRESSION(_logger, cond, __VA_ARGS__)
namespace realsense2_camera
{
- const uint16_t SR300_PID = 0x0aa5; // SR300
- const uint16_t SR300v2_PID = 0x0B48; // SR305
const uint16_t RS400_PID = 0x0ad1; // PSR
const uint16_t RS410_PID = 0x0ad2; // ASR
const uint16_t RS415_PID = 0x0ad3; // ASRC
@@ -83,10 +68,6 @@ namespace realsense2_camera
const uint16_t RS457_PID = 0xABCD; // D457
const uint16_t RS_D585_PID = 0x0B6A; // D585, D for depth
const uint16_t RS_D585S_PID = 0x0B6B; // D585S, S for safety
- const uint16_t RS_L515_PID_PRE_PRQ = 0x0B3D; //
- const uint16_t RS_L515_PID = 0x0B64; //
- const uint16_t RS_L535_PID = 0x0b68;
-
const bool ALLOW_NO_TEXTURE_POINTS = false;
const bool ORDERED_PC = false;
@@ -102,15 +83,10 @@ namespace realsense2_camera
const std::string HID_QOS = "SENSOR_DATA";
const bool HOLD_BACK_IMU_FOR_FRAMES = false;
- const bool PUBLISH_ODOM_TF = true;
const std::string DEFAULT_BASE_FRAME_ID = "link";
- const std::string DEFAULT_ODOM_FRAME_ID = "odom_frame";
const std::string DEFAULT_IMU_OPTICAL_FRAME_ID = "camera_imu_optical_frame";
- const std::string DEFAULT_UNITE_IMU_METHOD = "";
- const std::string DEFAULT_FILTERS = "";
-
const float ROS_DEPTH_SCALE = 0.001;
static const rmw_qos_profile_t rmw_qos_profile_latched =
diff --git a/realsense2_camera/include/image_publisher.h b/realsense2_camera/include/image_publisher.h
index 6bc0bab8e6..3d7d004c74 100644
--- a/realsense2_camera/include/image_publisher.h
+++ b/realsense2_camera/include/image_publisher.h
@@ -17,11 +17,7 @@
#include
#include
-#if defined( DASHING ) || defined( ELOQUENT )
-#include
-#else
#include
-#endif
namespace realsense2_camera {
class image_publisher
diff --git a/realsense2_camera/include/profile_manager.h b/realsense2_camera/include/profile_manager.h
index b753ff67a9..3021e95f6c 100644
--- a/realsense2_camera/include/profile_manager.h
+++ b/realsense2_camera/include/profile_manager.h
@@ -103,12 +103,4 @@ namespace realsense2_camera
protected:
std::map > _fps;
};
-
- class PoseProfilesManager : public MotionProfilesManager
- {
- public:
- using MotionProfilesManager::MotionProfilesManager;
- void registerProfileParameters(std::vector all_profiles, std::function update_sensor_func) override;
- };
-
}
diff --git a/realsense2_camera/include/ros_sensor.h b/realsense2_camera/include/ros_sensor.h
index c4430bfd4f..57d224300e 100644
--- a/realsense2_camera/include/ros_sensor.h
+++ b/realsense2_camera/include/ros_sensor.h
@@ -105,6 +105,7 @@ namespace realsense2_camera
void set_sensor_auto_exposure_roi();
void registerAutoExposureROIOptions();
void UpdateSequenceIdCallback();
+ template
void set_sensor_parameter_to_ros(rs2_option option);
private:
diff --git a/realsense2_camera/include/ros_utils.h b/realsense2_camera/include/ros_utils.h
index 5f458af4c5..67c10a79d7 100644
--- a/realsense2_camera/include/ros_utils.h
+++ b/realsense2_camera/include/ros_utils.h
@@ -31,12 +31,8 @@ namespace realsense2_camera
const stream_index_pair INFRA0{RS2_STREAM_INFRARED, 0};
const stream_index_pair INFRA1{RS2_STREAM_INFRARED, 1};
const stream_index_pair INFRA2{RS2_STREAM_INFRARED, 2};
- const stream_index_pair FISHEYE{RS2_STREAM_FISHEYE, 0};
- const stream_index_pair FISHEYE1{RS2_STREAM_FISHEYE, 1};
- const stream_index_pair FISHEYE2{RS2_STREAM_FISHEYE, 2};
const stream_index_pair GYRO{RS2_STREAM_GYRO, 0};
const stream_index_pair ACCEL{RS2_STREAM_ACCEL, 0};
- const stream_index_pair POSE{RS2_STREAM_POSE, 0};
const stream_index_pair SAFETY{RS2_STREAM_SAFETY, 0};
const stream_index_pair LABELED_POINT_CLOUD{RS2_STREAM_LABELED_POINT_CLOUD, 0};
const stream_index_pair OCCUPANCY{RS2_STREAM_OCCUPANCY, 0};
diff --git a/realsense2_camera/launch/default.rviz b/realsense2_camera/launch/default.rviz
index 055431f228..c4374772c6 100644
--- a/realsense2_camera/launch/default.rviz
+++ b/realsense2_camera/launch/default.rviz
@@ -9,13 +9,15 @@ Panels:
- /Grid1
- /PointCloud21
- /Image1
+ - /Image2
+ - /Image3
+ - /Image4
Splitter Ratio: 0.5
- Tree Height: 222
+ Tree Height: 190
- Class: rviz_common/Selection
Name: Selection
- Class: rviz_common/Tool Properties
Expanded:
- - /2D Nav Goal1
- /Publish Point1
Name: Tool Properties
Splitter Ratio: 0.5886790156364441
@@ -65,13 +67,17 @@ Visualization Manager:
Min Intensity: 0
Name: PointCloud2
Position Transformer: XYZ
- Queue Size: 10
Selectable: true
Size (Pixels): 3
Size (m): 0.009999999776482582
Style: Flat Squares
- Topic: /camera/depth/color/points
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ Filter size: 10
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/color/points
Use Fixed Frame: true
Use rainbow: true
Value: true
@@ -82,9 +88,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/color/image_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/color/image_raw
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -93,9 +102,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/depth/image_rect_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/depth/image_rect_raw
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -104,9 +116,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/infra1/image_rect_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/infra1/image_rect_raw
Value: true
- Class: rviz_default_plugins/Image
Enabled: true
@@ -115,9 +130,12 @@ Visualization Manager:
Min Value: 0
Name: Image
Normalize Range: true
- Queue Size: 10
- Topic: /camera/infra2/image_rect_raw
- Unreliable: false
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /camera/camera/infra2/image_rect_raw
Value: true
Enabled: true
Global Options:
@@ -132,12 +150,30 @@ Visualization Manager:
- Class: rviz_default_plugins/Measure
Line color: 128; 128; 0
- Class: rviz_default_plugins/SetInitialPose
- Topic: /initialpose
+ Covariance x: 0.25
+ Covariance y: 0.25
+ Covariance yaw: 0.06853891909122467
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /initialpose
- Class: rviz_default_plugins/SetGoal
- Topic: /move_base_simple/goal
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /move_base_simple/goal
- Class: rviz_default_plugins/PublishPoint
Single click: true
- Topic: /clicked_point
+ Topic:
+ Depth: 5
+ Durability Policy: Volatile
+ History Policy: Keep Last
+ Reliability Policy: Reliable
+ Value: /clicked_point
Transformation:
Current:
Class: rviz_default_plugins/TF
@@ -145,7 +181,7 @@ Visualization Manager:
Views:
Current:
Class: rviz_default_plugins/Orbit
- Distance: 1.0510121583938599
+ Distance: 3.677529811859131
Enable Stereo Rendering:
Stereo Eye Separation: 0.05999999865889549
Stereo Focal Distance: 1
@@ -168,18 +204,18 @@ Visualization Manager:
Window Geometry:
Displays:
collapsed: false
- Height: 1025
+ Height: 1016
Hide Left Dock: false
Hide Right Dock: true
Image:
collapsed: false
- QMainWindow State: 000000ff00000000fd000000040000000000000156000003a7fc0200000010fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d0000011b000000c900fffffffb0000000a0049006d006100670065010000015e0000009b0000002800fffffffb0000000a0049006d00610067006501000001ff000000b20000002800fffffffb0000000a0049006d00610067006501000002b70000009a0000002800fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d00610067006501000003570000008d0000002800fffffffb0000000a0049006d00610067006501000001940000005d0000000000000000fb0000000a0049006d00610067006501000001f70000007a0000000000000000fb0000000a0049006d00610067006501000002770000009d0000000000000000fb0000000a0049006d006100670065010000031a000000ca0000000000000000000000010000010f000001effc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000001ef000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b20000000000000000000000020000073d000000a9fc0100000002fb0000000a0049006d00610067006503000001c5000000bb000001f8000001b0fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000005e1000003a700000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
+ QMainWindow State: 000000ff00000000fd0000000400000000000001560000039efc0200000010fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d000000fb000000c900fffffffb0000000a0049006d006100670065010000011e000000780000000000000000fb0000000a0049006d006100670065010000013e000000cd0000002800fffffffb0000000a0049006d0061006700650100000211000000b10000002800fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d00610067006501000002c8000000a20000002800fffffffb0000000a0049006d00610067006501000003700000006b0000002800fffffffb0000000a0049006d00610067006501000001f70000007a0000000000000000fb0000000a0049006d00610067006501000002770000009d0000000000000000fb0000000a0049006d006100670065010000031a000000ca0000000000000000000000010000010f000001effc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000001ef000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b20000000000000000000000020000073d000000a9fc0100000002fb0000000a0049006d00610067006503000001c5000000bb000001f8000001b0fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d00650100000000000004500000000000000000000005da0000039e00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
Selection:
collapsed: false
Tool Properties:
collapsed: false
Views:
collapsed: true
- Width: 1853
- X: 67
+ Width: 1846
+ X: 74
Y: 27
diff --git a/realsense2_camera/launch/rs_intra_process_demo_launch.py b/realsense2_camera/launch/rs_intra_process_demo_launch.py
index 103afee220..a4e54391a0 100644
--- a/realsense2_camera/launch/rs_intra_process_demo_launch.py
+++ b/realsense2_camera/launch/rs_intra_process_demo_launch.py
@@ -41,22 +41,32 @@
'\nplease make sure you run "colcon build --cmake-args \'-DBUILD_TOOLS=ON\'" command before running this launch file')
-configurable_parameters = [{'name': 'camera_name', 'default': 'camera', 'description': 'camera unique name'},
- {'name': 'serial_no', 'default': "''", 'description': 'choose device by serial number'},
- {'name': 'usb_port_id', 'default': "''", 'description': 'choose device by usb port id'},
- {'name': 'device_type', 'default': "''", 'description': 'choose device by type'},
- {'name': 'log_level', 'default': 'info', 'description': 'debug log level [DEBUG|INFO|WARN|ERROR|FATAL]'},
- {'name': 'rgb_camera.profile', 'default': '0,0,0', 'description': 'color image width'},
- {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
- {'name': 'enable_depth', 'default': 'false', 'description': 'enable depth stream'},
- {'name': 'enable_infra1', 'default': 'false', 'description': 'enable infra1 stream'},
- {'name': 'enable_infra2', 'default': 'false', 'description': 'enable infra2 stream'},
- {'name': 'enable_gyro', 'default': 'false', 'description': "enable gyro stream"},
- {'name': 'enable_accel', 'default': 'false', 'description': "enable accel stream"},
- {'name': 'intra_process_comms', 'default': 'true', 'description': "enable intra-process communication"},
- {'name': 'publish_tf', 'default': 'true', 'description': '[bool] enable/disable publishing static & dynamic TF'},
- {'name': 'tf_publish_rate', 'default': '0.0', 'description': '[double] rate in HZ for publishing dynamic TF'},
- ]
+realsense_node_params = [{'name': 'serial_no', 'default': "''", 'description': 'choose device by serial number'},
+ {'name': 'usb_port_id', 'default': "''", 'description': 'choose device by usb port id'},
+ {'name': 'device_type', 'default': "''", 'description': 'choose device by type'},
+ {'name': 'log_level', 'default': 'info', 'description': 'debug log level [DEBUG|INFO|WARN|ERROR|FATAL]'},
+ {'name': 'rgb_camera.profile', 'default': '0,0,0', 'description': 'color image width'},
+ {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'enable_infra', 'default': 'false', 'description': 'enable infra stream'},
+ {'name': 'enable_infra1', 'default': 'true', 'description': 'enable infra1 stream'},
+ {'name': 'enable_infra2', 'default': 'true', 'description': 'enable infra2 stream'},
+ {'name': 'enable_gyro', 'default': 'true', 'description': "enable gyro stream"},
+ {'name': 'enable_accel', 'default': 'true', 'description': "enable accel stream"},
+ {'name': 'unite_imu_method', 'default': "1", 'description': '[0-None, 1-copy, 2-linear_interpolation]'},
+ {'name': 'intra_process_comms', 'default': 'true', 'description': "enable intra-process communication"},
+ {'name': 'enable_sync', 'default': 'true', 'description': "'enable sync mode'"},
+ {'name': 'pointcloud.enable', 'default': 'true', 'description': ''},
+ {'name': 'enable_rgbd', 'default': 'true', 'description': "'enable rgbd topic'"},
+ {'name': 'align_depth.enable', 'default': 'true', 'description': "'enable align depth filter'"},
+ {'name': 'publish_tf', 'default': 'true', 'description': '[bool] enable/disable publishing static & dynamic TF'},
+ {'name': 'tf_publish_rate', 'default': '1.0', 'description': '[double] rate in HZ for publishing dynamic TF'},
+ ]
+
+frame_latency_node_params = [{'name': 'topic_name', 'default': '/camera/color/image_raw', 'description': 'topic to which latency calculated'},
+ {'name': 'topic_type', 'default': 'image', 'description': 'topic type [image|points|imu|metadata|camera_info|rgbd|imu_info|tf]'},
+ ]
+
def declare_configurable_parameters(parameters):
return [DeclareLaunchArgument(param['name'], default_value=param['default'], description=param['description']) for param in parameters]
@@ -68,7 +78,8 @@ def set_configurable_parameters(parameters):
def generate_launch_description():
- return LaunchDescription(declare_configurable_parameters(configurable_parameters) + [
+ return LaunchDescription(declare_configurable_parameters(realsense_node_params) +
+ declare_configurable_parameters(frame_latency_node_params) +[
ComposableNodeContainer(
name='my_container',
namespace='',
@@ -80,14 +91,14 @@ def generate_launch_description():
namespace='',
plugin='realsense2_camera::' + rs_node_class,
name="camera",
- parameters=[set_configurable_parameters(configurable_parameters)],
+ parameters=[set_configurable_parameters(realsense_node_params)],
extra_arguments=[{'use_intra_process_comms': LaunchConfiguration("intra_process_comms")}]) ,
ComposableNode(
package='realsense2_camera',
namespace='',
plugin='rs2_ros::tools::frame_latency::' + rs_latency_tool_class,
name='frame_latency',
- parameters=[set_configurable_parameters(configurable_parameters)],
+ parameters=[set_configurable_parameters(frame_latency_node_params)],
extra_arguments=[{'use_intra_process_comms': LaunchConfiguration("intra_process_comms")}]) ,
],
output='screen',
diff --git a/realsense2_camera/launch/rs_launch.py b/realsense2_camera/launch/rs_launch.py
index a245768f0e..9b889d8561 100644
--- a/realsense2_camera/launch/rs_launch.py
+++ b/realsense2_camera/launch/rs_launch.py
@@ -27,63 +27,64 @@
{'name': 'usb_port_id', 'default': "''", 'description': 'choose device by usb port id'},
{'name': 'device_type', 'default': "''", 'description': 'choose device by type'},
{'name': 'config_file', 'default': "''", 'description': 'yaml config file'},
- {'name': 'unite_imu_method', 'default': "0", 'description': '[0-None, 1-copy, 2-linear_interpolation]'},
{'name': 'json_file_path', 'default': "''", 'description': 'allows advanced configuration'},
+ {'name': 'initial_reset', 'default': 'false', 'description': "''"},
+ {'name': 'rosbag_filename', 'default': "''", 'description': 'A realsense bagfile to run from as a device'},
{'name': 'log_level', 'default': 'info', 'description': 'debug log level [DEBUG|INFO|WARN|ERROR|FATAL]'},
{'name': 'output', 'default': 'screen', 'description': 'pipe node output [screen|log]'},
- {'name': 'depth_module.profile', 'default': '0,0,0', 'description': 'depth module profile'},
- {'name': 'depth_module.depth_format', 'default': 'Z16', 'description': 'depth stream format'},
- {'name': 'depth_module.infra_format', 'default': 'RGB8', 'description': 'infra0 stream format'},
- {'name': 'depth_module.infra1_format', 'default': 'Y8', 'description': 'infra1 stream format'},
- {'name': 'depth_module.infra2_format', 'default': 'Y8', 'description': 'infra2 stream format'},
- {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
+ {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
{'name': 'rgb_camera.profile', 'default': '0,0,0', 'description': 'color image width'},
{'name': 'rgb_camera.color_format', 'default': 'RGB8', 'description': 'color stream format'},
{'name': 'rgb_camera.enable_auto_exposure', 'default': 'true', 'description': 'enable/disable auto exposure for color image'},
- {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
+ {'name': 'enable_depth', 'default': 'true', 'description': 'enable depth stream'},
{'name': 'enable_infra', 'default': 'false', 'description': 'enable infra0 stream'},
{'name': 'enable_infra1', 'default': 'false', 'description': 'enable infra1 stream'},
{'name': 'enable_infra2', 'default': 'false', 'description': 'enable infra2 stream'},
- {'name': 'enable_fisheye1', 'default': 'true', 'description': 'enable fisheye1 stream'},
- {'name': 'enable_fisheye2', 'default': 'true', 'description': 'enable fisheye2 stream'},
- {'name': 'enable_confidence', 'default': 'true', 'description': 'enable confidence'},
- {'name': 'gyro_fps', 'default': '0', 'description': "''"},
- {'name': 'accel_fps', 'default': '0', 'description': "''"},
- {'name': 'enable_gyro', 'default': 'false', 'description': "'enable gyro stream'"},
- {'name': 'enable_accel', 'default': 'false', 'description': "'enable accel stream'"},
- {'name': 'enable_safety', 'default': 'false', 'description': "'enable safety stream'"},
- {'name': 'enable_labeled_point_cloud', 'default': 'false', 'description': "'enable labeled point cloud stream'"},
- {'name': 'enable_occupancy', 'default': 'false', 'description': "'enable occupancy stream'"},
- {'name': 'depth_mapping_camera.profile', 'default': '0,0,0', 'description': "'depth mapping camera profile'"},
- {'name': 'enable_pose', 'default': 'true', 'description': "'enable pose stream'"},
- {'name': 'pose_fps', 'default': '200', 'description': "''"},
- {'name': 'pointcloud.enable', 'default': 'false', 'description': ''},
- {'name': 'pointcloud.stream_filter', 'default': '2', 'description': 'texture stream for pointcloud'},
- {'name': 'pointcloud.stream_index_filter','default': '0', 'description': 'texture stream index for pointcloud'},
- {'name': 'pointcloud.ordered_pc', 'default': 'false', 'description': ''},
+ {'name': 'depth_module.profile', 'default': '0,0,0', 'description': 'depth module profile'},
+ {'name': 'depth_module.depth_format', 'default': 'Z16', 'description': 'depth stream format'},
+ {'name': 'depth_module.infra_format', 'default': 'RGB8', 'description': 'infra0 stream format'},
+ {'name': 'depth_module.infra1_format', 'default': 'Y8', 'description': 'infra1 stream format'},
+ {'name': 'depth_module.infra2_format', 'default': 'Y8', 'description': 'infra2 stream format'},
+ {'name': 'depth_module.exposure', 'default': '8500', 'description': 'Depth module manual exposure value'},
+ {'name': 'depth_module.gain', 'default': '16', 'description': 'Depth module manual gain value'},
+ {'name': 'depth_module.hdr_enabled', 'default': 'false', 'description': 'Depth module hdr enablement flag. Used for hdr_merge filter'},
+ {'name': 'depth_module.enable_auto_exposure', 'default': 'true', 'description': 'enable/disable auto exposure for depth image'},
+ {'name': 'depth_module.exposure.1', 'default': '7500', 'description': 'Depth module first exposure value. Used for hdr_merge filter'},
+ {'name': 'depth_module.gain.1', 'default': '16', 'description': 'Depth module first gain value. Used for hdr_merge filter'},
+ {'name': 'depth_module.exposure.2', 'default': '1', 'description': 'Depth module second exposure value. Used for hdr_merge filter'},
+ {'name': 'depth_module.gain.2', 'default': '16', 'description': 'Depth module second gain value. Used for hdr_merge filter'},
{'name': 'enable_sync', 'default': 'false', 'description': "'enable sync mode'"},
{'name': 'enable_rgbd', 'default': 'false', 'description': "'enable rgbd topic'"},
- {'name': 'align_depth.enable', 'default': 'false', 'description': "'enable align depth filter'"},
- {'name': 'colorizer.enable', 'default': 'false', 'description': "''"},
+ {'name': 'enable_gyro', 'default': 'false', 'description': "'enable gyro stream'"},
+ {'name': 'enable_accel', 'default': 'false', 'description': "'enable accel stream'"},
+ {'name': 'gyro_fps', 'default': '0', 'description': "''"},
+ {'name': 'accel_fps', 'default': '0', 'description': "''"},
+ {'name': 'unite_imu_method', 'default': "0", 'description': '[0-None, 1-copy, 2-linear_interpolation]'},
{'name': 'clip_distance', 'default': '-2.', 'description': "''"},
+ {'name': 'angular_velocity_cov', 'default': '0.01', 'description': "''"},
{'name': 'linear_accel_cov', 'default': '0.01', 'description': "''"},
- {'name': 'initial_reset', 'default': 'false', 'description': "''"},
- {'name': 'allow_no_texture_points', 'default': 'false', 'description': "''"},
+ {'name': 'diagnostics_period', 'default': '0.0', 'description': 'Rate of publishing diagnostics. 0=Disabled'},
{'name': 'publish_tf', 'default': 'true', 'description': '[bool] enable/disable publishing static & dynamic TF'},
{'name': 'tf_publish_rate', 'default': '0.0', 'description': '[double] rate in Hz for publishing dynamic TF'},
- {'name': 'diagnostics_period', 'default': '0.0', 'description': 'Rate of publishing diagnostics. 0=Disabled'},
- {'name': 'decimation_filter.enable', 'default': 'false', 'description': 'Rate of publishing static_tf'},
- {'name': 'rosbag_filename', 'default': "''", 'description': 'A realsense bagfile to run from as a device'},
- {'name': 'depth_module.exposure.1', 'default': '7500', 'description': 'Depth module first exposure value. Used for hdr_merge filter'},
- {'name': 'depth_module.gain.1', 'default': '16', 'description': 'Depth module first gain value. Used for hdr_merge filter'},
- {'name': 'depth_module.exposure.2', 'default': '1', 'description': 'Depth module second exposure value. Used for hdr_merge filter'},
- {'name': 'depth_module.gain.2', 'default': '16', 'description': 'Depth module second gain value. Used for hdr_merge filter'},
- {'name': 'depth_module.exposure', 'default': '8500', 'description': 'Depth module manual exposure value'},
- {'name': 'depth_module.hdr_enabled', 'default': 'false', 'description': 'Depth module hdr enablement flag. Used for hdr_merge filter'},
- {'name': 'depth_module.enable_auto_exposure', 'default': 'true', 'description': 'enable/disable auto exposure for depth image'},
+ {'name': 'pointcloud.enable', 'default': 'false', 'description': ''},
+ {'name': 'pointcloud.stream_filter', 'default': '2', 'description': 'texture stream for pointcloud'},
+ {'name': 'pointcloud.stream_index_filter','default': '0', 'description': 'texture stream index for pointcloud'},
+ {'name': 'pointcloud.ordered_pc', 'default': 'false', 'description': ''},
+ {'name': 'pointcloud.allow_no_texture_points', 'default': 'false', 'description': "''"},
+ {'name': 'align_depth.enable', 'default': 'false', 'description': 'enable align depth filter'},
+ {'name': 'colorizer.enable', 'default': 'false', 'description': 'enable colorizer filter'},
+ {'name': 'decimation_filter.enable', 'default': 'false', 'description': 'enable_decimation_filter'},
+ {'name': 'spatial_filter.enable', 'default': 'false', 'description': 'enable_spatial_filter'},
+ {'name': 'temporal_filter.enable', 'default': 'false', 'description': 'enable_temporal_filter'},
+ {'name': 'disparity_filter.enable', 'default': 'false', 'description': 'enable_disparity_filter'},
+ {'name': 'hole_filling_filter.enable', 'default': 'false', 'description': 'enable_hole_filling_filter'},
{'name': 'hdr_merge.enable', 'default': 'false', 'description': 'hdr_merge filter enablement flag'},
{'name': 'wait_for_device_timeout', 'default': '-1.', 'description': 'Timeout for waiting for device to connect (Seconds)'},
                           {'name': 'reconnect_timeout', 'default': '6.', 'description': 'Timeout(seconds) between consecutive reconnection attempts'},
+ {'name': 'enable_safety', 'default': 'false', 'description': "'enable safety stream'"},
+ {'name': 'enable_labeled_point_cloud', 'default': 'false', 'description': "'enable labeled point cloud stream'"},
+ {'name': 'enable_occupancy', 'default': 'false', 'description': "'enable occupancy stream'"},
+ {'name': 'depth_mapping_camera.profile', 'default': '0,0,0', 'description': "'depth mapping camera profile'"},
]
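The launch files above keep every configurable parameter as a dict with `name`, `default`, and `description` keys, which `declare_configurable_parameters` turns into one `DeclareLaunchArgument` each. A minimal sanity-check sketch for that table (the `validate_params` helper is hypothetical, not part of the package):

```python
def validate_params(parameters):
    """Return the names of parameter entries missing a required key."""
    required = {'name', 'default', 'description'}
    return [p.get('name', '<unnamed>') for p in parameters
            if not required <= p.keys()]

params = [
    {'name': 'enable_color', 'default': 'true', 'description': 'enable color stream'},
    {'name': 'enable_depth', 'default': 'true'},  # missing 'description'
]
print(validate_params(params))  # ['enable_depth']
```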
def declare_configurable_parameters(parameters):
@@ -99,37 +100,18 @@ def yaml_to_dict(path_to_yaml):
def launch_setup(context, params, param_name_suffix=''):
_config_file = LaunchConfiguration('config_file' + param_name_suffix).perform(context)
params_from_file = {} if _config_file == "''" else yaml_to_dict(_config_file)
- # Realsense
- if (os.getenv('ROS_DISTRO') == "dashing") or (os.getenv('ROS_DISTRO') == "eloquent"):
- return [
- launch_ros.actions.Node(
- package='realsense2_camera',
- node_namespace=LaunchConfiguration('camera_namespace' + param_name_suffix),
- node_name=LaunchConfiguration('camera_name' + param_name_suffix),
- node_executable='realsense2_camera_node',
- prefix=['stdbuf -o L'],
- parameters=[params
- , params_from_file
- ],
- output='screen',
- arguments=['--ros-args', '--log-level', LaunchConfiguration('log_level' + param_name_suffix)],
- )
- ]
- else:
- return [
- launch_ros.actions.Node(
- package='realsense2_camera',
- namespace=LaunchConfiguration('camera_namespace' + param_name_suffix),
- name=LaunchConfiguration('camera_name' + param_name_suffix),
- executable='realsense2_camera_node',
- parameters=[params
- , params_from_file
- ],
- output='screen',
- arguments=['--ros-args', '--log-level', LaunchConfiguration('log_level' + param_name_suffix)],
- emulate_tty=True,
- )
- ]
+ return [
+ launch_ros.actions.Node(
+ package='realsense2_camera',
+ namespace=LaunchConfiguration('camera_namespace' + param_name_suffix),
+ name=LaunchConfiguration('camera_name' + param_name_suffix),
+ executable='realsense2_camera_node',
+ parameters=[params, params_from_file],
+ output=LaunchConfiguration('output' + param_name_suffix),
+ arguments=['--ros-args', '--log-level', LaunchConfiguration('log_level' + param_name_suffix)],
+ emulate_tty=True,
+ )
+ ]
def generate_launch_description():
return LaunchDescription(declare_configurable_parameters(configurable_parameters) + [
diff --git a/realsense2_camera/launch/rs_multi_camera_launch.py b/realsense2_camera/launch/rs_multi_camera_launch.py
index 5cb3ea5246..782b802c55 100644
--- a/realsense2_camera/launch/rs_multi_camera_launch.py
+++ b/realsense2_camera/launch/rs_multi_camera_launch.py
@@ -49,7 +49,7 @@ def duplicate_params(general_params, posix):
param['name'] += posix
return local_params
-def add_node_action(context : LaunchContext):
+def launch_static_transform_publisher_node(context : LaunchContext):
# dummy static transformation from camera1 to camera2
node = launch_ros.actions.Node(
package = "tf2_ros",
@@ -74,5 +74,5 @@ def generate_launch_description():
OpaqueFunction(function=rs_launch.launch_setup,
kwargs = {'params' : set_configurable_parameters(params2),
'param_name_suffix': '2'}),
- OpaqueFunction(function=add_node_action)
+ OpaqueFunction(function=launch_static_transform_publisher_node)
])
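The multi-camera launch file reuses the single-camera parameter table by suffixing every name through its `duplicate_params(general_params, posix)` helper, so each camera node gets an independent set of launch arguments (`serial_no1`, `serial_no2`, ...). A self-contained sketch of that duplication, assuming the same list-of-dicts layout as above:

```python
import copy

def duplicate_params(general_params, posix):
    # Deep-copy so the shared table is left untouched, then suffix
    # every parameter name for this camera instance.
    local_params = copy.deepcopy(general_params)
    for param in local_params:
        param['name'] += posix
    return local_params

base = [{'name': 'serial_no', 'default': "''", 'description': 'choose device by serial number'}]
params2 = duplicate_params(base, '2')
print(params2[0]['name'])  # serial_no2
print(base[0]['name'])     # serial_no (original table unchanged)
```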
diff --git a/realsense2_camera/package.xml b/realsense2_camera/package.xml
index f771e51b39..bd99ef40b8 100644
--- a/realsense2_camera/package.xml
+++ b/realsense2_camera/package.xml
@@ -3,7 +3,7 @@
   realsense2_camera
   4.54.1
-  RealSense camera package allowing access to Intel SR300 and D400 3D cameras
+  RealSense camera package allowing access to Intel D400 3D cameras
   LibRealSense ROS Team
   Apache License 2.0
@@ -37,7 +37,8 @@
   sensor_msgs_py
   python3-requests
   tf2_ros_py
-
+  ros2topic
+
   launch_ros
   ros_environment
diff --git a/realsense2_camera/scripts/rs2_listener.py b/realsense2_camera/scripts/rs2_listener.py
index 1fcbc97834..9cc356f7bd 100644
--- a/realsense2_camera/scripts/rs2_listener.py
+++ b/realsense2_camera/scripts/rs2_listener.py
@@ -19,18 +19,12 @@
from rclpy import qos
from sensor_msgs.msg import Image as msg_Image
import numpy as np
-import inspect
import ctypes
import struct
import quaternion
-import os
-if (os.getenv('ROS_DISTRO') != "dashing"):
- import tf2_ros
-if (os.getenv('ROS_DISTRO') == "humble"):
- from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
- from sensor_msgs_py import point_cloud2 as pc2
-# from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
-# import sensor_msgs.point_cloud2 as pc2
+import tf2_ros
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+from sensor_msgs_py import point_cloud2 as pc2
from sensor_msgs.msg import Imu as msg_Imu
try:
@@ -220,9 +214,8 @@ def wait_for_messages(self, themes):
node.get_logger().info('Subscribing %s on topic: %s' % (theme_name, theme['topic']))
self.func_data[theme_name]['sub'] = node.create_subscription(theme['msg_type'], theme['topic'], theme['callback'](theme_name), qos.qos_profile_sensor_data)
- if (os.getenv('ROS_DISTRO') != "dashing"):
- self.tfBuffer = tf2_ros.Buffer()
- self.tf_listener = tf2_ros.TransformListener(self.tfBuffer, node)
+ self.tfBuffer = tf2_ros.Buffer()
+ self.tf_listener = tf2_ros.TransformListener(self.tfBuffer, node)
self.prev_time = time.time()
break_timeout = False
diff --git a/realsense2_camera/scripts/rs2_test.py b/realsense2_camera/scripts/rs2_test.py
index 697f72ad5b..b1351aba66 100644
--- a/realsense2_camera/scripts/rs2_test.py
+++ b/realsense2_camera/scripts/rs2_test.py
@@ -21,8 +21,7 @@
from rclpy.node import Node
from importRosbag.importRosbag import importRosbag
import numpy as np
-if (os.getenv('ROS_DISTRO') != "dashing"):
- import tf2_ros
+import tf2_ros
import itertools
import subprocess
import time
@@ -277,20 +276,16 @@ def print_results(results):
def get_tfs(coupled_frame_ids):
res = dict()
- if (os.getenv('ROS_DISTRO') == "dashing"):
- for couple in coupled_frame_ids:
+ tfBuffer = tf2_ros.Buffer()
+ node = Node('tf_listener')
+ listener = tf2_ros.TransformListener(tfBuffer, node)
+ rclpy.spin_once(node)
+ for couple in coupled_frame_ids:
+ from_id, to_id = couple
+ if (tfBuffer.can_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=3e6))):
+ res[couple] = tfBuffer.lookup_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=1e6)).transform
+ else:
res[couple] = None
- else:
- tfBuffer = tf2_ros.Buffer()
- node = Node('tf_listener')
- listener = tf2_ros.TransformListener(tfBuffer, node)
- rclpy.spin_once(node)
- for couple in coupled_frame_ids:
- from_id, to_id = couple
- if (tfBuffer.can_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=3e6))):
- res[couple] = tfBuffer.lookup_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=1e6)).transform
- else:
- res[couple] = None
return res
def kill_realsense2_camera_node():
@@ -338,7 +333,7 @@ def run_tests(tests):
listener_res = msg_retriever.wait_for_messages(themes)
if 'static_tf' in [test['type'] for test in rec_tests]:
print ('Gathering static transforms')
- frame_ids = ['camera_link', 'camera_depth_frame', 'camera_infra1_frame', 'camera_infra2_frame', 'camera_color_frame', 'camera_fisheye_frame', 'camera_pose']
+ frame_ids = ['camera_link', 'camera_depth_frame', 'camera_infra1_frame', 'camera_infra2_frame', 'camera_color_frame']
coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
listener_res['static_tf'] = get_tfs(coupled_frame_ids)
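With `camera_fisheye_frame` and `camera_pose` dropped from the list, the static-TF test looks up a transform for every unordered pair of the five remaining frames; `itertools.combinations` yields C(5,2) = 10 such pairs:

```python
import itertools

frame_ids = ['camera_link', 'camera_depth_frame', 'camera_infra1_frame',
             'camera_infra2_frame', 'camera_color_frame']
coupled_frame_ids = list(itertools.combinations(frame_ids, 2))
print(len(coupled_frame_ids))  # 10
print(coupled_frame_ids[0])    # ('camera_link', 'camera_depth_frame')
```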
@@ -372,17 +367,12 @@ def main():
#{'name': 'points_cloud_1', 'type': 'pointscloud_avg', 'params': {'rosbag_filename': outdoors_filename, 'pointcloud.enable': 'true'}},
{'name': 'depth_w_cloud_1', 'type': 'depth_avg', 'params': {'rosbag_filename': outdoors_filename, 'pointcloud.enable': 'true'}},
{'name': 'align_depth_color_1', 'type': 'align_depth_color', 'params': {'rosbag_filename': outdoors_filename, 'align_depth.enable':'true'}},
- {'name': 'align_depth_ir1_1', 'type': 'align_depth_ir1', 'params': {'rosbag_filename': outdoors_filename, 'align_depth.enable': 'true',
- 'enable_infra1':'true', 'enable_infra2':'true'}},
+ {'name': 'align_depth_ir1_1', 'type': 'align_depth_ir1', 'params': {'rosbag_filename': outdoors_filename, 'align_depth.enable': 'true', 'enable_infra1':'true', 'enable_infra2':'true'}},
{'name': 'depth_avg_decimation_1', 'type': 'depth_avg_decimation', 'params': {'rosbag_filename': outdoors_filename, 'decimation_filter.enable':'true'}},
{'name': 'align_depth_ir1_decimation_1', 'type': 'align_depth_ir1_decimation', 'params': {'rosbag_filename': outdoors_filename, 'align_depth.enable':'true', 'decimation_filter.enable':'true'}},
- ]
- if (os.getenv('ROS_DISTRO') != "dashing"):
- all_tests.extend([
- {'name': 'static_tf_1', 'type': 'static_tf', 'params': {'rosbag_filename': outdoors_filename,
- 'enable_infra1':'true', 'enable_infra2':'true'}},
- {'name': 'accel_up_1', 'type': 'accel_up', 'params': {'rosbag_filename': './records/D435i_Depth_and_IMU_Stands_still.bag', 'enable_accel': 'true', 'accel_fps': '0.0'}},
- ])
+ {'name': 'static_tf_1', 'type': 'static_tf', 'params': {'rosbag_filename': outdoors_filename, 'enable_infra1':'true', 'enable_infra2':'true'}},
+ {'name': 'accel_up_1', 'type': 'accel_up', 'params': {'rosbag_filename': './records/D435i_Depth_and_IMU_Stands_still.bag', 'enable_accel': 'true', 'accel_fps': '0.0'}},
+ ]
# Normalize parameters:
for test in all_tests:
diff --git a/realsense2_camera/scripts/set_cams_transforms.py b/realsense2_camera/scripts/set_cams_transforms.py
index aa0186cac7..ecf466ac27 100644
--- a/realsense2_camera/scripts/set_cams_transforms.py
+++ b/realsense2_camera/scripts/set_cams_transforms.py
@@ -12,18 +12,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import rospy
+import rclpy
+from rclpy.node import Node
import sys
-import tf
-import tf2_ros
import geometry_msgs.msg
import termios
import tty
import os
-import time
import math
import json
+from tf2_ros.static_transform_broadcaster import StaticTransformBroadcaster
+from tf_transformations import quaternion_from_euler
def getch():
@@ -46,9 +46,9 @@ def print_status(status):
sys.stdout.write('%-8s%-8s%-8s%-40s\r' % (status['mode'], status[status['mode']]['value'], status[status['mode']]['step'], status['message']))
-def publish_status(broadcaster, status):
+def publish_status(node, broadcaster, status):
static_transformStamped = geometry_msgs.msg.TransformStamped()
- static_transformStamped.header.stamp = rospy.Time.now()
+ static_transformStamped.header.stamp = node.get_clock().now().to_msg()
static_transformStamped.header.frame_id = from_cam
static_transformStamped.child_frame_id = to_cam
@@ -56,7 +56,7 @@ def publish_status(broadcaster, status):
static_transformStamped.transform.translation.y = status['y']['value']
static_transformStamped.transform.translation.z = status['z']['value']
- quat = tf.transformations.quaternion_from_euler(math.radians(status['roll']['value']),
+ quat = quaternion_from_euler(math.radians(status['roll']['value']),
math.radians(status['pitch']['value']),
math.radians(status['azimuth']['value']))
static_transformStamped.transform.rotation.x = quat[0]
@@ -68,24 +68,22 @@ def publish_status(broadcaster, status):
if __name__ == '__main__':
if len(sys.argv) < 3:
- print 'USAGE:'
- print 'set_cams_transforms.py from_cam to_cam x y z azimuth pitch roll'
- print 'x, y, z: in meters'
- print 'azimuth, pitch, roll: in degrees'
- print
- print 'If parameters are not given read last used parameters.'
- print
- print '[OPTIONS]'
- print '--file : if given, default values are loaded from file'
+ print ('USAGE:')
+ print ('set_cams_transforms.py from_cam to_cam x y z azimuth pitch roll')
+ print ('x, y, z: in meters')
+ print ('azimuth, pitch, roll: in degrees')
+ print ('If parameters are not given read last used parameters.')
+ print ('[OPTIONS]')
+ print ('--file : if given, default values are loaded from file')
sys.exit(-1)
from_cam, to_cam = sys.argv[1:3]
try:
filename = sys.argv[sys.argv.index('--file')+1]
- print 'Using file %s' % os.path.abspath(filename)
+ print ('Using file %s' % os.path.abspath(filename))
except:
filename = os.path.join(os.path.dirname(__file__), '_set_cams_info_file.txt')
- print 'Using default file %s' % os.path.abspath(filename)
+ print ('Using default file %s' % os.path.abspath(filename))
if len(sys.argv) >= 9:
x, y, z, yaw, pitch, roll = [float(arg) for arg in sys.argv[3:10]]
@@ -97,37 +95,37 @@ def publish_status(broadcaster, status):
'pitch': {'value': pitch, 'step': 1},
'roll': {'value': roll, 'step': 1},
'message': ''}
- print 'Use given initial values.'
+ print ('Use given initial values.')
else:
try:
status = json.load(open(filename, 'r'))
- print 'Read initial values from file.'
+ print ('Read initial values from file.')
except IOError as e:
- print 'Failed reading initial parameters from file %s' % filename
- print 'Initial parameters must be given for initial run or if an un-initialized file has been given.'
+ print ('Failed reading initial parameters from file %s' % filename)
+ print ('Initial parameters must be given for initial run or if an un-initialized file has been given.')
sys.exit(-1)
- rospy.init_node('my_static_tf2_broadcaster')
- broadcaster = tf2_ros.StaticTransformBroadcaster()
+ rclpy.init()
+ node = Node('my_static_tf2_broadcaster')
+ #rospy.init_node('my_static_tf2_broadcaster')
+ broadcaster = StaticTransformBroadcaster(node)
print
- print 'Press the following keys to change mode: x, y, z, (a)zimuth, (p)itch, (r)oll'
- print 'For each mode, press 6 to increase by step and 4 to decrease'
- print 'Press + to multiply step by 2 or - to divide'
- print
- print 'Press Q to quit'
- print
+ print ('Press the following keys to change mode: x, y, z, (a)zimuth, (p)itch, (r)oll')
+ print ('For each mode, press 6 to increase by step and 4 to decrease')
+ print ('Press + to multiply step by 2 or - to divide')
+ print ('Press Q to quit')
status_keys = [key[0] for key in status.keys()]
- print '%-8s%-8s%-8s%s' % ('Mode', 'value', 'step', 'message')
+ print ('%-8s%-8s%-8s%s' % ('Mode', 'value', 'step', 'message'))
print_status(status)
- publish_status(broadcaster, status)
+ publish_status(node, broadcaster, status)
while True:
kk = getch()
status['message'] = ''
try:
key_idx = status_keys.index(kk)
- status['mode'] = status.keys()[key_idx]
+ status['mode'] = list(status.keys())[key_idx]
except ValueError as e:
if kk.upper() == 'Q':
sys.stdout.write('\n')
@@ -144,7 +142,7 @@ def publish_status(broadcaster, status):
status['message'] = 'Invalid key:' + kk
print_status(status)
- publish_status(broadcaster, status)
+ publish_status(node, broadcaster, status)
json.dump(status, open(filename, 'w'), indent=4)
#rospy.spin()
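set_cams_transforms.py now builds its rotation with `tf_transformations.quaternion_from_euler` instead of the ROS1-era `tf.transformations`. For reference, a dependency-free sketch of that conversion, assuming the library's default static 'sxyz' axis convention (angles in radians, result ordered x, y, z, w):

```python
import math

def quaternion_from_euler(roll, pitch, yaw):
    # Half-angle sines/cosines for each axis.
    cr, sr = math.cos(roll / 2.0), math.sin(roll / 2.0)
    cp, sp = math.cos(pitch / 2.0), math.sin(pitch / 2.0)
    cy, sy = math.cos(yaw / 2.0), math.sin(yaw / 2.0)
    return (sr * cp * cy - cr * sp * sy,   # x
            cr * sp * cy + sr * cp * sy,   # y
            cr * cp * sy - sr * sp * cy,   # z
            cr * cp * cy + sr * sp * sy)   # w

print(quaternion_from_euler(0.0, 0.0, 0.0))  # identity rotation: (0.0, 0.0, 0.0, 1.0)
```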
diff --git a/realsense2_camera/scripts/show_center_depth.py b/realsense2_camera/scripts/show_center_depth.py
index fb957664f4..acf8ac0db2 100644
--- a/realsense2_camera/scripts/show_center_depth.py
+++ b/realsense2_camera/scripts/show_center_depth.py
@@ -31,8 +31,6 @@ def __init__(self, depth_image_topic, depth_info_topic):
self.bridge = CvBridge()
self.sub = self.create_subscription(msg_Image, depth_image_topic, self.imageDepthCallback, 1)
self.sub_info = self.create_subscription(CameraInfo, depth_info_topic, self.imageDepthInfoCallback, 1)
- confidence_topic = depth_image_topic.replace('depth', 'confidence')
- self.sub_conf = self.create_subscription(msg_Image, confidence_topic, self.confidenceCallback, 1)
self.intrinsics = None
self.pix = None
self.pix_grade = None
@@ -62,17 +60,6 @@ def imageDepthCallback(self, data):
except ValueError as e:
return
- def confidenceCallback(self, data):
- try:
- cv_image = self.bridge.imgmsg_to_cv2(data, data.encoding)
- grades = np.bitwise_and(cv_image >> 4, 0x0f)
- if (self.pix):
- self.pix_grade = grades[self.pix[1], self.pix[0]]
- except CvBridgeError as e:
- print(e)
- return
-
-
def imageDepthInfoCallback(self, cameraInfo):
try:
@@ -106,7 +93,6 @@ def main():
print ('Application subscribes to %s and %s topics.' % (depth_image_topic, depth_info_topic))
print ('Application then calculates and print the range to the closest object.')
print ('If intrinsics data is available, it also prints the 3D location of the object')
- print ('If a confedence map is also available in the topic %s, it also prints the confidence grade.' % depth_image_topic.replace('depth', 'confidence'))
print ()
listener = ImageListener(depth_image_topic, depth_info_topic)
diff --git a/realsense2_camera/src/base_realsense_node.cpp b/realsense2_camera/src/base_realsense_node.cpp
index a3cbec4a5d..28228a2f75 100755
--- a/realsense2_camera/src/base_realsense_node.cpp
+++ b/realsense2_camera/src/base_realsense_node.cpp
@@ -24,10 +24,7 @@
// Header files for disabling intra-process comms for static broadcaster.
#include
-// This header file is not available in ROS 2 Dashing.
-#ifndef DASHING
#include
-#endif
using namespace realsense2_camera;
@@ -114,8 +111,9 @@ BaseRealSenseNode::BaseRealSenseNode(rclcpp::Node& node,
_enable_rgbd(ENABLE_RGBD),
_is_color_enabled(false),
_is_depth_enabled(false),
+ _is_accel_enabled(false),
+ _is_gyro_enabled(false),
_pointcloud(false),
- _publish_odom_tf(false),
_imu_sync_method(imu_sync_method::NONE),
_is_profile_changed(false),
_is_align_depth_changed(false)
@@ -418,7 +416,7 @@ void BaseRealSenseNode::imu_callback_sync(rs2::frame frame, imu_sync_method sync
_is_initialized_time_base = setBaseTime(frame_time, frame.get_frame_timestamp_domain());
}
- if (0 != _synced_imu_publisher->getNumSubscribers())
+ if (_synced_imu_publisher && (0 != _synced_imu_publisher->getNumSubscribers()))
{
auto crnt_reading = *(reinterpret_cast(frame.get_data()));
Eigen::Vector3d v(crnt_reading.x, crnt_reading.y, crnt_reading.z);
@@ -501,7 +499,8 @@ void BaseRealSenseNode::imu_callback(rs2::frame frame)
void BaseRealSenseNode::frame_callback(rs2::frame frame)
{
- _synced_imu_publisher->Pause();
+ if (_synced_imu_publisher)
+ _synced_imu_publisher->Pause();
double frame_time = frame.get_timestamp();
// We compute a ROS timestamp which is based on an initial ROS time at point of first frame,
@@ -630,7 +629,8 @@ void BaseRealSenseNode::frame_callback(rs2::frame frame)
publishLabeledPointCloud(frame.as(), t);
publishMetadata(frame, t, OPTICAL_FRAME_ID(sip));
}
- _synced_imu_publisher->Resume();
+ if (_synced_imu_publisher)
+ _synced_imu_publisher->Resume();
} // frame_callback
void BaseRealSenseNode::multiple_message_callback(rs2::frame frame, imu_sync_method sync_method)
@@ -643,9 +643,6 @@ void BaseRealSenseNode::multiple_message_callback(rs2::frame frame, imu_sync_met
if (sync_method > imu_sync_method::NONE) imu_callback_sync(frame, sync_method);
else imu_callback(frame);
break;
- case RS2_STREAM_POSE:
- pose_callback(frame);
- break;
default:
frame_callback(frame);
}
@@ -685,17 +682,7 @@ rclcpp::Time BaseRealSenseNode::frameSystemTimeSec(rs2::frame frame)
{
double elapsed_camera_ns = millisecondsToNanoseconds(timestamp_ms - _camera_time_base);
- /*
- Fixing deprecated-declarations compilation warning.
- Duration(rcl_duration_value_t) is deprecated in favor of
- static Duration::from_nanoseconds(rcl_duration_value_t)
- starting from GALACTIC.
- */
-#if defined(FOXY) || defined(ELOQUENT) || defined(DASHING)
- auto duration = rclcpp::Duration(elapsed_camera_ns);
-#else
auto duration = rclcpp::Duration::from_nanoseconds(elapsed_camera_ns);
-#endif
return rclcpp::Time(_ros_time_base + duration);
}
@@ -801,7 +788,7 @@ void BaseRealSenseNode::updateExtrinsicsCalibData(const rs2::video_stream_profil
void BaseRealSenseNode::SetBaseStream()
{
- const std::vector base_stream_priority = {DEPTH, POSE};
+ const std::vector base_stream_priority = {DEPTH};
std::set checked_sips;
std::map available_profiles;
for(auto&& sensor : _available_ros_sensors)
@@ -831,7 +818,7 @@ void BaseRealSenseNode::SetBaseStream()
void BaseRealSenseNode::publishPointCloud(rs2::points pc, const rclcpp::Time& t, const rs2::frameset& frameset)
{
- std::string frame_id = (_align_depth_filter->is_enabled() ? OPTICAL_FRAME_ID(COLOR) : OPTICAL_FRAME_ID(DEPTH));
+ std::string frame_id = OPTICAL_FRAME_ID(DEPTH);
_pc_filter->Publish(pc, t, frameset, frame_id);
}
@@ -963,7 +950,6 @@ bool BaseRealSenseNode::fillROSImageMsgAndReturnStatus(
img_msg_ptr->width = width;
img_msg_ptr->is_bigendian = false;
img_msg_ptr->step = width * cv_matrix_image.elemSize();
-
return true;
}
@@ -1225,10 +1211,18 @@ void BaseRealSenseNode::startDiagnosticsUpdater()
{
for (rs2_option option : _monitor_options)
{
- if (sensor->supports(option))
+ try
+ {
+ if (sensor->supports(option))
+ {
+ status.add(rs2_option_to_string(option), sensor->get_option(option));
+ got_temperature = true;
+ }
+ }
+ catch(const std::exception& ex)
{
- status.add(rs2_option_to_string(option), sensor->get_option(option));
- got_temperature = true;
+ got_temperature = false;
+ ROS_WARN_STREAM("An error has occurred during monitoring: " << ex.what());
}
}
if (got_temperature) break;
diff --git a/realsense2_camera/src/dynamic_params.cpp b/realsense2_camera/src/dynamic_params.cpp
index 618da0766a..45db52d90a 100644
--- a/realsense2_camera/src/dynamic_params.cpp
+++ b/realsense2_camera/src/dynamic_params.cpp
@@ -25,6 +25,8 @@ namespace realsense2_camera
_params_backend.add_on_set_parameters_callback(
[this](const std::vector & parameters)
{
+ rcl_interfaces::msg::SetParametersResult result;
+ result.successful = true;
for (const auto & parameter : parameters)
{
try
@@ -43,15 +45,15 @@ namespace realsense2_camera
}
}
}
- catch(const std::out_of_range& e)
- {}
catch(const std::exception& e)
{
- std::cerr << e.what() << ":" << parameter.get_name() << '\n';
+ result.successful = false;
+ result.reason = e.what();
+ ROS_WARN_STREAM("Set parameter {" << parameter.get_name()
+ << "} failed: " << e.what());
}
}
- rcl_interfaces::msg::SetParametersResult result;
- result.successful = true;
+
return result;
});
monitor_update_functions(); // Start parameters update thread
@@ -113,11 +115,7 @@ namespace realsense2_camera
try
{
ROS_DEBUG_STREAM("setParam::Setting parameter: " << param_name);
-#if defined(DASHING) || defined(ELOQUENT) || defined(FOXY)
- //do nothing for old versions
-#else
- descriptor.dynamic_typing=true; // Without this, undeclare_parameter() throws in Galactic onward.
-#endif
+ descriptor.dynamic_typing=true; // Without this, undeclare_parameter() throws error.
if (!_node.get_parameter(param_name, result_value))
{
result_value = _node.declare_parameter(param_name, initial_value, descriptor);
@@ -153,7 +151,14 @@ namespace realsense2_camera
};
if (result_value != initial_value && func)
{
- func(rclcpp::Parameter(param_name, result_value));
+ try
+ {
+ func(rclcpp::Parameter(param_name, result_value));
+ }
+ catch(const std::exception& e)
+ {
+ ROS_WARN_STREAM("Set parameter {" << param_name << "} failed: " << e.what());
+ }
}
return result_value;
}
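The reworked parameter callback in dynamic_params.cpp now reports failures through the returned SetParametersResult instead of silently swallowing them. A minimal Python sketch of that aggregation pattern (function and dictionary names are hypothetical, not part of the driver):

```python
def set_parameters(parameters, setters):
    # Mimics the patched on_set_parameters callback: try every setter, and on
    # failure mark the overall result unsuccessful with the exception reason.
    successful, reason = True, ""
    for name, value in parameters:
        try:
            setters[name](value)
        except Exception as e:
            successful = False
            reason = str(e)
    return successful, reason
```

As in the patched C++, a single failing parameter does not stop the loop; the last error becomes the reported reason.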
diff --git a/realsense2_camera/src/named_filter.cpp b/realsense2_camera/src/named_filter.cpp
index 729770b605..31fd80a8cb 100644
--- a/realsense2_camera/src/named_filter.cpp
+++ b/realsense2_camera/src/named_filter.cpp
@@ -44,7 +44,7 @@ void NamedFilter::clearParameters()
{
auto name = _parameters_names.back();
_params.getParameters()->removeParam(name);
- _parameters_names.pop_back();
+ _parameters_names.pop_back();
}
}
@@ -117,12 +117,12 @@ void PointcloudFilter::setParameters()
});
}
-void PointcloudFilter::setPublisher()
-{
+void PointcloudFilter::setPublisher()
+{
std::lock_guard lock_guard(_mutex_publisher);
if ((_is_enabled) && (!_pointcloud_publisher))
{
- _pointcloud_publisher = _node.create_publisher("depth/color/points",
+ _pointcloud_publisher = _node.create_publisher("~/depth/color/points",
rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(qos_string_to_qos(_pointcloud_qos)),
qos_string_to_qos(_pointcloud_qos)));
}
@@ -156,8 +156,8 @@ void PointcloudFilter::Publish(rs2::points pc, const rclcpp::Time& t, const rs2:
if (use_texture)
{
std::set available_formats{ rs2_format::RS2_FORMAT_RGB8, rs2_format::RS2_FORMAT_Y8 };
-
- texture_frame_itr = std::find_if(frameset.begin(), frameset.end(), [&texture_source_id, &available_formats] (rs2::frame f)
+
+ texture_frame_itr = std::find_if(frameset.begin(), frameset.end(), [&texture_source_id, &available_formats] (rs2::frame f)
{return (rs2_stream(f.get_profile().stream_type()) == texture_source_id) &&
(available_formats.find(f.get_profile().format()) != available_formats.end()); });
if (texture_frame_itr == frameset.end())
@@ -181,7 +181,7 @@ void PointcloudFilter::Publish(rs2::points pc, const rclcpp::Time& t, const rs2:
sensor_msgs::msg::PointCloud2::UniquePtr msg_pointcloud = std::make_unique();
sensor_msgs::PointCloud2Modifier modifier(*msg_pointcloud);
- modifier.setPointCloud2FieldsByString(1, "xyz");
+ modifier.setPointCloud2FieldsByString(1, "xyz");
modifier.resize(pc.size());
if (_ordered_pc)
{
@@ -266,7 +266,7 @@ void PointcloudFilter::Publish(rs2::points pc, const rclcpp::Time& t, const rs2:
*iter_x = vertex->x;
*iter_y = vertex->y;
*iter_z = vertex->z;
-
+
++iter_x; ++iter_y; ++iter_z;
++valid_count;
}
@@ -289,7 +289,7 @@ void PointcloudFilter::Publish(rs2::points pc, const rclcpp::Time& t, const rs2:
}
-AlignDepthFilter::AlignDepthFilter(std::shared_ptr filter,
+AlignDepthFilter::AlignDepthFilter(std::shared_ptr filter,
std::function update_align_depth_func,
std::shared_ptr parameters, rclcpp::Logger logger, bool is_enabled):
NamedFilter(filter, parameters, logger, is_enabled, false)
diff --git a/realsense2_camera/src/parameters.cpp b/realsense2_camera/src/parameters.cpp
index 16965f27cf..72523cb801 100644
--- a/realsense2_camera/src/parameters.cpp
+++ b/realsense2_camera/src/parameters.cpp
@@ -23,6 +23,7 @@ void BaseRealSenseNode::getParameters()
ROS_INFO("getParameters...");
std::string param_name;
+
param_name = std::string("camera_name");
_camera_name = _parameters->setParam(param_name, "camera");
_parameters_names.push_back(param_name);
@@ -77,10 +78,6 @@ void BaseRealSenseNode::getParameters()
_hold_back_imu_for_frames = _parameters->setParam(param_name, HOLD_BACK_IMU_FOR_FRAMES);
_parameters_names.push_back(param_name);
- param_name = std::string("publish_odom_tf");
- _publish_odom_tf = _parameters->setParam(param_name, PUBLISH_ODOM_TF);
- _parameters_names.push_back(param_name);
-
param_name = std::string("base_frame_id");
_base_frame_id = _parameters->setParam(param_name, DEFAULT_BASE_FRAME_ID);
_base_frame_id = (static_cast(std::ostringstream() << _camera_name << "_" << _base_frame_id)).str();
@@ -135,6 +132,8 @@ void BaseRealSenseNode::setDynamicParams()
[this](const rclcpp::Parameter& parameter)
{
_imu_sync_method = imu_sync_method(parameter.get_value());
+ ROS_WARN("For the 'unite_imu_method' param update to take effect, "
+ "re-enable either gyro or accel stream.");
}, crnt_descriptor);
_parameters_names.push_back(param_name);
}
diff --git a/realsense2_camera/src/profile_manager.cpp b/realsense2_camera/src/profile_manager.cpp
index beffd2d574..95dadbbad6 100644
--- a/realsense2_camera/src/profile_manager.cpp
+++ b/realsense2_camera/src/profile_manager.cpp
@@ -591,20 +591,3 @@ void MotionProfilesManager::registerFPSParams()
}
}
-///////////////////////////////////////////////////////////////////////////////////////
-
-void PoseProfilesManager::registerProfileParameters(std::vector all_profiles, std::function update_sensor_func)
-{
- std::set checked_sips;
- for (auto& profile : all_profiles)
- {
- if (!profile.is()) continue;
- _all_profiles.push_back(profile);
- stream_index_pair sip(profile.stream_type(), profile.stream_index());
- checked_sips.insert(sip);
- }
- registerSensorUpdateParam("enable_%s", checked_sips, _enabled_profiles, true, update_sensor_func);
- registerSensorUpdateParam("%s_fps", checked_sips, _fps, 0, update_sensor_func);
- registerSensorQOSParam("%s_qos", checked_sips, _profiles_image_qos_str, HID_QOS);
- registerSensorQOSParam("%s_info_qos", checked_sips, _profiles_info_qos_str, DEFAULT_QOS);
-}
diff --git a/realsense2_camera/src/realsense_node_factory.cpp b/realsense2_camera/src/realsense_node_factory.cpp
index d913da5c5e..b533b8649e 100644
--- a/realsense2_camera/src/realsense_node_factory.cpp
+++ b/realsense2_camera/src/realsense_node_factory.cpp
@@ -31,7 +31,7 @@ using namespace realsense2_camera;
constexpr auto realsense_ros_camera_version = REALSENSE_ROS_EMBEDDED_VERSION_STR;
RealSenseNodeFactory::RealSenseNodeFactory(const rclcpp::NodeOptions & node_options) :
- Node("camera", "/", node_options),
+ Node("camera", "/camera", node_options),
_logger(this->get_logger())
{
init();
@@ -358,8 +358,6 @@ void RealSenseNodeFactory::startDevice()
{
switch(pid)
{
- case SR300_PID:
- case SR300v2_PID:
case RS400_PID:
case RS405_PID:
case RS410_PID:
@@ -377,9 +375,6 @@ void RealSenseNodeFactory::startDevice()
case RS457_PID:
case RS465_PID:
case RS_USB2_PID:
- case RS_L515_PID_PRE_PRQ:
- case RS_L515_PID:
- case RS_L535_PID:
case RS_D585_PID:
case RS_D585S_PID:
_realSenseNode = std::unique_ptr(new BaseRealSenseNode(*this, _device, _parameters, this->get_node_options().use_intra_process_comms()));
diff --git a/realsense2_camera/src/ros_param_backend_foxy.cpp b/realsense2_camera/src/ros_param_backend.cpp
similarity index 100%
rename from realsense2_camera/src/ros_param_backend_foxy.cpp
rename to realsense2_camera/src/ros_param_backend.cpp
diff --git a/realsense2_camera/src/ros_param_backend_dashing.cpp b/realsense2_camera/src/ros_param_backend_dashing.cpp
deleted file mode 100644
index 8ecf8e7e8f..0000000000
--- a/realsense2_camera/src/ros_param_backend_dashing.cpp
+++ /dev/null
@@ -1,34 +0,0 @@
-// Copyright 2023 Intel Corporation. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "ros_param_backend.h"
-
-namespace realsense2_camera
-{
- void ParametersBackend::add_on_set_parameters_callback(ros2_param_callback_type callback)
- {
- rclcpp::Node::OnParametersSetCallbackType prev_callback = _node.set_on_parameters_set_callback(callback);
- if (prev_callback)
- {
- rclcpp::Node::OnParametersSetCallbackType prev_callback = _node.set_on_parameters_set_callback(prev_callback);
- std::stringstream msg;
- msg << "Cannot set another callback to current node: " << _node.get_name();
- throw std::runtime_error(msg.str());
- }
- }
-
- ParametersBackend::~ParametersBackend()
- {
- }
-}
diff --git a/realsense2_camera/src/ros_param_backend_rolling.cpp b/realsense2_camera/src/ros_param_backend_rolling.cpp
deleted file mode 100644
index e3998e9808..0000000000
--- a/realsense2_camera/src/ros_param_backend_rolling.cpp
+++ /dev/null
@@ -1,32 +0,0 @@
-// Copyright 2023 Intel Corporation. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "ros_param_backend.h"
-
-namespace realsense2_camera
-{
- void ParametersBackend::add_on_set_parameters_callback(ros2_param_callback_type callback)
- {
- _ros_callback = _node.add_on_set_parameters_callback(callback);
- }
-
- ParametersBackend::~ParametersBackend()
- {
- if (_ros_callback)
- {
- _node.remove_on_set_parameters_callback((rclcpp::node_interfaces::OnSetParametersCallbackHandle*)(_ros_callback.get()));
- _ros_callback.reset();
- }
- }
-}
diff --git a/realsense2_camera/src/ros_sensor.cpp b/realsense2_camera/src/ros_sensor.cpp
index d860a79dec..dfd5daed99 100644
--- a/realsense2_camera/src/ros_sensor.cpp
+++ b/realsense2_camera/src/ros_sensor.cpp
@@ -85,6 +85,14 @@ RosSensor::~RosSensor()
void RosSensor::setParameters(bool is_rosbag_file)
{
std::string module_name = create_graph_resource_name(rs2_to_ros(get_info(RS2_CAMERA_INFO_NAME)));
+
+    // From FW version 5.14.x.x, if HDR is enabled, updating UVC controls like exposure, gain, etc. is restricted.
+    // So, during node init, forcefully disable HDR upfront and update it with new values later.
+ if((!is_rosbag_file) && supports(RS2_OPTION_HDR_ENABLED))
+ {
+ set_option(RS2_OPTION_HDR_ENABLED, false);
+ }
+
_params.registerDynamicOptions(*this, module_name);
// for rosbag files, don't set hdr(sequence_id) / gain / exposure options
@@ -102,6 +110,24 @@ void RosSensor::UpdateSequenceIdCallback()
if (!supports(RS2_OPTION_SEQUENCE_ID))
return;
+ bool is_hdr_enabled = static_cast(get_option(RS2_OPTION_HDR_ENABLED));
+
+ // Deleter to revert back the RS2_OPTION_HDR_ENABLED value at the end.
+ auto deleter_to_revert_hdr = std::unique_ptr>(&is_hdr_enabled,
+ [&](bool* enable_back_hdr) {
+ if (enable_back_hdr && *enable_back_hdr)
+ {
+ set_option(RS2_OPTION_HDR_ENABLED, true);
+ }
+ });
+
+    // From FW version 5.14.x.x, if HDR is enabled, updating UVC controls like exposure, gain, etc. is restricted.
+ // So, disable it before updating.
+ if (is_hdr_enabled)
+ {
+ set_option(RS2_OPTION_HDR_ENABLED, false);
+ }
+
int original_seq_id = static_cast(get_option(RS2_OPTION_SEQUENCE_ID)); // To Set back to default.
std::string module_name = create_graph_resource_name(rs2_to_ros(get_info(RS2_CAMERA_INFO_NAME)));
@@ -136,8 +162,8 @@ void RosSensor::UpdateSequenceIdCallback()
{
set_option(RS2_OPTION_SEQUENCE_ID, parameter.get_value());
std::vector > funcs;
- funcs.push_back([this](){set_sensor_parameter_to_ros(RS2_OPTION_GAIN);});
- funcs.push_back([this](){set_sensor_parameter_to_ros(RS2_OPTION_EXPOSURE);});
+ funcs.push_back([this](){set_sensor_parameter_to_ros(RS2_OPTION_GAIN);});
+ funcs.push_back([this](){set_sensor_parameter_to_ros(RS2_OPTION_EXPOSURE);});
_params.getParameters()->pushUpdateFunctions(funcs);
});
}
@@ -146,14 +172,14 @@ void RosSensor::UpdateSequenceIdCallback()
ROS_WARN_STREAM("Setting alternative callback: Failed to set parameter:" << option_name << " : " << e.what());
return;
}
-
}
+template
void RosSensor::set_sensor_parameter_to_ros(rs2_option option)
{
std::string module_name = create_graph_resource_name(rs2_to_ros(get_info(RS2_CAMERA_INFO_NAME)));
const std::string option_name(module_name + "." + create_graph_resource_name(rs2_option_to_string(option)));
- float value = get_option(option);
+ auto value = static_cast(get_option(option));
_params.getParameters()->setRosParamValue(option_name, &value);
}
@@ -176,12 +202,6 @@ void RosSensor::registerSensorParameters()
{
_profile_managers.push_back(profile_manager);
}
- profile_manager = std::make_shared(_params.getParameters(), _logger);
- profile_manager->registerProfileParameters(all_profiles, _update_sensor_func);
- if (profile_manager->isTypeExist())
- {
- _profile_managers.push_back(profile_manager);
- }
}
void RosSensor::runFirstFrameInitialization()
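The unique_ptr deleter added in UpdateSequenceIdCallback is effectively a scoped guard: HDR is switched off so exposure/gain can be updated, then restored on exit. The same idea sketched in Python against a hypothetical option dictionary (not the real librealsense API):

```python
from contextlib import contextmanager

@contextmanager
def hdr_disabled(options):
    # Temporarily disable HDR so UVC controls (exposure, gain, ...) can be
    # updated, then restore the original value on exit -- like the deleter.
    was_enabled = options.get('hdr_enabled', False)
    if was_enabled:
        options['hdr_enabled'] = False
    try:
        yield
    finally:
        if was_enabled:
            options['hdr_enabled'] = True
```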
diff --git a/realsense2_camera/src/ros_utils.cpp b/realsense2_camera/src/ros_utils.cpp
index b4661126cf..0ee2172904 100644
--- a/realsense2_camera/src/ros_utils.cpp
+++ b/realsense2_camera/src/ros_utils.cpp
@@ -111,10 +111,8 @@ static const rmw_qos_profile_t rmw_qos_profile_latched =
const rmw_qos_profile_t qos_string_to_qos(std::string str)
{
-#if !defined(DASHING) && !defined(ELOQUENT)
if (str == "UNKNOWN")
return rmw_qos_profile_unknown;
-#endif
if (str == "SYSTEM_DEFAULT")
return rmw_qos_profile_system_default;
if (str == "DEFAULT")
@@ -133,10 +131,8 @@ const rmw_qos_profile_t qos_string_to_qos(std::string str)
const std::string list_available_qos_strings()
{
std::stringstream res;
-#ifndef DASHING
- res << "UNKNOWN" << "\n";
-#endif
- res << "SYSTEM_DEFAULT" << "\n"
+ res << "UNKNOWN" << "\n"
+ << "SYSTEM_DEFAULT" << "\n"
<< "DEFAULT" << "\n"
<< "PARAMETER_EVENTS" << "\n"
<< "SERVICES_DEFAULT" << "\n"
diff --git a/realsense2_camera/src/rs_node_setup.cpp b/realsense2_camera/src/rs_node_setup.cpp
index 6090c94b62..58bc1dd6e4 100755
--- a/realsense2_camera/src/rs_node_setup.cpp
+++ b/realsense2_camera/src/rs_node_setup.cpp
@@ -27,18 +27,12 @@ void BaseRealSenseNode::setup()
setAvailableSensors();
SetBaseStream();
setupFilters();
- setupFiltersPublishers();
setCallbackFunctions();
monitoringProfileChanges();
updateSensors();
publishServices();
}
-void BaseRealSenseNode::setupFiltersPublishers()
-{
- _synced_imu_publisher = std::make_shared(_node.create_publisher("imu", 5));
-}
-
void BaseRealSenseNode::monitoringProfileChanges()
{
int time_interval(10000);
@@ -108,12 +102,12 @@ void BaseRealSenseNode::setAvailableSensors()
ROS_INFO_STREAM("Device Product ID: 0x" << pid);
ROS_INFO_STREAM("Sync Mode: " << ((_sync_frames)?"On":"Off"));
-
+
std::function frame_callback_function = [this](rs2::frame frame){
bool is_filter(_filters.end() != find_if(_filters.begin(), _filters.end(), [](std::shared_ptr f){return (f->is_enabled()); }));
if (_sync_frames || is_filter)
this->_asyncer.invoke(frame);
- else
+ else
frame_callback(frame);
};
@@ -143,7 +137,6 @@ void BaseRealSenseNode::setAvailableSensors()
std::unique_ptr rosSensor;
if (sensor.is() ||
sensor.is() ||
- sensor.is() ||
sensor.is() ||
sensor.is())
{
@@ -155,11 +148,6 @@ void BaseRealSenseNode::setAvailableSensors()
ROS_DEBUG_STREAM("Set " << module_name << " as ImuSensor.");
rosSensor = std::make_unique(sensor, _parameters, imu_callback_function, update_sensor_func, hardware_reset_func, _diagnostics_updater, _logger, false, _dev.is());
}
- else if (sensor.is())
- {
- ROS_DEBUG_STREAM("Set " << module_name << " as PoseSensor.");
- rosSensor = std::make_unique(sensor, _parameters, multiple_message_callback_function, update_sensor_func, hardware_reset_func, _diagnostics_updater, _logger, false, _dev.is());
- }
else
{
ROS_WARN_STREAM("Module Name \"" << module_name << "\" does not define a callback.");
@@ -196,6 +184,9 @@ void BaseRealSenseNode::stopPublishers(const std::vector& profil
}
else if (profile.is())
{
+ _is_accel_enabled = false;
+ _is_gyro_enabled = false;
+ _synced_imu_publisher.reset();
_imu_publishers.erase(sip);
_imu_info_publishers.erase(sip);
}
@@ -234,8 +225,11 @@ void BaseRealSenseNode::startPublishers(const std::vector& profi
if (sensor.rs2::sensor::is())
rectified_image = true;
- image_raw << stream_name << "/image_" << ((rectified_image)?"rect_":"") << "raw";
- camera_info << stream_name << "/camera_info";
+ // adding "~/" to the topic name will add node namespace and node name to the topic
+ // see "Private Namespace Substitution Character" section on https://design.ros2.org/articles/topic_and_service_names.html
+ image_raw << "~/" << stream_name << "/image_" << ((rectified_image)?"rect_":"") << "raw";
+ camera_info << "~/" << stream_name << "/camera_info";
+
// We can use 2 types of publishers:
// Native RCL publisher that support intra-process zero-copy comunication
@@ -260,8 +254,8 @@ void BaseRealSenseNode::startPublishers(const std::vector& profi
if (_align_depth_filter->is_enabled() && (sip != DEPTH) && sip.second < 2)
{
std::stringstream aligned_image_raw, aligned_camera_info;
- aligned_image_raw << "aligned_depth_to_" << stream_name << "/image_raw";
- aligned_camera_info << "aligned_depth_to_" << stream_name << "/camera_info";
+ aligned_image_raw << "~/" << "aligned_depth_to_" << stream_name << "/image_raw";
+ aligned_camera_info << "~/" << "aligned_depth_to_" << stream_name << "/camera_info";
std::string aligned_stream_name = "aligned_depth_to_" + stream_name;
@@ -291,28 +285,26 @@ void BaseRealSenseNode::startPublishers(const std::vector& profi
}
else if (profile.is())
{
+ if(profile.stream_type() == RS2_STREAM_ACCEL)
+ _is_accel_enabled = true;
+ else if (profile.stream_type() == RS2_STREAM_GYRO)
+ _is_gyro_enabled = true;
+
std::stringstream data_topic_name, info_topic_name;
- data_topic_name << stream_name << "/sample";
+ data_topic_name << "~/" << stream_name << "/sample";
_imu_publishers[sip] = _node.create_publisher(data_topic_name.str(),
rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(qos), qos));
// Publish Intrinsics:
- info_topic_name << stream_name << "/imu_info";
+ info_topic_name << "~/" << stream_name << "/imu_info";
_imu_info_publishers[sip] = _node.create_publisher(info_topic_name.str(),
rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(info_qos), info_qos));
IMUInfo info_msg = getImuInfo(profile);
_imu_info_publishers[sip]->publish(info_msg);
}
- else if (profile.is())
- {
- std::stringstream data_topic_name, info_topic_name;
- data_topic_name << stream_name << "/sample";
- _odom_publisher = _node.create_publisher(data_topic_name.str(),
- rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(qos), qos));
- }
- std::string topic_metadata(stream_name + "/metadata");
+ std::string topic_metadata("~/" + stream_name + "/metadata");
_metadata_publishers[sip] = _node.create_publisher(topic_metadata,
rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(info_qos), info_qos));
-
+
if (!((rs2::stream_profile)profile==(rs2::stream_profile)_base_profile))
{
@@ -320,12 +312,20 @@ void BaseRealSenseNode::startPublishers(const std::vector& profi
rclcpp::PublisherOptionsWithAllocator> options;
options.use_intra_process_comm = rclcpp::IntraProcessSetting::Disable;
rmw_qos_profile_t extrinsics_qos = rmw_qos_profile_latched;
-
- std::string topic_extrinsics("extrinsics/" + create_graph_resource_name(ros_stream_to_string(_base_profile.stream_type()) + "_to_" + stream_name));
- _extrinsics_publishers[sip] = _node.create_publisher(topic_extrinsics,
+
+ std::string topic_extrinsics("~/extrinsics/" + create_graph_resource_name(ros_stream_to_string(_base_profile.stream_type()) + "_to_" + stream_name));
+ _extrinsics_publishers[sip] = _node.create_publisher(topic_extrinsics,
rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(extrinsics_qos), extrinsics_qos), std::move(options));
}
}
+ if (_is_accel_enabled && _is_gyro_enabled && (_imu_sync_method > imu_sync_method::NONE))
+ {
+ rmw_qos_profile_t qos = _use_intra_process ? qos_string_to_qos(DEFAULT_QOS) : qos_string_to_qos(HID_QOS);
+
+ _synced_imu_publisher = std::make_shared(_node.create_publisher("~/imu",
+ rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(qos), qos)));
+ }
+
}
void BaseRealSenseNode::startRGBDPublisherIfNeeded()
@@ -337,7 +337,9 @@ void BaseRealSenseNode::startRGBDPublisherIfNeeded()
{
rmw_qos_profile_t qos = _use_intra_process ? qos_string_to_qos(DEFAULT_QOS) : qos_string_to_qos(IMAGE_QOS);
- _rgbd_publisher = _node.create_publisher("rgbd",
+ // adding "~/" to the topic name will add node namespace and node name to the topic
+ // see "Private Namespace Substitution Character" section on https://design.ros2.org/articles/topic_and_service_names.html
+ _rgbd_publisher = _node.create_publisher("~/rgbd",
rclcpp::QoS(rclcpp::QoSInitialization::from_rmw(qos), qos));
}
else {
@@ -348,7 +350,7 @@ void BaseRealSenseNode::startRGBDPublisherIfNeeded()
}
void BaseRealSenseNode::updateSensors()
-{
+{
std::lock_guard lock_guard(_update_sensor_mutex);
try{
for(auto&& sensor : _available_ros_sensors)
@@ -358,7 +360,7 @@ void BaseRealSenseNode::updateSensors()
std::vector wanted_profiles;
bool is_profile_changed(sensor->getUpdatedProfiles(wanted_profiles));
- bool is_video_sensor = (sensor->is() || sensor->is() || sensor->is());
+ bool is_video_sensor = (sensor->is() || sensor->is());
// do all updates if profile has been changed, or if the align depth filter status has been changed
// and we are on a video sensor. TODO: explore better options to monitor and update changes
@@ -424,8 +426,10 @@ void BaseRealSenseNode::updateSensors()
void BaseRealSenseNode::publishServices()
{
+ // adding "~/" to the service name will add node namespace and node name to the service
+ // see "Private Namespace Substitution Character" section on https://design.ros2.org/articles/topic_and_service_names.html
_device_info_srv = _node.create_service(
- "device_info",
+ "~/device_info",
[&](const realsense2_camera_msgs::srv::DeviceInfo::Request::SharedPtr req,
realsense2_camera_msgs::srv::DeviceInfo::Response::SharedPtr res)
{getDeviceInfo(req, res);});
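The "~/" prefixes added above rely on ROS 2 private namespace substitution, which expands a private name under the node's namespace and node name. A rough illustration of the expansion rule (helper name hypothetical; the real expansion is performed by rcl):

```python
def expand_private(name, namespace, node_name):
    # "~/x" expands to "/<namespace>/<node_name>/x"; other names pass through.
    if name.startswith("~/"):
        return f"{namespace.rstrip('/')}/{node_name}/{name[2:]}"
    return name
```

With the factory now created as Node("camera", "/camera", ...), a publisher on "~/depth/color/points" therefore resolves to /camera/camera/depth/color/points.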
diff --git a/realsense2_camera/src/sensor_params.cpp b/realsense2_camera/src/sensor_params.cpp
index ff0cf151e9..c347a1d979 100644
--- a/realsense2_camera/src/sensor_params.cpp
+++ b/realsense2_camera/src/sensor_params.cpp
@@ -70,14 +70,7 @@ std::map get_enum_method(rs2::options sensor, rs2_option optio
template
void param_set_option(rs2::options sensor, rs2_option option, const rclcpp::Parameter& parameter)
{
- try
- {
- sensor.set_option(option, parameter.get_value());
- }
- catch(const std::exception& e)
- {
- std::cout << "Failed to set value: " << e.what() << std::endl;
- }
+ sensor.set_option(option, parameter.get_value());
}
void SensorParams::clearParameters()
diff --git a/realsense2_camera/src/tfs.cpp b/realsense2_camera/src/tfs.cpp
index 20d6c30619..0c0209dfc7 100644
--- a/realsense2_camera/src/tfs.cpp
+++ b/realsense2_camera/src/tfs.cpp
@@ -31,15 +31,9 @@ void BaseRealSenseNode::restartStaticTransformBroadcaster()
rclcpp::PublisherOptionsWithAllocator> options;
options.use_intra_process_comm = rclcpp::IntraProcessSetting::Disable;
- #ifndef DASHING
_static_tf_broadcaster = std::make_shared(_node,
tf2_ros::StaticBroadcasterQoS(),
std::move(options));
- #else
- _static_tf_broadcaster = std::make_shared(_node,
- rclcpp::QoS(100),
- std::move(options));
- #endif
}
void BaseRealSenseNode::append_static_tf_msg(const rclcpp::Time& t,
@@ -329,90 +323,3 @@ void BaseRealSenseNode::startDynamicTf()
}
}
-void BaseRealSenseNode::pose_callback(rs2::frame frame)
-{
- double frame_time = frame.get_timestamp();
- bool placeholder_false(false);
- if (_is_initialized_time_base.compare_exchange_strong(placeholder_false, true) )
- {
- _is_initialized_time_base = setBaseTime(frame_time, frame.get_frame_timestamp_domain());
- }
-
- ROS_DEBUG("Frame arrived: stream: %s ; index: %d ; Timestamp Domain: %s",
- rs2_stream_to_string(frame.get_profile().stream_type()),
- frame.get_profile().stream_index(),
- rs2_timestamp_domain_to_string(frame.get_frame_timestamp_domain()));
- rs2_pose pose = frame.as().get_pose_data();
- rclcpp::Time t(frameSystemTimeSec(frame));
-
- geometry_msgs::msg::PoseStamped pose_msg;
- pose_msg.pose.position.x = -pose.translation.z;
- pose_msg.pose.position.y = -pose.translation.x;
- pose_msg.pose.position.z = pose.translation.y;
- pose_msg.pose.orientation.x = -pose.rotation.z;
- pose_msg.pose.orientation.y = -pose.rotation.x;
- pose_msg.pose.orientation.z = pose.rotation.y;
- pose_msg.pose.orientation.w = pose.rotation.w;
-
- static tf2_ros::TransformBroadcaster br(_node);
- geometry_msgs::msg::TransformStamped msg;
- msg.header.stamp = t;
- msg.header.frame_id = DEFAULT_ODOM_FRAME_ID;
- msg.child_frame_id = FRAME_ID(POSE);
- msg.transform.translation.x = pose_msg.pose.position.x;
- msg.transform.translation.y = pose_msg.pose.position.y;
- msg.transform.translation.z = pose_msg.pose.position.z;
- msg.transform.rotation.x = pose_msg.pose.orientation.x;
- msg.transform.rotation.y = pose_msg.pose.orientation.y;
- msg.transform.rotation.z = pose_msg.pose.orientation.z;
- msg.transform.rotation.w = pose_msg.pose.orientation.w;
-
- if (_publish_odom_tf) br.sendTransform(msg);
-
- if (0 != _odom_publisher->get_subscription_count())
- {
- double cov_pose(_linear_accel_cov * pow(10, 3-(int)pose.tracker_confidence));
- double cov_twist(_angular_velocity_cov * pow(10, 1-(int)pose.tracker_confidence));
-
- geometry_msgs::msg::Vector3Stamped v_msg;
- tf2::Vector3 tfv(-pose.velocity.z, -pose.velocity.x, pose.velocity.y);
- tf2::Quaternion q(-msg.transform.rotation.x,
- -msg.transform.rotation.y,
- -msg.transform.rotation.z,
- msg.transform.rotation.w);
- tfv=tf2::quatRotate(q,tfv);
- v_msg.vector.x = tfv.x();
- v_msg.vector.y = tfv.y();
- v_msg.vector.z = tfv.z();
-
- tfv = tf2::Vector3(-pose.angular_velocity.z, -pose.angular_velocity.x, pose.angular_velocity.y);
- tfv=tf2::quatRotate(q,tfv);
- geometry_msgs::msg::Vector3Stamped om_msg;
- om_msg.vector.x = tfv.x();
- om_msg.vector.y = tfv.y();
- om_msg.vector.z = tfv.z();
-
- nav_msgs::msg::Odometry odom_msg;
-
- odom_msg.header.frame_id = DEFAULT_ODOM_FRAME_ID;
- odom_msg.child_frame_id = FRAME_ID(POSE);
- odom_msg.header.stamp = t;
- odom_msg.pose.pose = pose_msg.pose;
- odom_msg.pose.covariance = {cov_pose, 0, 0, 0, 0, 0,
- 0, cov_pose, 0, 0, 0, 0,
- 0, 0, cov_pose, 0, 0, 0,
- 0, 0, 0, cov_twist, 0, 0,
- 0, 0, 0, 0, cov_twist, 0,
- 0, 0, 0, 0, 0, cov_twist};
- odom_msg.twist.twist.linear = v_msg.vector;
- odom_msg.twist.twist.angular = om_msg.vector;
- odom_msg.twist.covariance ={cov_pose, 0, 0, 0, 0, 0,
- 0, cov_pose, 0, 0, 0, 0,
- 0, 0, cov_pose, 0, 0, 0,
- 0, 0, 0, cov_twist, 0, 0,
- 0, 0, 0, 0, cov_twist, 0,
- 0, 0, 0, 0, 0, cov_twist};
- _odom_publisher->publish(odom_msg);
- ROS_DEBUG("Publish %s stream", rs2_stream_to_string(frame.get_profile().stream_type()));
- }
-}
diff --git a/realsense2_camera/test/README.md b/realsense2_camera/test/README.md
index 0be7600c24..ea3a043a92 100644
--- a/realsense2_camera/test/README.md
+++ b/realsense2_camera/test/README.md
@@ -146,7 +146,32 @@ The below command finds the test with the name test_static_tf_1 in realsense2_ca
pytest-3 -s -k test_static_tf_1 realsense2_camera/test/
-### Points to be noted while writing pytests
-The tests that are in one file are nromally run in parallel. So if there are multiple tests in one file, the system capacity can influence the test execution. It's recomended to have 3-4 tests in file, more than that can affect the test results due to delays.
-
+### Marking tests as regression tests
+Some of the tests, especially live tests with multiple runs (e.g., all the profile tests in test_camera_all_profile_tests.py), take a long time. Such tests are marked with a conditional skip so that "colcon test" skips them.
+A user can add a test to this conditional skip by adding the marker below.
+
+@pytest.mark.skipif (os.getenv('RS_ROS_REGRESSION', "not found") == "not found",reason="Regression is not enabled, define RS_ROS_REGRESSION")
+
+### Running skipped regression tests
+1. Set the environment variable RS_ROS_REGRESSION to 1 and run "colcon test"
+2. pytest example:
+ RS_ROS_REGRESSION=1 PYTHONPATH=$PYTHONPATH:$PWD/realsense2_camera/test/utils:$PWD/realsense2_camera//launch:$PWD/realsense2_camera//scripts pytest-3 -s realsense2_camera/test/live_camera/test_camera_aligned_tests.py -k test_camera_align_depth_color_all -m d415
+
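The skipif marker above reduces to a plain environment-variable check; a minimal sketch of the gate (helper name hypothetical, not part of the test utils):

```python
import os

def regression_enabled():
    # Mirrors the skipif condition: regression tests run only when
    # RS_ROS_REGRESSION is defined in the environment.
    return os.getenv('RS_ROS_REGRESSION', "not found") != "not found"
```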
+## Points to be noted while writing pytests
+The tests in one file are normally run in parallel (though this can change with the pytest plugin). So if there are multiple tests in one file, system capacity can influence the test execution. It's recommended to have 3-4 tests per file; more than that can affect the test results due to delays.
+### Passing/changing parameters
+The parameters passed while creating the node can be initialized individually for each test; see the test_parameterized_template example for reference. The default values are taken from rs_launch.py, and the passed parameters override those defaults. Parameters that can be dynamically modified can be changed using the param interface provided; however, the function create_param_ifs has to be called to create this interface. See test_d455_basic_tests.py for reference. There are specific functions to change string, integer, and bool parameters; the utils can be extended if any more types are needed.
+### Difference in setting the bool parameters
+There is a difference between setting the default values of bool parameters and setting them dynamically.
+The bool test params are assigned within quotes, as below.
+ test_params_all_profiles_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_accel':"True",
+ 'enable_gyro':"True",
+ 'unite_imu_method':1,
+ }
+
+However, the function that sets a bool parameter dynamically takes the Python bool datatype. For example:
+ self.set_bool_param('enable_accel', False)
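To make the distinction concrete, here is a minimal self-contained illustration; the set_bool_param stand-in below is hypothetical and only mirrors the expected argument type of the real helper in the test utils, it does not talk to the node:

```python
# Launch-time defaults: bool parameters are given as quoted strings.
test_params_all_profiles_d455 = {
    'camera_name': 'D455',
    'enable_accel': "True",   # a string, not a Python bool
    'enable_gyro': "True",
}

# Dynamic updates: the setter takes a Python bool.
# Hypothetical stand-in for the real set_bool_param in the test utils.
def set_bool_param(name, value):
    if not isinstance(value, bool):
        raise TypeError(name + " must be set with a Python bool")
    return (name, value)

assert isinstance(test_params_all_profiles_d455['enable_accel'], str)
set_bool_param('enable_accel', False)   # dynamic change uses a bool
```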
diff --git a/realsense2_camera/test/live_camera/test_camera_aligned_tests.py b/realsense2_camera/test/live_camera/test_camera_aligned_tests.py
new file mode 100644
index 0000000000..0c17e76e47
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_aligned_tests.py
@@ -0,0 +1,279 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+from pytest_rs_utils import delayed_launch_descr_with_parameters
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+import pytest_live_camera_utils
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+from pytest_live_camera_utils import debug_print
+
+
+
+test_params_align_depth_color_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+test_params_align_depth_color_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+'''
+This test was implemented as a template to set the parameters and run the test.
+This directory is not added to the CMakeLists.txt so as to avoid the colcon failure in the
+machines that don't have the D455 connected.
+1. Only a subset of parameter types are implemented in py_rs_utils; it has to be extended for others.
+2. After setting the param, rclpy.spin_once may be needed. The test passes even without this, though.
+'''
+
+@pytest.mark.parametrize("launch_descr_with_parameters",[
+ pytest.param(test_params_align_depth_color_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_align_depth_color_d415, marks=pytest.mark.d415),
+ #pytest.param(test_params_align_depth_color_d435, marks=pytest.mark.d435),
+ ]
+ ,indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_AlignDepthColor(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_align_depth_color(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(params['device_type']):
+ print("Device not found? : " + params['device_type'])
+ return
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ },
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':848,
+ 'height':480,
+ },
+ {'topic':get_node_heirarchy(params)+'/aligned_depth_to_color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ },
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ print("Starting camera test...")
+ self.init_test("RsTest"+params['camera_name'])
+ self.wait_for_node(params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+ self.set_string_param('rgb_camera.profile', '1280x720x30')
+ self.set_bool_param('enable_color', True)
+ themes[0]['width'] = 1280
+ themes[0]['height'] = 720
+ themes[2]['width'] = 1280
+ themes[2]['height'] = 720
+
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+ finally:
+            #this step is important; without it the test will fail on the next run
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+
+test_params_all_profiles_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+test_params_all_profiles_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+test_params_all_profiles_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'rgb_camera.profile':'640x480x30',
+ 'align_depth.enable':'true'
+ }
+
+
+'''
+This test was implemented as a template to set the parameters and run the test.
+This directory is not added to the CMakeLists.txt so as to avoid the colcon failure in the
+machines that don't have the D455 connected.
+1. Only a subset of parameter types are implemented in py_rs_utils, it has to be extended for others
+2. After setting the param, rclpy.spin_once may be needed.Test passes even without this though.
+'''
+@pytest.mark.skipif(os.getenv('RS_ROS_REGRESSION', "not found") == "not found", reason="Regression is not enabled, define RS_ROS_REGRESSION")
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_all_profiles_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_all_profiles_d415, marks=pytest.mark.d415),
+ pytest.param(test_params_all_profiles_d435, marks=pytest.mark.d435),]
+ ,indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_AllAlignDepthColor(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_all_align_depth_color(self,launch_descr_with_parameters):
+ skipped_tests = []
+ failed_tests = []
+ num_passed = 0
+ num_failed = 0
+ params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(params['device_type']):
+ print("Device not found? : " + params['device_type'])
+ return
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ },
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':848,
+ 'height':480,
+ },
+ {'topic':get_node_heirarchy(params)+'/aligned_depth_to_color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ },
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+params['camera_name'])
+ self.spin_for_time(wait_time=1.0)
+ cap = pytest_live_camera_utils.get_camera_capabilities(params['device_type'])
+        if cap is None:
+ debug_print("Device not found? : " + params['device_type'])
+ return
+ self.create_param_ifs(get_node_heirarchy(params))
+ color_profiles = set([i[1] for i in cap["color_profile"] if i[2] == "RGB8"])
+ depth_profiles = set([i[1] for i in cap["depth_profile"] if i[0] == "Depth"])
+ for color_profile in color_profiles:
+ for depth_profile in depth_profiles:
+ if depth_profile == color_profile:
+ continue
+ print("Testing the alignment of Depth:", depth_profile, " and Color:", color_profile)
+                self.set_bool_param('enable_color', False)
+                self.set_bool_param('enable_depth', False)
+ self.set_bool_param('align_depth.enable', False)
+
+ themes[0]['width'] = themes[2]['width'] = int(color_profile.split('x')[0])
+ themes[0]['height'] = themes[2]['height'] = int(color_profile.split('x')[1])
+ themes[1]['width'] = int(depth_profile.split('x')[0])
+ themes[1]['height'] = int(depth_profile.split('x')[1])
+ dfps = int(depth_profile.split('x')[2])
+ cfps = int(color_profile.split('x')[2])
+                fps = min(dfps, cfps)
+ timeout=100.0/fps
+ #for the changes to take effect
+ self.spin_for_time(wait_time=timeout/20)
+ self.set_string_param('rgb_camera.profile', color_profile)
+ self.set_string_param('depth_module.profile', depth_profile)
+                self.set_bool_param('enable_color', True)
+                self.set_bool_param('enable_depth', True)
+ self.set_bool_param('align_depth.enable', True)
+ #for the changes to take effect
+ self.spin_for_time(wait_time=timeout/20)
+ try:
+ ret = self.run_test(themes, timeout=timeout)
+ assert ret[0], ret[1]
+ assert self.process_data(themes), "".join(str(depth_profile) + " " + str(color_profile)) + " failed"
+ num_passed += 1
+ except Exception as e:
+ exc_type, exc_obj, exc_tb = sys.exc_info()
+ fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
+ print("Test failed")
+ print("Tested the alignment of Depth:", depth_profile, " and Color:", color_profile, " with timeout ", timeout)
+ print(e)
+ print(exc_type, fname, exc_tb.tb_lineno)
+ num_failed += 1
+ failed_tests.append("".join(str(depth_profile) + " " + str(color_profile)))
+
+ finally:
+            #this step is important; without it the test will fail on the next run
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+ print("Tests passed " + str(num_passed))
+ print("Tests skipped " + str(len(skipped_tests)))
+ if len(skipped_tests):
+ debug_print("\nSkipped tests:" + params['device_type'])
+ debug_print("\n".join(skipped_tests))
+ print("Tests failed " + str(num_failed))
+ if len(failed_tests):
+ print("\nFailed tests:" + params['device_type'])
+ print("\n".join(failed_tests))
+
+ def disable_all_params(self):
+ self.set_bool_param('enable_color', False)
+ self.set_bool_param('enable_depth', False)
+ self.set_bool_param('enable_infra', False)
+ self.set_bool_param('enable_infra1', False)
+ self.set_bool_param('enable_infra2', False)
+
diff --git a/realsense2_camera/test/live_camera/test_camera_all_profile_tests.py b/realsense2_camera/test/live_camera/test_camera_all_profile_tests.py
new file mode 100644
index 0000000000..b337e6f8d5
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_all_profile_tests.py
@@ -0,0 +1,265 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+from pytest_rs_utils import delayed_launch_descr_with_parameters
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+import pytest_live_camera_utils
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+from pytest_live_camera_utils import debug_print
+def check_if_skip_test(profile, format):
+ '''
+ if profile == 'Color':
+ if "BGRA8" == format:
+ return True
+ if "RGBA8" == format:
+ return True
+ if "Y8" == format:
+ return True
+ elif profile == 'Depth':
+ if "Z16" == format:
+ return True
+    elif profile == 'Infrared':
+ if "Y8" == format:
+ return True
+ if "Y16" == format:
+ return True
+ if "BGRA8" == format:
+ return True
+ if "RGBA8" == format:
+ return True
+ if "Y10BPACK" == format:
+ return True
+ if "UYVY" == format:
+ return True
+ if "BGR8" == format:
+ return True
+ if "RGB8" == format:
+ return True
+ if "RAW10" == format:
+ return True
+ elif profile == 'Infrared1':
+ if "Y8" ==format:
+ return True
+ if "Y16" ==format:
+ return True
+ if "Y10BPACK" == format:
+ return True
+ if "UYVY" ==format:
+ return True
+ if "BGR8" ==format:
+ return True
+ if "RGB8" ==format:
+ return True
+ if "RAW10" ==format:
+ return True
+ if profile == 'Infrared2':
+ if "Y8" == format:
+ return True
+ if "Y16" == format:
+ return True
+ if "Y10BPACK" == format:
+ return True
+ if "UYVY" == format:
+ return True
+ if "BGR8" == format:
+ return True
+ if "RGB8" == format:
+ return True
+ if "RAW10" == format:
+ return True
+ '''
+ if profile == 'Infrared':
+ if "Y8" == format:
+ return True
+ if "Y16" == format:
+ return True
+ if profile == 'Infrared1':
+ if "Y8" ==format:
+ return True
+ if "Y16" ==format:
+ return True
+ if profile == 'Infrared2':
+ if "Y8" == format:
+ return True
+ if "Y16" == format:
+ return True
+ return False
+
+
+test_params_all_profiles_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ }
+test_params_all_profiles_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ }
+test_params_all_profiles_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ }
+
+
+'''
+This test was implemented as a template to set the parameters and run the test.
+This directory is not added to the CMakeLists.txt so as to avoid the colcon failure in the
+machines that don't have the D455 connected.
+1. Only a subset of parameter types are implemented in py_rs_utils; it has to be extended for others.
+2. After setting the param, rclpy.spin_once may be needed. The test passes even without this, though.
+'''
+@pytest.mark.skipif(os.getenv('RS_ROS_REGRESSION', "not found") == "not found", reason="Regression is not enabled, define RS_ROS_REGRESSION")
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_all_profiles_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_all_profiles_d415, marks=pytest.mark.d415),
+ pytest.param(test_params_all_profiles_d435, marks=pytest.mark.d435),]
+ ,indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestLiveCamera_Change_Resolution(pytest_rs_utils.RsTestBaseClass):
+ def test_LiveCamera_Change_Resolution(self,launch_descr_with_parameters):
+ skipped_tests = []
+ failed_tests = []
+ num_passed = 0
+ num_failed = 0
+ params = launch_descr_with_parameters[1]
+ themes = [{'topic':get_node_heirarchy(params)+'/color/image_raw', 'msg_type':msg_Image,'expected_data_chunks':1}]
+ config = pytest_live_camera_utils.get_profile_config(get_node_heirarchy(params))
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+params['camera_name'])
+ self.spin_for_time(wait_time=1.0)
+ cap = pytest_live_camera_utils.get_camera_capabilities(params['device_type'])
+            if cap is None:
+ debug_print("Device not found? : " + params['device_type'])
+ return
+ self.create_param_ifs(get_node_heirarchy(params))
+ config["Color"]["default_profile1"],config["Color"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["color_profile"], "Color")
+ config["Depth"]["default_profile1"],config["Depth"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["depth_profile"], "Depth")
+ config["Infrared"]["default_profile1"],config["Infrared"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["depth_profile"], "Infrared")
+ config["Infrared1"]["default_profile1"],config["Infrared1"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["depth_profile"], "Infrared1")
+ config["Infrared2"]["default_profile1"],config["Infrared2"]["default_profile2"] = pytest_live_camera_utils.get_default_profiles(cap["depth_profile"], "Infrared2")
+ for key in cap["color_profile"]:
+ profile_type = key[0]
+ profile = key[1]
+ format = key[2]
+ if check_if_skip_test(profile_type, format):
+ skipped_tests.append(" ".join(key))
+ continue
+ print("Testing " + " ".join(key))
+ themes[0]['topic'] = config[profile_type]['topic']
+ themes[0]['width'] = int(profile.split('x')[0])
+ themes[0]['height'] = int(profile.split('x')[1])
+ if themes[0]['width'] == int(config[profile_type]["default_profile2"].split('x')[0]):
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile1"])
+ else:
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile2"])
+ self.set_bool_param(config[profile_type]["param"], True)
+ self.disable_all_params()
+ self.spin_for_time(wait_time=0.2)
+ self.set_string_param(config[profile_type]["profile"], profile)
+ self.set_string_param(config[profile_type]["format"], format)
+ self.set_bool_param(config[profile_type]["param"], True)
+ try:
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes), " ".join(key) + " failed"
+ num_passed += 1
+ except Exception as e:
+ exc_type, exc_obj, exc_tb = sys.exc_info()
+ fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
+ print("Test failed")
+ print(e)
+ print(exc_type, fname, exc_tb.tb_lineno)
+ num_failed += 1
+ failed_tests.append(" ".join(key))
+ debug_print("Color tests completed")
+ for key in cap["depth_profile"]:
+ profile_type = key[0]
+ profile = key[1]
+ format = key[2]
+ if check_if_skip_test(profile_type, format):
+ skipped_tests.append(" ".join(key))
+ continue
+ print("Testing " + " ".join(key))
+
+ themes[0]['topic'] = config[profile_type]['topic']
+ themes[0]['width'] = int(profile.split('x')[0])
+ themes[0]['height'] = int(profile.split('x')[1])
+ if themes[0]['width'] == int(config[profile_type]["default_profile2"].split('x')[0]):
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile1"])
+ else:
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile2"])
+ self.set_bool_param(config[profile_type]["param"], True)
+
+
+ self.disable_all_params()
+ self.spin_for_time(wait_time=0.2)
+ self.set_string_param(config[profile_type]["profile"], profile)
+ self.set_string_param(config[profile_type]["format"], format)
+ self.set_bool_param(config[profile_type]["param"], True)
+ try:
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes), " ".join(key) + " failed"
+ num_passed += 1
+ except Exception as e:
+ print("Test failed")
+ print(e)
+ num_failed += 1
+ failed_tests.append(" ".join(key))
+ debug_print("Depth tests completed")
+ finally:
+            #this step is important; without it the test will fail on the next run
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+ print("Tests passed " + str(num_passed))
+ print("Tests skipped " + str(len(skipped_tests)))
+ if len(skipped_tests):
+ debug_print("\nSkipped tests:" + params['device_type'])
+ debug_print("\n".join(skipped_tests))
+ print("Tests failed " + str(num_failed))
+ if len(failed_tests):
+ print("\nFailed tests:" + params['device_type'])
+ print("\n".join(failed_tests))
+
+ def disable_all_params(self):
+ self.set_bool_param('enable_color', False)
+ self.set_bool_param('enable_depth', False)
+ self.set_bool_param('enable_infra', False)
+ self.set_bool_param('enable_infra1', False)
+ self.set_bool_param('enable_infra2', False)
+
diff --git a/realsense2_camera/test/live_camera/test_camera_fps_tests.py b/realsense2_camera/test/live_camera/test_camera_fps_tests.py
new file mode 100644
index 0000000000..c4b91354d4
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_fps_tests.py
@@ -0,0 +1,128 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+import pytest_live_camera_utils
+from pytest_rs_utils import launch_descr_with_parameters
+
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+test_params_test_fps_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+
+ }
+test_params_test_fps_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ }
+test_profiles ={
+ 'D455':{
+ 'depth_test_profiles':['640x480x5','640x480x15', '640x480x30', '640x480x90'],
+ 'color_test_profiles':['640x480x5','640x480x15', '640x480x30', '1280x720x30']
+ },
+ 'D415':{
+ 'depth_test_profiles':['640x480x6','640x480x15', '640x480x30', '640x480x90'],
+ 'color_test_profiles':['640x480x6','640x480x15', '640x480x30', '1280x720x30']
+ }
+}
+'''
+The test was implemented to check the fps of Depth and Color frames. The RosTopicHz had to be
+modified to make it work, see py_rs_utils for more details.
+To check the fps, a value 'expected_fps_in_hz' has to be added to the corresponding theme
+'''
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_test_fps_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_test_fps_d415, marks=pytest.mark.d415),
+ #pytest.param(test_params_test_fps_d435, marks=pytest.mark.d435),
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_TestFPS(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_test_fps(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(params['device_type']):
+ print("Device not found? : " + params['device_type'])
+ return
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ print("Starting camera test...")
+ self.init_test("RsTest"+params['camera_name'])
+ self.wait_for_node(params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+ #assert self.set_bool_param('enable_color', False)
+
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':100,
+ }
+ ]
+ profiles = test_profiles[params['camera_name']]['depth_test_profiles']
+ assert self.set_bool_param('enable_color', False)
+ for profile in profiles:
+ print("Testing profile: ", profile)
+ themes[0]['expected_fps_in_hz'] = float(profile.split('x')[2])
+ assert self.set_string_param('depth_module.profile', profile)
+ assert self.set_bool_param('enable_depth', True)
+ self.spin_for_time(0.5)
+ ret = self.run_test(themes, timeout=25.0)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':100,
+ }
+ ]
+ assert self.set_bool_param('enable_depth', False)
+ profiles = test_profiles[params['camera_name']]['color_test_profiles']
+ for profile in profiles:
+ print("Testing profile: ", profile)
+ themes[0]['expected_fps_in_hz'] = float(profile.split('x')[2])
+ assert self.set_string_param('rgb_camera.profile', profile)
+ assert self.set_bool_param('enable_color', True)
+ ret = self.run_test(themes, timeout=25.0)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+
+ finally:
+            #this step is important; without it the test will fail on the next run
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+
+
diff --git a/realsense2_camera/test/live_camera/test_camera_imu_tests.py b/realsense2_camera/test/live_camera/test_camera_imu_tests.py
new file mode 100644
index 0000000000..384d249f47
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_imu_tests.py
@@ -0,0 +1,120 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+from pytest_rs_utils import delayed_launch_descr_with_parameters
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+import pytest_live_camera_utils
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+from pytest_live_camera_utils import debug_print
+
+test_params_all_profiles_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_accel':"True",
+ 'enable_gyro':"True",
+ 'unite_imu_method':1,
+ }
+
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_all_profiles_d455, marks=pytest.mark.d455)
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestLiveCamera_TestMotionSensor(pytest_rs_utils.RsTestBaseClass):
+ def test_LiveCamera_check_motion_sensor(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(params['device_type']):
+ print("Device not found? : " + params['device_type'])
+ return
+ themes = [{'topic':get_node_heirarchy(params)+'/imu', 'msg_type':msg_Imu,'expected_data_chunks':1},
+ {'topic':get_node_heirarchy(params)+'/gyro/sample', 'msg_type':msg_Imu,'expected_data_chunks':1},
+ {'topic':get_node_heirarchy(params)+'/accel/sample', 'msg_type':msg_Imu,'expected_data_chunks':1}]
+ IMU_TOPIC = 0
+ GYRO_TOPIC = 1
+ ACCEL_TOPIC = 2
+ if params['unite_imu_method'] == '0':
+ themes[IMU_TOPIC]['expected_data_chunks'] = 0
+ try:
+ #initialize
+ self.init_test("RsTest"+params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+
+
+ #run with default params and check the data
+ msg = "Test with the default params "
+ print(msg)
+ ret = self.run_test(themes, timeout=50)
+ assert ret[0], msg + str(ret[1])
+ assert self.process_data(themes), msg + " failed"
+
+ msg = "Test with the accel false "
+ self.set_integer_param('unite_imu_method', 0)
+ self.set_bool_param('enable_accel', False)
+ self.set_bool_param('enable_gyro', True)
+ themes[ACCEL_TOPIC]['expected_data_chunks'] = 0
+ themes[IMU_TOPIC]['expected_data_chunks'] = 0
+ print(msg)
+ ret = self.run_test(themes)
+ assert ret[0], msg + str(ret[1])
+ assert self.process_data(themes), msg + " failed"
+
+ msg = "Test with the gyro false "
+ self.set_bool_param('enable_accel', True)
+ self.set_bool_param('enable_gyro', False)
+ themes[IMU_TOPIC]['expected_data_chunks'] = 0
+ themes[ACCEL_TOPIC]['expected_data_chunks'] = 1
+ themes[GYRO_TOPIC]['expected_data_chunks'] = 0
+ print(msg)
+ self.spin_for_time(wait_time=1.0)
+ ret = self.run_test(themes)
+ assert ret[0], msg + str(ret[1])
+ assert self.process_data(themes), msg + " failed"
+
+ msg = "Test with both gyro and accel false "
+ self.set_bool_param('enable_accel', False)
+ self.set_bool_param('enable_gyro', False)
+ self.set_integer_param('unite_imu_method', 1)
+ self.spin_for_time(wait_time=1.0)
+ themes[ACCEL_TOPIC]['expected_data_chunks'] = 0
+ themes[GYRO_TOPIC]['expected_data_chunks'] = 0
+ themes[IMU_TOPIC]['expected_data_chunks'] = 0
+ print(msg)
+ ret = self.run_test(themes, initial_wait_time=1.0)
+ assert ret[0], msg + " failed"
+ assert self.process_data(themes), msg +" failed"
+ finally:
+            #this step is important; without it the test will fail on the next run
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
diff --git a/realsense2_camera/test/live_camera/test_camera_point_cloud_tests.py b/realsense2_camera/test/live_camera/test_camera_point_cloud_tests.py
new file mode 100644
index 0000000000..14def86ca9
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_point_cloud_tests.py
@@ -0,0 +1,114 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from rclpy.qos import DurabilityPolicy
+from rclpy.qos import HistoryPolicy
+from rclpy.qos import QoSProfile
+import tf2_ros
+
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+from tf2_msgs.msg import TFMessage as msg_TFMessage
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+
+import pytest_live_camera_utils
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+'''
+The test was implemented to check the tf_static before and after infra1 was enabled. There shouldn't be any
+related data before enabling.
+'''
+test_params_points_cloud_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'pointcloud.enable': 'true'
+ }
+test_params_points_cloud_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ 'pointcloud.enable': 'true'
+ }
+
+test_params_points_cloud_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'enable_color':'true',
+ 'enable_depth':'true',
+ 'depth_module.profile':'848x480x30',
+ 'pointcloud.enable': 'true'
+ }
+
+'''
+This test was ported from rs2_test.py. The original could be run with
+"python3 realsense2_camera/scripts/rs2_test.py points_cloud_1".
+'''
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_points_cloud_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_points_cloud_d435, marks=pytest.mark.d435),
+ pytest.param(test_params_points_cloud_d415, marks=pytest.mark.d415),
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_TestPointCloud(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_test_point_cloud(self,launch_descr_with_parameters):
+ self.params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(self.params['device_type']):
+            print("Device not found: " + self.params['device_type'])
+ return
+ themes = [
+ {
+ 'topic':get_node_heirarchy(self.params)+'/depth/color/points',
+ 'msg_type':msg_PointCloud2,
+ 'expected_data_chunks':5,
+ },
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+self.params['camera_name'])
+ self.wait_for_node(self.params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(self.params))
+ ret = self.run_test(themes, timeout=10)
+ assert ret[0], ret[1]
+ ret = self.process_data(themes, False)
+ assert ret[0], ret[1]
+ finally:
+ self.shutdown()
+ def process_data(self, themes, enable_infra1):
+ for count in range(self.node.get_num_chunks(get_node_heirarchy(self.params)+'/depth/color/points')):
+ data = self.node.pop_first_chunk(get_node_heirarchy(self.params)+'/depth/color/points')
+            print(data)  # the frame counter starts at zero, jumps to 2 and then continues; to be checked
+ return True,""
+
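The point cloud test above drives its subscriptions from "theme" dictionaries (topic, message type, expected chunk count). As a standalone sketch outside the diff, a helper like the hypothetical `make_theme` below shows the shape of those entries; the dict layout mirrors what `RsTestBaseClass.run_test` consumes in the tests, but the helper itself is not part of the patch.

```python
def make_theme(namespace, suffix, msg_type, chunks=1):
    # Build one 'theme' entry in the shape the tests consume.
    # make_theme is a hypothetical illustration, not part of the diff.
    return {'topic': namespace + suffix,
            'msg_type': msg_type,
            'expected_data_chunks': chunks}

theme = make_theme('/D455/D455', '/depth/color/points', 'PointCloud2', chunks=5)
print(theme['topic'])  # -> /D455/D455/depth/color/points
```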
diff --git a/realsense2_camera/test/live_camera/test_camera_tf_tests.py b/realsense2_camera/test/live_camera/test_camera_tf_tests.py
new file mode 100644
index 0000000000..a31f74e77c
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_camera_tf_tests.py
@@ -0,0 +1,219 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from rclpy.qos import DurabilityPolicy
+from rclpy.qos import HistoryPolicy
+from rclpy.qos import QoSProfile
+import tf2_ros
+
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+from tf2_msgs.msg import TFMessage as msg_TFMessage
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+
+import pytest_live_camera_utils
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+'''
+This test checks tf_static before and after infra1 is enabled. There should be no
+infra1-related transforms before it is enabled.
+'''
+test_params_tf_static_change_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'enable_infra1': 'false',
+ 'enable_infra2': 'true',
+ 'enable_accel': 'true',
+ 'enable_gyro': 'true',
+ }
+test_params_tf_static_change_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ 'enable_infra1': 'false',
+ 'enable_infra2': 'true',
+ 'enable_accel': 'true',
+ 'enable_gyro': 'true',
+ }
+
+test_params_tf_static_change_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'enable_infra1': 'false',
+ 'enable_infra2': 'true',
+ 'enable_accel': 'true',
+ 'enable_gyro': 'true',
+ }
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_tf_static_change_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_tf_static_change_d435, marks=pytest.mark.d435),
+ pytest.param(test_params_tf_static_change_d415, marks=pytest.mark.d415),
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_TestTF_Static_change(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_test_tf_static_change(self,launch_descr_with_parameters):
+ self.params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(self.params['device_type']):
+            print("Device not found: " + self.params['device_type'])
+ return
+ themes = [
+ {'topic':'/tf_static',
+ 'msg_type':msg_TFMessage,
+ 'expected_data_chunks':1,
+ 'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.TRANSIENT_LOCAL,history=HistoryPolicy.KEEP_LAST,)#static
+ }
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+self.params['camera_name'])
+ self.wait_for_node(self.params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(self.params))
+ ret = self.run_test(themes, timeout=10)
+ assert ret[0], ret[1]
+
+ ret = self.process_data(themes, False)
+ assert ret[0], ret[1]
+ assert self.set_bool_param('enable_infra1', True)
+
+ ret = self.run_test(themes, timeout=10)
+ assert ret[0], ret[1]
+ ret = self.process_data(themes, True)
+ assert ret[0], ret[1]
+ finally:
+ self.shutdown()
+ def process_data(self, themes, enable_infra1):
+ frame_ids = [self.params['camera_name']+'_link',
+ self.params['camera_name']+'_depth_frame',
+ self.params['camera_name']+'_infra2_frame',
+ self.params['camera_name']+'_color_frame']
+ if self.params['device_type'] == 'D455':
+ frame_ids.append(self.params['camera_name']+'_gyro_frame')
+ frame_ids.append(self.params['camera_name']+'_accel_frame')
+ frame_ids.append(self.params['camera_name']+'_infra1_frame')
+ data = self.node.pop_first_chunk('/tf_static')
+ coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
+ res = self.get_transform_data(data, coupled_frame_ids, is_static=True)
+ for couple in coupled_frame_ids:
+ if self.params['camera_name']+'_infra1_frame' in couple:
+                if enable_infra1 and res[couple] is not None:
+                    continue
+                if not enable_infra1 and res[couple] is None:
+                    continue
+                return False, str(couple) + ": tf_data not as expected"
+            if res[couple] is None:
+ return False, str(couple) + ": didn't get any tf data"
+ return True,""
+
+
+test_params_tf_d455 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'publish_tf': 'true',
+ 'tf_publish_rate': '1.1',
+ }
+
+test_params_tf_d435 = {
+ 'camera_name': 'D435',
+ 'device_type': 'D435',
+ 'publish_tf': 'true',
+ 'tf_publish_rate': '1.1',
+ }
+
+test_params_tf_d415 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ 'publish_tf': 'true',
+ 'tf_publish_rate': '1.1',
+ }
+@pytest.mark.parametrize("launch_descr_with_parameters", [
+ pytest.param(test_params_tf_d455, marks=pytest.mark.d455),
+ pytest.param(test_params_tf_d435, marks=pytest.mark.d435),
+ pytest.param(test_params_tf_d415, marks=pytest.mark.d415),
+ ],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestCamera_TestTF_DYN(pytest_rs_utils.RsTestBaseClass):
+ def test_camera_test_tf_dyn(self,launch_descr_with_parameters):
+ self.params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(self.params['device_type']):
+            print("Device not found: " + self.params['device_type'])
+ return
+ themes = [
+ {'topic':'/tf',
+ 'msg_type':msg_TFMessage,
+ 'expected_data_chunks':3,
+ 'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.VOLATILE,history=HistoryPolicy.KEEP_LAST,)
+ #'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.TRANSIENT_LOCAL,history=HistoryPolicy.KEEP_LAST,)#static
+ },
+ {'topic':'/tf_static',
+ 'msg_type':msg_TFMessage,
+ 'expected_data_chunks':1,
+ #'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.VOLATILE,history=HistoryPolicy.KEEP_LAST,)
+ 'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.TRANSIENT_LOCAL,history=HistoryPolicy.KEEP_LAST,)#static
+ }
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+self.params['camera_name'])
+ self.wait_for_node(self.params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(self.params))
+ ret = self.run_test(themes, timeout=10)
+ assert ret[0], ret[1]
+ ret = self.process_data(themes, False)
+ assert ret[0], ret[1]
+ finally:
+ self.shutdown()
+
+ def process_data(self, themes, enable_infra1):
+ frame_ids = [self.params['camera_name']+'_link',
+ self.params['camera_name']+'_depth_frame',
+ self.params['camera_name']+'_color_frame']
+ data = self.node.pop_first_chunk('/tf_static')
+ ret = self.check_transform_data(data, frame_ids, True)
+ assert ret[0], ret[1]
+ data = self.node.pop_first_chunk('/tf')
+ ret = self.check_transform_data(data, frame_ids)
+ assert ret[0], ret[1]
+ data = self.node.pop_first_chunk('/tf')
+ ret = self.check_transform_data(data, frame_ids)
+ assert ret[0], ret[1]
+ data = self.node.pop_first_chunk('/tf')
+ ret = self.check_transform_data(data, frame_ids)
+ assert ret[0], ret[1]
+ #return True, ""
+
+ return True, ""
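The static-TF test checks every pair of camera frames by expanding the frame list with `itertools.combinations(frame_ids, 2)`. The pairing logic can be exercised without ROS; the sketch below uses a hypothetical `coupled_frames` helper (not part of the diff) that assumes the same four base frames the test builds, plus optional extras such as the D455's gyro and accel frames.

```python
import itertools

def coupled_frames(camera_name, extra_frames=()):
    # Build the frame ids a camera publishes and return every unordered pair,
    # mirroring the itertools.combinations(frame_ids, 2) call in the test.
    frame_ids = [camera_name + suffix for suffix in
                 ('_link', '_depth_frame', '_infra2_frame', '_color_frame')]
    frame_ids += list(extra_frames)
    return list(itertools.combinations(frame_ids, 2))

pairs = coupled_frames('D455', extra_frames=('D455_gyro_frame', 'D455_accel_frame'))
# 6 frames give 6*5/2 = 15 unordered pairs
print(len(pairs))  # -> 15
```

The same counting argument explains why adding `_infra1_frame` after enabling infra1 grows the number of transform pairs the test must verify.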
diff --git a/realsense2_camera/test/live_camera/test_d415_basic_tests.py b/realsense2_camera/test/live_camera/test_d415_basic_tests.py
new file mode 100644
index 0000000000..649f84e7bd
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_d415_basic_tests.py
@@ -0,0 +1,155 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+from pytest_rs_utils import launch_descr_with_parameters
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+import pytest_live_camera_utils
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+test_params_depth_avg_1 = {
+ 'camera_name': 'D415',
+ 'device_type': 'D415',
+ }
+'''
+This test was implemented as a template for setting parameters and running the test.
+This directory is not added to CMakeLists.txt, to avoid colcon failures on machines
+that don't have a D415 connected.
+1. Only a subset of parameter types is implemented in py_rs_utils; it has to be extended for others.
+2. After setting a param, rclpy.spin_once may be needed. The test passes even without it, though.
+'''
+@pytest.mark.d415
+@pytest.mark.parametrize("launch_descr_with_parameters", [test_params_depth_avg_1],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestD415_Change_Resolution(pytest_rs_utils.RsTestBaseClass):
+ def test_D415_Change_Resolution(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(params['device_type']):
+            print("Device not found: " + params['device_type'])
+ return
+ failed_tests = []
+ num_passed = 0
+
+ num_failed = 0
+ themes = [{'topic':get_node_heirarchy(params)+'/color/image_raw', 'msg_type':msg_Image,'expected_data_chunks':1}]
+ config = pytest_live_camera_utils.get_profile_config(get_node_heirarchy(params))
+ config["Color"]["default_profile1"] = "640x480x6"
+ config["Color"]["default_profile2"] = "1280x720x6"
+ config["Depth"]["default_profile1"] = "640x480x6"
+ config["Depth"]["default_profile2"] = "1280x720x6"
+ config["Infrared"]["default_profile1"] = "640x480x6"
+ config["Infrared"]["default_profile2"] = "1280x720x6"
+ config["Infrared1"]["default_profile1"] = "640x480x6"
+ config["Infrared1"]["default_profile2"] = "1280x720x6"
+ config["Infrared2"]["default_profile1"] = "640x480x6"
+ config["Infrared2"]["default_profile2"] = "1280x720x6"
+
+ cap = [
+ #['Infrared1', '1920x1080x25', 'Y8'],
+ #['Infrared1', '1920x1080x15', 'Y16'],
+ ['Infrared', '848x100x100', 'BGRA8'],
+ ['Infrared', '848x480x60', 'RGBA8'],
+ ['Infrared', '640x480x60', 'RGBA8'],
+ ['Infrared', '640x480x60', 'BGR8'],
+ ['Infrared', '640x360x60', 'BGRA8'],
+ ['Infrared', '640x360x60', 'BGR8'],
+ ['Infrared', '640x360x30', 'UYVY'],
+ ['Infrared', '480x270x60', 'BGRA8'],
+ ['Infrared', '480x270x60', 'RGB8'],
+ ['Infrared', '424x240x60', 'BGRA8'],
+
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ self.init_test("RsTest"+params['camera_name'])
+ self.spin_for_time(wait_time=1.0)
+ self.create_param_ifs(get_node_heirarchy(params))
+
+ for key in cap:
+ profile_type = key[0]
+ profile = key[1]
+ format = key[2]
+ print("Testing " + " ".join(key))
+ themes[0]['topic'] = config[profile_type]['topic']
+ themes[0]['width'] = int(profile.split('x')[0])
+ themes[0]['height'] = int(profile.split('x')[1])
+ #'''
+ if themes[0]['width'] == int(config[profile_type]["default_profile2"].split('x')[0]):
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile1"])
+ else:
+ self.set_string_param(config[profile_type]["profile"], config[profile_type]["default_profile2"])
+ self.set_bool_param(config[profile_type]["param"], True)
+ self.disable_all_params()
+ #self.set_string_param("depth_profile", "640x480x6")
+ #self.set_bool_param("enable_depth", True)
+ #'''
+ self.spin_for_time(wait_time=0.2)
+ self.set_string_param(config[profile_type]["profile"], profile)
+ self.set_string_param(config[profile_type]["format"], format)
+ self.set_bool_param(config[profile_type]["param"], True)
+ try:
+ ret = self.run_test(themes, timeout=5.0)
+ assert ret[0], ret[1]
+ assert self.process_data(themes), " ".join(key) + " failed"
+ num_passed += 1
+ except Exception as e:
+ exc_type, exc_obj, exc_tb = sys.exc_info()
+ fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
+ print("Test failed")
+ print(e)
+ print(exc_type, fname, exc_tb.tb_lineno)
+ num_failed += 1
+ failed_tests.append(" ".join(key))
+ finally:
+ #this step is important because the test will fail next time
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+ if num_failed != 0:
+ print("Failed tests:")
+ print("\n".join(failed_tests))
+ assert False, " Tests failed"
+
+
+ def disable_all_params(self):
+ '''
+ self.set_bool_param('enable_color', False)
+ self.set_bool_param('enable_depth', False)
+ self.set_bool_param('enable_infra', False)
+ self.set_bool_param('enable_infra1', False)
+ self.set_bool_param('enable_infra2', False)
+ '''
+ pass
+
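The resolution tests above encode stream profiles as `WIDTHxHEIGHTxFPS` strings (e.g. `'848x480x30'` for `depth_module.profile`) and split them with `profile.split('x')`. A standalone sketch of that parsing, using a hypothetical `parse_profile` helper that is not part of the diff:

```python
def parse_profile(profile):
    # Split a 'WIDTHxHEIGHTxFPS' profile string, as used by the
    # depth_module.profile / rgb_camera.profile parameters, into integers.
    width, height, fps = (int(part) for part in profile.split('x'))
    return width, height, fps

print(parse_profile('848x480x30'))  # -> (848, 480, 30)
```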
diff --git a/realsense2_camera/test/live_camera/test_d455_basic_tests.py b/realsense2_camera/test/live_camera/test_d455_basic_tests.py
new file mode 100644
index 0000000000..fb2f58cab6
--- /dev/null
+++ b/realsense2_camera/test/live_camera/test_d455_basic_tests.py
@@ -0,0 +1,152 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import os
+import sys
+import itertools
+
+
+import pytest
+import rclpy
+
+from sensor_msgs.msg import Image as msg_Image
+from sensor_msgs.msg import Imu as msg_Imu
+from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+
+import numpy as np
+
+sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../utils"))
+import pytest_rs_utils
+import pytest_live_camera_utils
+from pytest_rs_utils import launch_descr_with_parameters
+
+from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
+
+from rclpy.parameter import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+
+test_params_depth_avg_1 = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ }
+'''
+This test was implemented as a template for setting parameters and running the test.
+This directory is not added to CMakeLists.txt, to avoid colcon failures on machines
+that don't have a D455 connected.
+1. Only a subset of parameter types is implemented in py_rs_utils; it has to be extended for others.
+2. After setting a param, rclpy.spin_once may be needed. The test passes even without it, though.
+'''
+@pytest.mark.d455
+@pytest.mark.parametrize("launch_descr_with_parameters", [test_params_depth_avg_1],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class TestD455_Change_Resolution(pytest_rs_utils.RsTestBaseClass):
+ def test_D455_Change_Resolution(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(params['device_type']):
+            print("Device not found: " + params['device_type'])
+ return
+
+ themes = [
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
+ 'msg_type':msg_Image,
+ 'expected_data_chunks':1,
+ 'width':640,
+ 'height':480,
+ #'data':data
+ }
+ ]
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ print("Starting camera test...")
+ self.init_test("RsTest"+params['camera_name'])
+ self.wait_for_node(params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+ #assert self.set_bool_param('enable_color', False)
+ assert self.set_string_param('rgb_camera.profile', '640x480x30')
+ assert self.set_bool_param('enable_color', True)
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+ self.set_string_param('rgb_camera.profile', '1280x800x5')
+ self.set_bool_param('enable_color', True)
+ themes[0]['width'] = 1280
+ themes[0]['height'] = 800
+
+ ret = self.run_test(themes)
+ assert ret[0], ret[1]
+ assert self.process_data(themes)
+ finally:
+ #this step is important because the test will fail next time
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+
+
+test_params_seq_id_update = {
+ 'camera_name': 'D455',
+ 'device_type': 'D455',
+ 'exposure_1' : 2000,
+ 'gain_1' : 20,
+ 'exposure_2' : 3000,
+ 'gain_2' : 30,
+ }
+'''
+This test sets the sequence ID param and validates the corresponding Exposure & Gain values.
+'''
+@pytest.mark.d455
+@pytest.mark.parametrize("launch_descr_with_parameters", [test_params_seq_id_update],indirect=True)
+@pytest.mark.launch(fixture=launch_descr_with_parameters)
+class Test_D455_Seq_ID_Update(pytest_rs_utils.RsTestBaseClass):
+ def test_D455_Seq_ID_update(self,launch_descr_with_parameters):
+ params = launch_descr_with_parameters[1]
+        if not pytest_live_camera_utils.check_if_camera_connected(params['device_type']):
+            print("Device not found: " + params['device_type'])
+ return
+
+ try:
+ '''
+ initialize, run and check the data
+ '''
+ print("Starting camera test...")
+ self.init_test("RsTest"+params['camera_name'])
+ self.wait_for_node(params['camera_name'])
+ self.create_param_ifs(get_node_heirarchy(params))
+
+ assert self.set_bool_param('depth_module.hdr_enabled', False)
+
+ assert self.set_integer_param('depth_module.sequence_id', 1)
+ assert self.set_integer_param('depth_module.exposure', params['exposure_1'])
+ assert self.set_integer_param('depth_module.gain', params['gain_1'])
+
+ assert self.set_integer_param('depth_module.sequence_id', 2)
+ assert self.set_integer_param('depth_module.exposure', params['exposure_2'])
+ assert self.set_integer_param('depth_module.gain', params['gain_2'])
+
+ assert self.set_integer_param('depth_module.sequence_id', 1)
+ assert self.get_integer_param('depth_module.exposure') == params['exposure_1']
+ assert self.get_integer_param('depth_module.gain') == params['gain_1']
+
+ assert self.set_integer_param('depth_module.sequence_id', 2)
+ assert self.get_integer_param('depth_module.exposure') == params['exposure_2']
+ assert self.get_integer_param('depth_module.gain') == params['gain_2']
+
+ finally:
+ #this step is important because the test will fail next time
+ pytest_rs_utils.kill_realsense2_camera_node()
+ self.shutdown()
+
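The sequence-ID test relies on the device keeping a separate exposure/gain pair per HDR `depth_module.sequence_id`: writing exposure or gain applies to the currently selected id, and re-selecting an id restores its stored values. A toy in-memory model of that behavior (the `SeqParamStore` class is a hypothetical illustration, not the driver's implementation):

```python
class SeqParamStore:
    # Toy model of per-sequence-id parameter storage, for illustration only.
    def __init__(self):
        self._store = {}
        self._seq = 1

    def set(self, name, value):
        if name == 'sequence_id':
            self._seq = value  # select the active sequence
        else:
            self._store.setdefault(self._seq, {})[name] = value

    def get(self, name):
        return self._store[self._seq][name]

s = SeqParamStore()
s.set('sequence_id', 1); s.set('exposure', 2000); s.set('gain', 20)
s.set('sequence_id', 2); s.set('exposure', 3000); s.set('gain', 30)
s.set('sequence_id', 1)
print(s.get('exposure'), s.get('gain'))  # -> 2000 20
```

This mirrors the assertion pattern in the test: set values under each sequence id, switch back, and verify the stored values survive the switch.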
diff --git a/realsense2_camera/test/post_processing_filters/test_align_depth.py b/realsense2_camera/test/post_processing_filters/test_align_depth.py
index 2a5b4b1017..8ac3c23e2b 100644
--- a/realsense2_camera/test/post_processing_filters/test_align_depth.py
+++ b/realsense2_camera/test/post_processing_filters/test_align_depth.py
@@ -32,7 +32,7 @@
import pytest_rs_utils
from pytest_rs_utils import launch_descr_with_yaml
from pytest_rs_utils import get_rosbag_file_path
-
+from pytest_rs_utils import get_node_heirarchy
'''
This test imitates the ros2 launch rs_launch.py realsense2_camera with the given parameters below
@@ -66,11 +66,11 @@ class TestBasicAlignDepthEnable(pytest_rs_utils.RsTestBaseClass):
def test_align_depth_on(self, launch_descr_with_yaml):
params = launch_descr_with_yaml[1]
themes = [
- {'topic': '/'+params['camera_name']+'/color/image_raw', 'msg_type': msg_Image,
+ {'topic': get_node_heirarchy(params)+'/color/image_raw', 'msg_type': msg_Image,
'expected_data_chunks': 1, 'width': 640, 'height': 480},
- {'topic': '/'+params['camera_name']+'/depth/image_rect_raw', 'msg_type': msg_Image,
+ {'topic': get_node_heirarchy(params)+'/depth/image_rect_raw', 'msg_type': msg_Image,
'expected_data_chunks': 1, 'width': 1280, 'height': 720},
- {'topic': '/'+params['camera_name']+'/aligned_depth_to_color/image_raw', 'msg_type': msg_Image,
+ {'topic': get_node_heirarchy(params)+'/aligned_depth_to_color/image_raw', 'msg_type': msg_Image,
'expected_data_chunks': 1, 'width': 640, 'height': 480}
]
try:
diff --git a/realsense2_camera/test/rosbag/test_rosbag_all_topics_test.py b/realsense2_camera/test/rosbag/test_rosbag_all_topics_test.py
index 98a63008cc..9cda836f3a 100644
--- a/realsense2_camera/test/rosbag/test_rosbag_all_topics_test.py
+++ b/realsense2_camera/test/rosbag/test_rosbag_all_topics_test.py
@@ -39,25 +39,27 @@
from pytest_rs_utils import delayed_launch_descr_with_parameters
from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
test_params_all_topics = {"rosbag_filename":get_rosbag_file_path("outdoors_1color.bag"),
'camera_name': 'AllTopics',
- 'color_width': '0',
- 'color_height': '0',
- 'depth_width': '0',
- 'depth_height': '0',
- 'infra_width': '0',
- 'infra_height': '0',
'enable_infra1':'true',
'enable_infra2':'true',
- #'align_depth.enable':'true',
+ 'align_depth.enable':'true',
}
'''
To test all topics published
'''
-@pytest.mark.skip
+'''
+The test doesn't work in CI due to synchronization: unlike the live camera, the rosbag node
+publishes the extrinsics only once, not every time a subscription is created. In CI, with its
+limited resources, the ros node can start much earlier than the testcase, so the test misses
+the published data.
+'''
@pytest.mark.rosbag
+@pytest.mark.skipif(os.getenv('RS_ROS_REGRESSION', "not found") == "not found", reason="The test doesn't work in CI")
@pytest.mark.parametrize("delayed_launch_descr_with_parameters", [test_params_all_topics],indirect=True)
@pytest.mark.launch(fixture=delayed_launch_descr_with_parameters)
class TestAllTopics(pytest_rs_utils.RsTestBaseClass):
@@ -77,18 +79,18 @@ def test_all_topics(self,delayed_launch_descr_with_parameters):
data = pytest_rs_utils.ImageColorGetData(params["rosbag_filename"])
themes = [
{
- 'topic':'/'+params['camera_name']+'/extrinsics/depth_to_color',
+ 'topic':get_node_heirarchy(params)+'/extrinsics/depth_to_color',
'msg_type':msg_Extrinsics,
'expected_data_chunks':1,
'data':depth_to_color_extrinsics_data
},
{
- 'topic':'/'+params['camera_name']+'/extrinsics/depth_to_infra1',
+ 'topic':get_node_heirarchy(params)+'/extrinsics/depth_to_infra1',
'msg_type':msg_Extrinsics,
'expected_data_chunks':1,
'data':depth_to_infra_extrinsics_data
},
- {'topic':'/'+params['camera_name']+'/color/image_raw',
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
'data':data
@@ -145,19 +147,19 @@ def test_metadata_topics(self,delayed_launch_descr_with_parameters):
themes = [
{
- 'topic':'/'+params['camera_name']+'/color/metadata',
+ 'topic':get_node_heirarchy(params)+'/color/metadata',
'msg_type':msg_Metadata,
'expected_data_chunks':1,
#'data':color_metadata
},
{
- 'topic':'/'+params['camera_name']+'/depth/metadata',
+ 'topic':get_node_heirarchy(params)+'/depth/metadata',
'msg_type':msg_Metadata,
'expected_data_chunks':1,
#'data':depth_metadata
},
{
- 'topic':'/'+params['camera_name']+'/infra1/metadata',
+ 'topic':get_node_heirarchy(params)+'/infra1/metadata',
'msg_type':msg_Metadata,
'expected_data_chunks':1,
#'data':infra1_metadata
@@ -260,19 +262,19 @@ def test_camera_info_topics(self,delayed_launch_descr_with_parameters):
themes = [
{
- 'topic':'/'+params['camera_name']+'/color/camera_info',
+ 'topic':get_node_heirarchy(params)+'/color/camera_info',
'msg_type':CameraInfo,
'expected_data_chunks':1,
'data':color_data
},
{
- 'topic':'/'+params['camera_name']+'/depth/camera_info',
+ 'topic':get_node_heirarchy(params)+'/depth/camera_info',
'msg_type':CameraInfo,
'expected_data_chunks':1,
'data':depth_data
},
{
- 'topic':'/'+params['camera_name']+'/infra1/camera_info',
+ 'topic':get_node_heirarchy(params)+'/infra1/camera_info',
'msg_type':CameraInfo,
'expected_data_chunks':1,
'data':infra1_data
diff --git a/realsense2_camera/test/rosbag/test_rosbag_basic_tests.py b/realsense2_camera/test/rosbag/test_rosbag_basic_tests.py
index dfcad8f5fc..614936a973 100644
--- a/realsense2_camera/test/rosbag/test_rosbag_basic_tests.py
+++ b/realsense2_camera/test/rosbag/test_rosbag_basic_tests.py
@@ -32,6 +32,7 @@
from pytest_rs_utils import launch_descr_with_parameters
from pytest_rs_utils import delayed_launch_descr_with_parameters
from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
test_params = {"rosbag_filename":get_rosbag_file_path("outdoors_1color.bag"),
'camera_name': 'Vis2_Cam',
@@ -54,7 +55,7 @@ def test_vis_2(self,delayed_launch_descr_with_parameters):
params = delayed_launch_descr_with_parameters[1]
data = pytest_rs_utils.ImageColorGetData(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/color/image_raw',
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
'data':data
@@ -96,7 +97,7 @@ def test_depth_w_cloud_1(self,launch_descr_with_parameters):
params = launch_descr_with_parameters[1]
data = pytest_rs_utils.ImageDepthGetData(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw',
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
'data':data
@@ -137,7 +138,7 @@ def test_depth_avg_1(self,launch_descr_with_parameters):
params = launch_descr_with_parameters[1]
data = pytest_rs_utils.ImageDepthGetData(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw',
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
'data':data
diff --git a/realsense2_camera/test/rosbag/test_rosbag_dec_point_tests.py b/realsense2_camera/test/rosbag/test_rosbag_dec_point_tests.py
index 39a366f776..688ca6c8d8 100644
--- a/realsense2_camera/test/rosbag/test_rosbag_dec_point_tests.py
+++ b/realsense2_camera/test/rosbag/test_rosbag_dec_point_tests.py
@@ -32,6 +32,7 @@
from pytest_rs_utils import launch_descr_with_parameters
from pytest_rs_utils import delayed_launch_descr_with_parameters
from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
test_params_depth_avg_decimation_1 = {"rosbag_filename":get_rosbag_file_path("outdoors_1color.bag"),
@@ -56,7 +57,7 @@ def test_depth_avg_decimation_1(self,launch_descr_with_parameters):
params = launch_descr_with_parameters[1]
data = pytest_rs_utils.ImageDepthGetData_decimation(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw',
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
'data':data
@@ -97,7 +98,7 @@ def test_depth_avg_1(self,launch_descr_with_parameters):
params = launch_descr_with_parameters[1]
data = pytest_rs_utils.ImageDepthGetData(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw',
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
'data':data
@@ -139,7 +140,7 @@ def test_depth_avg_decimation_1(self,launch_descr_with_parameters):
params = launch_descr_with_parameters[1]
data = pytest_rs_utils.ImageDepthGetData_decimation(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw',
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
'data':data
@@ -181,7 +182,7 @@ def test_points_cloud_1(self,delayed_launch_descr_with_parameters):
params = delayed_launch_descr_with_parameters[1]
self.rosbag = params["rosbag_filename"]
themes = [
- {'topic':'/'+params['camera_name']+'/depth/color/points',
+ {'topic':get_node_heirarchy(params)+'/depth/color/points',
'msg_type':msg_PointCloud2,
'expected_data_chunks':1,
#'data':data
diff --git a/realsense2_camera/test/rosbag/test_rosbag_depth_tests.py b/realsense2_camera/test/rosbag/test_rosbag_depth_tests.py
index 80b9c334d8..51d574da85 100644
--- a/realsense2_camera/test/rosbag/test_rosbag_depth_tests.py
+++ b/realsense2_camera/test/rosbag/test_rosbag_depth_tests.py
@@ -20,10 +20,14 @@
import pytest
import rclpy
+from rclpy.qos import DurabilityPolicy
+from rclpy.qos import HistoryPolicy
+from rclpy.qos import QoSProfile
from sensor_msgs.msg import Image as msg_Image
from sensor_msgs.msg import Imu as msg_Imu
from sensor_msgs.msg import PointCloud2 as msg_PointCloud2
+from tf2_msgs.msg import TFMessage as msg_TFMessage
import numpy as np
@@ -32,6 +36,7 @@
from pytest_rs_utils import launch_descr_with_parameters
from pytest_rs_utils import delayed_launch_descr_with_parameters
from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
test_params_depth_points_cloud_1 = {"rosbag_filename":get_rosbag_file_path("outdoors_1color.bag"),
@@ -72,17 +77,11 @@ def test_depth_points_cloud_1(self,delayed_launch_descr_with_parameters):
'avg': [np.array([ 1.28251814, -0.15839984, 4.82235184, 80, 160, 240])],
'epsilon': [0.04, 5]}
themes = [
- {'topic':'/'+params['camera_name']+'/depth/color/points',
+ {'topic':get_node_heirarchy(params)+'/depth/color/points',
'msg_type':msg_PointCloud2,
'expected_data_chunks':1,
'data':data1
},
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw',
- 'msg_type':msg_Image,
- 'expected_data_chunks':1,
- 'data':data2
- }
-
]
try:
'''
@@ -99,12 +98,10 @@ def process_data(self, themes):
test_params_static_tf_1 = {"rosbag_filename":get_rosbag_file_path("outdoors_1color.bag"),
- 'camera_name': 'Static_tf_1',
+ 'camera_name': 'Static_tf1',
'color_width': '0',
'color_height': '0',
- 'depth_width': '0',
- 'depth_height': '0',
- 'infra_width': '0',
+ "static_tf":True,
'infra_height': '0',
'enable_infra1':'true',
'enable_infra2':'true'
@@ -118,43 +115,45 @@ def process_data(self, themes):
@pytest.mark.launch(fixture=delayed_launch_descr_with_parameters)
class TestStaticTf1(pytest_rs_utils.RsTestBaseClass):
def test_static_tf_1(self,delayed_launch_descr_with_parameters):
- params = delayed_launch_descr_with_parameters[1]
- self.rosbag = params["rosbag_filename"]
- data = {('camera_link', 'camera_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702]),
- ('camera_link', 'camera_depth_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
- ('camera_link', 'camera_infra1_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
- ('camera_depth_frame', 'camera_infra1_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
- ('camera_depth_frame', 'camera_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702]),
- ('camera_infra1_frame', 'camera_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702])}
+ self.params = delayed_launch_descr_with_parameters[1]
+ self.rosbag = self.params["rosbag_filename"]
themes = [
- {'topic':'/'+params['camera_name']+'/color/image_raw',
+ {'topic':get_node_heirarchy(self.params)+'/color/image_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
- 'data':data,
+ 'static_tf':True,
+ },
+ {'topic':'/tf_static',
+ 'msg_type':msg_TFMessage,
+ 'expected_data_chunks':1,
+ 'qos' : QoSProfile(depth=100,durability=DurabilityPolicy.TRANSIENT_LOCAL,history=HistoryPolicy.KEEP_LAST,)#static
}
]
try:
'''
initialize, run and check the data
'''
- self.init_test("RsTest"+params['camera_name'])
+ self.init_test("RsTest"+self.params['camera_name'])
ret = self.run_test(themes)
assert ret[0], ret[1]
assert self.process_data(themes)
finally:
self.shutdown()
def process_data(self, themes):
- #print ('Gathering static transforms')
- frame_ids = ['camera_link', 'camera_depth_frame', 'camera_infra1_frame', 'camera_infra2_frame', 'camera_color_frame', 'camera_fisheye_frame', 'camera_pose']
+ expected_data = {(self.params['camera_name']+'_link', self.params['camera_name']+'_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702]),
+ (self.params['camera_name']+'_link', self.params['camera_name']+'_depth_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
+ (self.params['camera_name']+'_link', self.params['camera_name']+'_infra1_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
+ (self.params['camera_name']+'_depth_frame', self.params['camera_name']+'_infra1_frame'): ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]),
+ (self.params['camera_name']+'_depth_frame', self.params['camera_name']+'_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702]),
+ (self.params['camera_name']+'_infra1_frame', self.params['camera_name']+'_color_frame'): ([-0.00010158783697988838, 0.014841210097074509, -0.00022671300393994898], [-0.0008337442995980382, 0.0010442184284329414, -0.0009920650627464056, 0.9999986290931702])}
+ frame_ids = [self.params['camera_name']+'_link', self.params['camera_name']+'_depth_frame', self.params['camera_name']+'_infra1_frame', self.params['camera_name']+'_infra2_frame', self.params['camera_name']+'_color_frame', self.params['camera_name']+'_fisheye_frame', self.params['camera_name']+'_pose']
+ coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
+ data = self.node.pop_first_chunk('/tf_static')
coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
- res = {}
- for couple in coupled_frame_ids:
- from_id, to_id = couple
- if (self.node.tfBuffer.can_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=3e6))):
- res[couple] = self.node.tfBuffer.lookup_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=1e6)).transform
- else:
- res[couple] = None
- return pytest_rs_utils.staticTFTest(res, themes[0]["data"])
+ tfs_data = self.get_transform_data(data, coupled_frame_ids, is_static=True)
+ ret = pytest_rs_utils.staticTFTest(tfs_data, expected_data)
+ assert ret[0], ret[1]
+ return ret[0]
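The static-TF check above validates every pair of frames, and the pair generation itself is plain `itertools.combinations`. A minimal stand-alone sketch, with a hypothetical camera name and a reduced frame list (the real test derives the frames from `params['camera_name']`):

```python
import itertools

# Hypothetical camera name; the test builds frame ids from params['camera_name']
camera = "Static_tf1"
frame_ids = [camera + s for s in ("_link", "_depth_frame", "_infra1_frame", "_color_frame")]

# Unordered pairs of frames whose transform should be resolvable from /tf_static
coupled_frame_ids = list(itertools.combinations(frame_ids, 2))
for from_id, to_id in coupled_frame_ids:
    print(from_id, "->", to_id)
```

For n frames this yields n*(n-1)/2 pairs, so missing links in the TF tree surface as missing pairs rather than silent gaps.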
test_params_non_existing_rosbag = {"rosbag_filename":"non_existent.bag",
@@ -204,7 +203,7 @@ def test_align_depth_color_1(self,delayed_launch_descr_with_parameters):
params = delayed_launch_descr_with_parameters[1]
data = pytest_rs_utils.ImageDepthInColorShapeGetData(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/aligned_depth_to_color/image_raw',
+ {'topic':get_node_heirarchy(params)+'/aligned_depth_to_color/image_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
'data':data
@@ -260,7 +259,7 @@ def test_align_depth_infra_1(self,delayed_launch_descr_with_parameters):
self.rosbag = params["rosbag_filename"]
#data = pytest_rs_utils.ImageDepthInColorShapeGetData(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/aligned_depth_to_infra1/image_raw',
+ {'topic':get_node_heirarchy(params)+'/aligned_depth_to_infra1/image_raw',
'msg_type':msg_Image,
'expected_data_chunks':1,
#'data':data
diff --git a/realsense2_camera/test/rosbag/test_rosbag_imu_test.py b/realsense2_camera/test/rosbag/test_rosbag_imu_test.py
index 3d74f5e0e5..4beb7fe5a7 100644
--- a/realsense2_camera/test/rosbag/test_rosbag_imu_test.py
+++ b/realsense2_camera/test/rosbag/test_rosbag_imu_test.py
@@ -38,6 +38,7 @@
from pytest_rs_utils import delayed_launch_descr_with_parameters
from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
test_params_accel = {"rosbag_filename":get_rosbag_file_path("D435i_Depth_and_IMU_Stands_still.bag"),
@@ -63,7 +64,7 @@ def test_accel_up_1(self,delayed_launch_descr_with_parameters):
params = delayed_launch_descr_with_parameters[1]
data = pytest_rs_utils.AccelGetDataDeviceStandStraight(params["rosbag_filename"])
themes = [
- {'topic':'/'+params['camera_name']+'/accel/sample',
+ {'topic':get_node_heirarchy(params)+'/accel/sample',
'msg_type':msg_Imu,
'expected_data_chunks':1,
'data':data
@@ -107,19 +108,19 @@ def test_imu_topics(self,delayed_launch_descr_with_parameters):
params = delayed_launch_descr_with_parameters[1]
self.rosbag = params["rosbag_filename"]
themes = [{
- 'topic':'/'+params['camera_name']+'/imu',
+ 'topic':get_node_heirarchy(params)+'/imu',
'msg_type':msg_Imu,
'expected_data_chunks':1,
#'data':depth_to_color_data
},
{
- 'topic':'/'+params['camera_name']+'/gyro/sample',
+ 'topic':get_node_heirarchy(params)+'/gyro/sample',
'msg_type':msg_Imu,
'expected_data_chunks':1,
#'data':depth_to_color_data
},
{
- 'topic':'/'+params['camera_name']+'/accel/sample',
+ 'topic':get_node_heirarchy(params)+'/accel/sample',
'msg_type':msg_Imu,
'expected_data_chunks':1,
#'data':depth_to_color_data
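Most of the topic-name changes in this patch swap a hand-built `'/'+params['camera_name']` prefix for `get_node_heirarchy(params)`, which also folds in the namespace. A minimal sketch of that helper with hypothetical parameter values:

```python
# Sketch of the helper added to pytest_rs_utils.py in this patch;
# 'heirarchy' matches the actual identifier's spelling.
def get_node_heirarchy(params):
    return "/" + params['camera_namespace'] + "/" + params['camera_name']

# Hypothetical values; the real tests fill these in from the launch parameters
params = {'camera_namespace': 'camera_ns', 'camera_name': 'camera_with_params'}
print(get_node_heirarchy(params) + '/accel/sample')
```

Centralizing the prefix means a later change to the namespace scheme touches one helper instead of every theme dictionary.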
diff --git a/realsense2_camera/test/templates/test_integration_template.py b/realsense2_camera/test/templates/test_integration_template.py
index 3a40959608..73eb42e3b6 100644
--- a/realsense2_camera/test/templates/test_integration_template.py
+++ b/realsense2_camera/test/templates/test_integration_template.py
@@ -32,6 +32,7 @@
import pytest_rs_utils
from pytest_rs_utils import launch_descr_with_yaml
from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
'''
This is a testcase similar to the integration_fn testcase, the only difference is that
@@ -57,7 +58,7 @@ def test_using_function(launch_context,launch_descr_with_yaml):
# by now, the camera would have started
start = time.time()
timeout = 4.0
- camera_name = '/'+params['camera_name']+'/'+params['camera_name']
+ camera_name = get_node_heirarchy(params)+'/'+params['camera_name']
while (time.time() - start) < timeout:
service_list = subprocess.check_output(['ros2', 'node', 'list']).decode("utf-8")
is_node_up = camera_name in service_list
@@ -104,7 +105,7 @@ def test_camera_1(self, launch_descr_with_yaml):
params = launch_descr_with_yaml[1]
themes = [
#{'topic':'/camera/color/image_raw','msg_type':msg_Image,'expected_data_chunks':1},
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw','msg_type':msg_Image,'expected_data_chunks':1}
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw','msg_type':msg_Image,'expected_data_chunks':1}
]
try:
'''
@@ -134,14 +135,14 @@ class TestCamera2(pytest_rs_utils.RsTestBaseClass):
def test_camera_2(self,launch_descr_with_yaml):
params = launch_descr_with_yaml[1]
themes = [
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw',
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
'msg_type':msg_Image,
'store_raw_data':True,
'expected_data_chunks':1,
'frame_id':params['camera_name']+'_depth_optical_frame',
'height':720,
'width':1280},
- {'topic':'/'+params['camera_name']+'/color/image_raw',
+ {'topic':get_node_heirarchy(params)+'/color/image_raw',
'msg_type':msg_Image,
'store_raw_data':True,
'expected_data_chunks':1,
diff --git a/realsense2_camera/test/templates/test_parameterized_template.py b/realsense2_camera/test/templates/test_parameterized_template.py
index 4875038bea..a316caa1dd 100644
--- a/realsense2_camera/test/templates/test_parameterized_template.py
+++ b/realsense2_camera/test/templates/test_parameterized_template.py
@@ -32,6 +32,7 @@
import pytest_rs_utils
from pytest_rs_utils import launch_descr_with_parameters
from pytest_rs_utils import get_rosbag_file_path
+from pytest_rs_utils import get_node_heirarchy
@@ -61,7 +62,7 @@ class TestCamera2(pytest_rs_utils.RsTestBaseClass):
def test_node_start(self, launch_descr_with_parameters):
params = launch_descr_with_parameters[1]
themes = [
- {'topic':'/'+params['camera_name']+'/depth/image_rect_raw',
+ {'topic':get_node_heirarchy(params)+'/depth/image_rect_raw',
'msg_type':msg_Image,
'store_raw_data':True,
'expected_data_chunks':1,
diff --git a/realsense2_camera/test/utils/pytest_live_camera_utils.py b/realsense2_camera/test/utils/pytest_live_camera_utils.py
new file mode 100644
index 0000000000..9b5bfeac70
--- /dev/null
+++ b/realsense2_camera/test/utils/pytest_live_camera_utils.py
@@ -0,0 +1,207 @@
+# Copyright 2023 Intel Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+import time
+import ctypes
+import struct
+import requests
+import json
+
+from pytest_rs_utils import debug_print
+
+
+def get_profile_config(camera_name):
+ config = {
+ "Color":{"profile":"rgb_camera.profile", "format":'rgb_camera.color_format', "param":"enable_color", "topic":camera_name+'/color/image_raw',},
+ "Depth":{"profile":"depth_module.profile", "format":'depth_module.depth_format', "param":"enable_depth", 'topic':camera_name+'/depth/image_rect_raw'},
+ "Infrared":{"profile":"depth_module.profile", "format":'depth_module.infra_format', "param":"enable_infra", 'topic':camera_name+'/infra/image_rect_raw'},
+ "Infrared1":{"profile":"depth_module.profile", "format":'depth_module.infra1_format',"param":"enable_infra1", 'topic':camera_name+'/infra/image_rect_raw'},
+ "Infrared2":{"profile":"depth_module.profile", "format":'depth_module.infra2_format',"param":"enable_infra2", 'topic':camera_name+'/infra/image_rect_raw'},
+ }
+ return config
+
+
+def get_default_profiles(cap, profile):
+ profile1 = ""
+ profile2 = ""
+ for profiles in cap:
+ if profiles[0] == profile and int(profiles[1].split('x')[0]) != 640:
+ profile1 = profiles[1]
+ break
+ for profiles in cap:
+ if profiles[0] == profile and int(profiles[1].split('x')[0]) != int(profile1.split('x')[0]):
+ profile2 = profiles[1]
+ break
+ debug_print(profile + " default profile1:" + profile1)
+ debug_print(profile + " default profile2:" + profile2)
+ return profile1,profile2
+
+
+
+device_info_string = "Device info:"
+
+def get_device_info_location(long_data, index=0):
+ for line_no in range(index, len(long_data)):
+ if device_info_string in long_data[line_no]:
+ return line_no
+ return len(long_data)
+
+stream_profile_string = "Stream Profiles supported by"
+def get_stream_profile_location(long_data, start_index, end_index, profile_string):
+ for line_no in range(start_index, end_index):
+ if stream_profile_string in long_data[line_no]:
+ if profile_string in long_data[line_no]:
+ return line_no
+ return None
+
+def get_depth_profiles(long_data, start_index, end_index):
+ cap = []
+ for line_no in range(start_index, end_index):
+ if len(long_data[line_no]) == 0:
+ break
+ debug_print("depth profile processing:" + long_data[line_no])
+ depth_profile = long_data[line_no].split()
+ if len(depth_profile) == 5:
+ profile = depth_profile[0]
+ value = depth_profile[1]+"x"+depth_profile[3]
+ format = depth_profile[4]
+ elif len(depth_profile) == 6:
+ profile = depth_profile[0]+depth_profile[1]
+ value = depth_profile[2]+"x"+depth_profile[4]
+ format = depth_profile[5]
+ else:
+ assert False, "Seems that the depth profile info format printed by rs-enumerate-devices changed"
+ value = value[:-2]
+ debug_print("depth profile added: " + profile, value, format)
+ cap.append([profile, value, format])
+ debug_print(cap)
+ return cap
+
+
+def get_color_profiles(long_data, start_index, end_index):
+ cap = []
+ for line_no in range(start_index, end_index):
+ if len(long_data[line_no]) == 0:
+ break
+ debug_print("color profile processing:" + long_data[line_no])
+ color_profile = long_data[line_no].split()
+ if len(color_profile) == 5:
+ profile = color_profile[0]
+ value = color_profile[1]+"x"+color_profile[3]
+ format = color_profile[4]
+ else:
+ assert False, "Seems that the color profile info format printed by rs-enumerate-devices changed"
+ value = value[:-2]
+ debug_print("color profile added: " + profile, value, format)
+ cap.append([profile, value, format])
+ debug_print(cap)
+ return cap
+
+NAME_LINE_INDEX = 1
+NAME_LINE_NAME_OFFSET = 4
+SERIAL_NO_LINE_INDEX = 2
+SERIAL_NO_VALUE_OFFSET = 3
+def parse_device_info(long_data, start_index, end_index, device_type, serial_no):
+ #after device_info, the next line should have the name and device type
+ capability = {}
+ debug_print("Searching for data between lines ", str(start_index) + " and " + str(end_index))
+ name_line = long_data[start_index+NAME_LINE_INDEX].split()
+ if name_line[0] != "Name":
+ assert False, "rs-enumerate-devices output format changed"
+ if name_line[NAME_LINE_NAME_OFFSET] != device_type:
+ debug_print("device not matching:" + name_line[NAME_LINE_NAME_OFFSET])
+ return None
+ debug_print("device matched:" + name_line[NAME_LINE_NAME_OFFSET])
+ if serial_no != None:
+ #next line after nameline should have the serial_no
+ serial_no_line = long_data[start_index+SERIAL_NO_LINE_INDEX].split()
+ if serial_no_line[0] != "Serial":
+ assert False, "rs-enumerate-devices output format changed"
+ if serial_no_line[SERIAL_NO_VALUE_OFFSET] != serial_no:
+ debug_print("serial_no not matching:" + serial_no_line[SERIAL_NO_VALUE_OFFSET])
+ return None
+ debug_print("serial_no matched:" + serial_no_line[SERIAL_NO_VALUE_OFFSET])
+ else:
+ serial_no = long_data[start_index+SERIAL_NO_LINE_INDEX].split()[SERIAL_NO_VALUE_OFFSET]
+
+ capability["device_type"] = device_type
+ capability["serial_no"] = serial_no
+ depth_profile_index = get_stream_profile_location(long_data, start_index, end_index, "Stereo Module")
+ if depth_profile_index != None:
+ capability["depth_profile"] = get_depth_profiles(long_data, depth_profile_index+3, end_index)
+ rgb_profile_index = get_stream_profile_location(long_data, start_index, end_index, "RGB Camera")
+ if rgb_profile_index != None:
+ capability["color_profile"] = get_color_profiles(long_data, rgb_profile_index+3, end_index)
+ return capability
+
+def get_camera_capabilities(device_type, serial_no=None):
+ long_data = os.popen("rs-enumerate-devices").read().splitlines()
+ debug_print(serial_no)
+ index = 0
+ while index < len(long_data):
+ index = get_device_info_location(long_data, index)
+ if index == len(long_data):
+ return
+ else:
+ debug_print("DeviceInfo found at: " + str(index))
+ start_index = index
+ index += 1
+ index = get_device_info_location(long_data, index)
+ capability = parse_device_info(long_data, start_index, index, device_type, serial_no)
+ if capability != None:
+ return capability
+ return None
+
+def get_camera_capabilities_short(device_type, serial_no=None):
+ short_data = os.popen("rs-enumerate-devices -s").read().splitlines()
+ print(serial_no)
+ for line in short_data:
+ print(line)
+ if device_type in line:
+ if serial_no is None or serial_no == "" :
+ print(device_type+ " found in " + line)
+ return
+ if serial_no in line:
+ print(device_type + " with serial_no " + serial_no +" found in " + line)
+ return
+ print(device_type + " not found")
+
+def check_if_camera_connected(device_type, serial_no=None):
+ long_data = os.popen("rs-enumerate-devices -s").read().splitlines()
+ debug_print(serial_no)
+ index = 0
+ for index in range(len(long_data)):
+ name_line = long_data[index].split()
+ if name_line[0] != "Intel":
+ continue
+ if name_line[2] != device_type:
+ continue
+ if serial_no == None:
+ return True
+ if serial_no == name_line[3]:
+ return True
+
+ return False
+
+if __name__ == '__main__':
+ device_type = 'D455'
+ serial_no = None
+ if len(sys.argv) > 1:
+ device_type = sys.argv[1]
+ if len(sys.argv) > 2:
+ serial_no = sys.argv[2]
+ cap = get_camera_capabilities(device_type, serial_no)
+ print("Capabilities:")
+ print(cap)
\ No newline at end of file
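The five-token branch of `get_depth_profiles` above can be traced on one illustrative `rs-enumerate-devices` line (the sample line is an assumption, not captured from hardware):

```python
# Illustrative line in the five-token layout handled by get_depth_profiles
line = "Depth 1280x720 @ 30Hz Z16"
tokens = line.split()
assert len(tokens) == 5

profile = tokens[0]                  # stream name, e.g. "Depth"
value = tokens[1] + "x" + tokens[3]  # "1280x720" + "x" + "30Hz"
fmt = tokens[4]                      # pixel format, e.g. "Z16"
value = value[:-2]                   # strip the trailing "Hz" from the rate
print([profile, value, fmt])
```

The six-token branch differs only in that the stream name spans two tokens (e.g. `Infrared 1`), shifting every index right by one.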
diff --git a/realsense2_camera/test/utils/pytest_rs_utils.py b/realsense2_camera/test/utils/pytest_rs_utils.py
index 0fd4563381..0a0069115f 100644
--- a/realsense2_camera/test/utils/pytest_rs_utils.py
+++ b/realsense2_camera/test/utils/pytest_rs_utils.py
@@ -15,8 +15,13 @@
import sys
import time
from collections import deque
+import functools
+import itertools
+import subprocess
+
import pytest
+
import numpy as np
from launch import LaunchDescription
@@ -28,10 +33,31 @@
from rclpy import qos
from rclpy.node import Node
from rclpy.utilities import ok
+from ros2topic.verb.bw import ROSTopicBandwidth
+from ros2topic.verb.hz import ROSTopicHz
+from ros2topic.api import get_msg_class
import ctypes
import struct
import requests
+import math
+
+#from rclpy.parameter import Parameter
+from rcl_interfaces.msg import Parameter
+from rcl_interfaces.msg import ParameterValue
+from rcl_interfaces.srv import SetParameters, GetParameters, ListParameters
+'''
+humble doesn't have the SetParametersResult and SetParameters_Response imported using
+__init__.py. The below two lines can be used for iron and hopefully succeeding ROS2 versions
+'''
+#from rcl_interfaces.msg import SetParametersResult
+#from rcl_interfaces.srv import SetParameters_Response
+from rcl_interfaces.msg._set_parameters_result import SetParametersResult
+from rcl_interfaces.srv._set_parameters import SetParameters_Response
+
+from rcl_interfaces.msg import ParameterType
+from rcl_interfaces.msg import ParameterValue
+
from sensor_msgs.msg import Image as msg_Image
@@ -54,6 +80,11 @@
import tempfile
import os
import requests
+
+def debug_print(*args):
+ if(False):
+ print(*args)
+
class RosbagManager(object):
def __new__(cls):
if not hasattr(cls, 'instance'):
@@ -82,12 +113,13 @@ def init(self):
def get_rosbag_path(self, filename):
if filename in self.rosbag_files:
return self.rosbag_location + "/" + filename
-rosbagMgr = RosbagManager()
+
def get_rosbag_file_path(filename):
+ rosbagMgr = RosbagManager()
path = rosbagMgr.get_rosbag_path(filename)
assert path, "No rosbag file found :"+filename
return path
-
+get_rosbag_file_path.rosbagMgr = None
def CameraInfoGetData(rec_filename, topic):
data = importRosbag(rec_filename, importTopics=[topic], log='ERROR', disable_bar=True)[topic]
@@ -412,6 +444,17 @@ def kill_realsense2_camera_node():
os.system(cmd)
pass
+'''
+function gets all the topics for a camera node
+'''
+
+def get_all_topics(camera_name=None):
+ cmd = 'ros2 topic list'
+ if camera_name!=None:
+ cmd += '| grep ' + camera_name
+ direct_output = os.popen(cmd).read()
+ return direct_output
+
'''
get the default parameters from the launch script so that the test doesn't have to
get updated for each change to the parameter or default values
@@ -459,13 +502,14 @@ def get_params_string_for_launch(params):
camera with a temporary yaml file to hold the parameters.
'''
-def get_rs_node_description(name, params):
+def get_rs_node_description(params):
import tempfile
import yaml
tmp_yaml = tempfile.NamedTemporaryFile(prefix='launch_rs_',delete=False)
params = convert_params(params)
ros_params = {"ros__parameters":params}
- camera_params = {name+"/"+name: ros_params}
+
+ camera_params = {params['camera_namespace'] +"/"+params['camera_name']: ros_params}
with open(tmp_yaml.name, 'w') as f:
yaml.dump(camera_params, f)
@@ -477,9 +521,9 @@ def get_rs_node_description(name, params):
package='realsense2_camera',
#namespace=LaunchConfiguration("camera_name"),
#name=LaunchConfiguration("camera_name"),
- namespace=params["camera_name"],
- name=name,
- #prefix=['xterm -e gdb --args'],
+ namespace=params["camera_namespace"],
+ name=params["camera_name"],
+ #prefix=['xterm -e gdb -ex=run --args'],
executable='realsense2_camera_node',
parameters=[tmp_yaml.name],
output='screen',
@@ -487,6 +531,9 @@ def get_rs_node_description(name, params):
emulate_tty=True,
)
+def get_node_heirarchy(params):
+ return "/"+params['camera_namespace'] +"/"+params['camera_name']
+
'''
This function returns a launch description with three rs nodes that
use the same rosbag file. Test developer can use this as a reference and
@@ -501,11 +548,11 @@ def launch_descr_with_yaml(request):
params[key] = value
if 'camera_name' not in changed_params:
params['camera_name'] = 'camera_with_yaml'
- first_node = get_rs_node_description(params['camera_name'], params)
+ first_node = get_rs_node_description(params)
return LaunchDescription([
first_node,
launch_pytest.actions.ReadyToTest(),
- ]),request.param
+ ]),params
'''
This function returns a launch description with a single rs node instance built based on the parameter
@@ -519,11 +566,11 @@ def launch_descr_with_parameters(request):
params[key] = value
if 'camera_name' not in changed_params:
params['camera_name'] = 'camera_with_params'
- first_node = get_rs_node_description(params['camera_name'], params)
+ first_node = get_rs_node_description(params)
return LaunchDescription([
first_node,
launch_pytest.actions.ReadyToTest(),
- ]),request.param
+ ]),params
'''
This function returns a launch description with a single rs node instance built based on the parameter
@@ -543,11 +590,21 @@ def delayed_launch_descr_with_parameters(request):
period = 2.0
if 'delay_ms' in changed_params.keys():
period = changed_params['delay_ms']/1000
- first_node = get_rs_node_description(params['camera_name'], params)
+ first_node = get_rs_node_description(params)
return LaunchDescription([launch.actions.TimerAction(
actions = [
first_node,], period=period)
- ]),request.param
+ ]),params
+class HzWrapper(ROSTopicHz):
+ def _callback_hz(self, m, topic=None):
+ if self.get_last_printed_tn(topic=topic) == 0:
+ self.set_last_printed_tn(self.get_msg_tn(topic=topic), topic=topic)
+ return self.callback_hz(m, topic)
+ def restart_topic(self, topic):
+ self._last_printed_tn[topic] = 0
+ self._times[topic].clear()
+ self._msg_tn[topic] = 0
+ self._msg_t0[topic] = -1
'''
This is the class that holds the test node that listens to a subscription created by a test.
@@ -560,6 +617,7 @@ def __init__(self, name='test_node'):
self.data = {}
self.tfBuffer = None
self.frame_counter = {}
+ self._ros_topic_hz = HzWrapper(super(), 10000, use_wtime=False)
def wait_for_node(self, node_name, timeout=8.0):
start = time.time()
@@ -572,14 +630,23 @@ def wait_for_node(self, node_name, timeout=8.0):
return True, ""
time.sleep(timeout/5)
return False, "Timed out waiting for "+ str(timeout)+ "seconds"
- def create_subscription(self, msg_type, topic , data_type, store_raw_data):
- super().create_subscription(msg_type, topic , self.rsCallback(topic,msg_type, store_raw_data), data_type)
+ def reset_data(self, topic):
self.data[topic] = deque()
self.frame_counter[topic] = 0
- if (os.getenv('ROS_DISTRO') != "dashing") and (self.tfBuffer == None):
+ self._ros_topic_hz.restart_topic(topic)
+
+ def create_subscription(self, msg_type, topic , data_type, store_raw_data, measure_hz):
+ self.reset_data(topic)
+ super().create_subscription(msg_type, topic , self.rsCallback(topic,msg_type, store_raw_data), data_type)
+ #hz measurements are not working
+ if measure_hz == True:
+ msg_class = get_msg_class(super(), topic, blocking=True, include_hidden_topics=True)
+ super().create_subscription(msg_class,topic,functools.partial(self._ros_topic_hz._callback_hz, topic=topic),data_type)
+ self._ros_topic_hz.set_last_printed_tn(0, topic=topic)
+
+ if self.tfBuffer == None:
self.tfBuffer = tf2_ros.Buffer()
self.tf_listener = tf2_ros.TransformListener(self.tfBuffer, super())
-
def get_num_chunks(self,topic):
return len(self.data[topic])
@@ -588,14 +655,14 @@ def pop_first_chunk(self, topic):
def image_msg_to_numpy(self, data):
fmtString = data.encoding
- if fmtString in ['mono8', '8UC1', 'bgr8', 'rgb8', 'bgra8', 'rgba8']:
+ if fmtString in ['mono8', '8UC1', 'bgr8', 'rgb8', 'bgra8', 'rgba8', 'yuv422_yuy2', 'yuv422']:
img = np.frombuffer(data.data, np.uint8)
elif fmtString in ['mono16', '16UC1', '16SC1']:
img = np.frombuffer(data.data, np.uint16)
elif fmtString == '32FC1':
img = np.frombuffer(data.data, np.float32)
else:
- print('image format not supported:' + fmtString)
+ print('py_rs_utils.image_msg_to_numpy:image format not supported:' + fmtString)
return None
depth = data.step / (data.width * img.dtype.itemsize)
@@ -609,9 +676,12 @@ def image_msg_to_numpy(self, data):
The processing of data is taken from the existing testcase in scripts rs2_test
'''
def rsCallback(self, topic, msg_type, store_raw_data):
- print("RSCallback")
+ debug_print("RSCallback")
def _rsCallback(data):
- print('Got the callback for ' + topic)
+ '''
+ enabling prints in callback reduces the fps in some cases
+ '''
+ debug_print('Got the callback for ' + topic)
#print(data.header)
self.flag = True
if store_raw_data == True:
@@ -703,23 +773,101 @@ def init_test(self,name='RsTestNode'):
self.flag = False
self.node = RsTestNode(name)
self.subscribed_topics = []
- def create_subscription(self, msg_type, topic, data_type, store_raw_data=False):
+
+ def wait_for_node(self, node_name, timeout=8.0):
+ self.node.wait_for_node(node_name, timeout)
+ def create_subscription(self, msg_type, topic, data_type, store_raw_data=False, measure_hz=False):
if not topic in self.subscribed_topics:
- self.node.create_subscription(msg_type, topic, data_type, store_raw_data)
+ self.node.create_subscription(msg_type, topic, data_type, store_raw_data, measure_hz)
self.subscribed_topics.append(topic)
+ else:
+ self.node.reset_data(topic)
- def spin_for_data(self,themes):
- start = time.time()
+
+
+ def create_param_ifs(self, camera_name):
+ self.set_param_if = self.node.create_client(SetParameters, camera_name + '/set_parameters')
+ self.get_param_if = self.node.create_client(GetParameters, camera_name + '/get_parameters')
+ while not self.get_param_if.wait_for_service(timeout_sec=1.0):
+ print('service not available, waiting again...')
+ while not self.set_param_if.wait_for_service(timeout_sec=1.0):
+ print('service not available, waiting again...')
+
+ def send_param(self, req):
+ future = self.set_param_if.call_async(req)
+ while rclpy.ok():
+ rclpy.spin_once(self.node)
+ if future.done():
+ try:
+ response = future.result()
+ if response.results[0].successful:
+ return True
+ except Exception as e:
+ print("exception raised:")
+ print(e)
+ pass
+ return False
+
+ def get_param(self, req):
+ future = self.get_param_if.call_async(req)
+ while rclpy.ok():
+ rclpy.spin_once(self.node)
+ if future.done():
+ try:
+ response = future.result()
+ return response.values[0]
+ except Exception as e:
+ print("exception raised:")
+ print(e)
+ pass
+ return None
+
+ def set_string_param(self, param_name, param_value):
+ req = SetParameters.Request()
+ new_param_value = ParameterValue(type=ParameterType.PARAMETER_STRING, string_value=param_value)
+ req.parameters = [Parameter(name=param_name, value=new_param_value)]
+ return self.send_param(req)
+
+ def set_bool_param(self, param_name, param_value):
+ req = SetParameters.Request()
+ new_param_value = ParameterValue(type=ParameterType.PARAMETER_BOOL, bool_value=param_value)
+ req.parameters = [Parameter(name=param_name, value=new_param_value)]
+ return self.send_param(req)
+
+ def set_integer_param(self, param_name, param_value):
+ req = SetParameters.Request()
+ new_param_value = ParameterValue(type=ParameterType.PARAMETER_INTEGER, integer_value=param_value)
+ req.parameters = [Parameter(name=param_name, value=new_param_value)]
+ return self.send_param(req)
+
+ def get_integer_param(self, param_name):
+ req = GetParameters.Request()
+ req.names = [param_name]
+ value = self.get_param(req)
+ if (value == None) or (value.type == ParameterType.PARAMETER_NOT_SET):
+ return None
+ else:
+ return value.integer_value
+
+ def spin_for_data(self,themes, timeout=5.0):
'''
timeout value varies depending upon the system, it needs to be more if
the access is over the network
'''
- timeout = 25.0
print('Waiting for topic... ' )
flag = False
+ data_not_expected1 = [i for i in themes if (i["expected_data_chunks"]) == 0]
+ if data_not_expected1 == []:
+ data_not_expected = False
+ else:
+ data_not_expected = True
+ start = time.time()
+ msg = ""
while (time.time() - start) < timeout:
rclpy.spin_once(self.node, timeout_sec=1)
- print('Spun once... ' )
+ debug_print('Spun once... ' )
+ if data_not_expected == True:
+ continue
all_found = True
for theme in themes:
if theme['expected_data_chunks'] > int(self.node.get_num_chunks(theme['topic'])):
@@ -729,27 +877,51 @@ def spin_for_data(self,themes):
flag =True
break
else:
- print("Timed out waiting for", timeout, "seconds" )
- return False, "run_test timedout"
- return flag,""
+ if data_not_expected == False:
+ msg = "Timedout: Data expected, but not received for: "
+ for theme in themes:
+ if theme['expected_data_chunks'] > int(self.node.get_num_chunks(theme['topic'])):
+ msg += " " + theme['topic']
+ msg += " Nodes available: " + str(self.node.get_node_names())
+ return False, msg
+ flag = True
+ return flag,msg
def spin_for_time(self,wait_time):
start = time.time()
- print('Waiting for topic... ' )
+ print('Waiting for time... ' )
flag = False
- while time.time() - start < wait_time:
- rclpy.spin_once(self.node)
- print('Spun once... ' )
+ while (time.time() - start) < wait_time:
+ print('Spun for time once... ' )
+ rclpy.spin_once(self.node, timeout_sec=wait_time)
- def run_test(self, themes):
+ def run_test(self, themes, initial_wait_time=0.0, timeout=0):
try:
for theme in themes:
store_raw_data = False
if 'store_raw_data' in theme:
store_raw_data = theme['store_raw_data']
- self.create_subscription(theme['msg_type'], theme['topic'] , qos.qos_profile_sensor_data,store_raw_data)
+ if 'expected_fps_in_hz' in theme:
+ measure_hz = True
+ else:
+ measure_hz = False
+ qos_type = qos.qos_profile_sensor_data
+ if 'qos' in theme:
+ qos_type = theme['qos']
+ self.create_subscription(theme['msg_type'], theme['topic'] , qos_type,store_raw_data, measure_hz)
print('subscription created for ' + theme['topic'])
- self.flag = self.spin_for_data(themes)
+ # change the default timeout based on whether data is expected or not
+ if timeout == 0:
+ if all(theme["expected_data_chunks"] > 0 for theme in themes):
+ timeout = 50.0 # high value due to resource constraints in CI
+ else:
+ timeout = 5.0
+
+ if initial_wait_time != 0.0:
+ self.spin_for_time(initial_wait_time)
+ self.flag = self.spin_for_data(themes, timeout)
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
@@ -760,8 +932,31 @@ def run_test(self, themes):
else:
print(e)
self.flag =False,e
-
return self.flag
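The default-timeout selection in `run_test` above can be sketched as a pure function. This is a hypothetical helper, not part of the harness, and it assumes an explicitly requested timeout should take precedence over the CI default:

```python
def pick_timeout(themes, requested=0.0, default=5.0, ci_timeout=50.0):
    """Choose the spin timeout: an explicit request wins; otherwise use the
    long CI timeout only when every theme actually expects data chunks."""
    if requested:
        return requested
    if all(theme["expected_data_chunks"] > 0 for theme in themes):
        return ci_timeout  # high value due to resource constraints in CI
    return default
```

Keeping the selection in one place like this makes the CI-specific 50-second value easy to audit and override per environment.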
+
+ def get_transform_data(self, data, coupled_frame_ids, is_static=False):
+ tfBuffer = tf2_ros.Buffer()
+ for transform in data.transforms:
+ if is_static:
+ tfBuffer.set_transform_static(transform, "default_authority")
+ else:
+ tfBuffer.set_transform(transform, "default_authority")
+ res = dict()
+ for couple in coupled_frame_ids:
+ from_id, to_id = couple
+ if (tfBuffer.can_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=3e6))):
+ res[couple] = tfBuffer.lookup_transform(from_id, to_id, rclpy.time.Time(), rclpy.time.Duration(nanoseconds=1e6)).transform
+ else:
+ res[couple] = None
+ return res
+ def check_transform_data(self, data, frame_ids, is_static=False):
+ coupled_frame_ids = [xx for xx in itertools.combinations(frame_ids, 2)]
+ res = self.get_transform_data(data, coupled_frame_ids, is_static)
+ for couple in coupled_frame_ids:
+ if res[couple] is None:
+ return False, str(couple) + ": didn't get any tf data"
+ return True,""
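`check_transform_data` expands the frame-id list into every unordered pair before looking each one up in the tf buffer; that expansion is plain `itertools` and can be seen in isolation (no tf2 required):

```python
import itertools

def frame_pairs(frame_ids):
    # Every unordered couple of frame ids that the transform check will
    # attempt to resolve; each pair must be transformable or the check fails.
    return list(itertools.combinations(frame_ids, 2))
```

For three frame ids this yields three couples, so the number of lookups grows quadratically with the number of frames under test.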
+
'''
Please override and use your own process_data if the default check is not suitable.
Please also set the store_raw_data parameter in the themes to True, if you want the
@@ -769,11 +964,25 @@ def run_test(self, themes):
'''
def process_data(self, themes):
for theme in themes:
+ if theme['expected_data_chunks'] == 0:
+ assert self.node.get_num_chunks(theme['topic']) == 0, "Received data when not expected for topic: " + theme['topic']
+ continue # no more checks needed when no data is expected
+
+ if 'expected_fps_in_hz' in theme:
+ hz = self.node._ros_topic_hz.get_hz(theme['topic'])
+ if hz is None:
+ print("Couldn't measure FPS; were enough data frames expected for the measurement?")
+ else:
+ speed = 1e9 * hz[0]
+ msg = "FPS in Hz of topic " + theme['topic'] + " is " + str(speed) + ". Expected is " + str(theme['expected_fps_in_hz'])
+ print(msg)
+ if (abs(theme['expected_fps_in_hz']-speed) > theme['expected_fps_in_hz']/10):
+ assert False,msg
data = self.node.pop_first_chunk(theme['topic'])
if 'width' in theme:
- assert theme['width'] == data['shape'][0][1] # (get from numpy image the width)
+ assert theme['width'] == data['shape'][0][1], "Width not matched. Expected:" + str(theme['width']) + " & got: " + str(data['shape'][0][1]) # (get from numpy image the width)
if 'height' in theme:
- assert theme['height'] == data['shape'][0][0] # (get from numpy image the height)
+ assert theme['height'] == data['shape'][0][0], "Height not matched. Expected:" + str(theme['height']) + " & got: " + str(data['shape'][0][0]) # (get from numpy image the height)
if 'data' not in theme:
print('No data to compare for ' + theme['topic'])
#print(data)
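The 10% FPS tolerance applied in `process_data` above can be isolated as a small predicate. This is a hypothetical helper using the same arithmetic as the assert, shown for clarity:

```python
def fps_within_tolerance(expected_hz, measured_hz, rel_tol=0.1):
    # Pass when the measured rate deviates by at most 10% (rel_tol)
    # of the expected rate, mirroring the check in process_data.
    return abs(expected_hz - measured_hz) <= expected_hz * rel_tol
```

A relative tolerance scales naturally across stream profiles: a 30 Hz color stream may drift by 3 Hz, while a 15 Hz depth stream is allowed only 1.5 Hz.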
diff --git a/realsense2_camera/tools/frame_latency/frame_latency.cpp b/realsense2_camera/tools/frame_latency/frame_latency.cpp
index 022e733e49..7f4ebb0da7 100644
--- a/realsense2_camera/tools/frame_latency/frame_latency.cpp
+++ b/realsense2_camera/tools/frame_latency/frame_latency.cpp
@@ -12,11 +12,25 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+// DESCRIPTION: #
+// ------------ #
+// This tool creates a node which can be used to calculate the specified topic's latency.
+// Input parameters:
+// - topic_name :
+// - topic for which the latency needs to be calculated
+// - topic_type :
+// - Message type of the topic.
+// - Valid inputs: {'image','points','imu','metadata','camera_info','rgbd','imu_info','tf'}
+// Note:
+// - This tool doesn't support calculating latency for extrinsics topics,
+// because those topics don't carry a timestamp, and this tool uses
+// that timestamp as an input to calculate the latency.
+//
+
#include
#include
#include
#include
-// Node which receives sensor_msgs/Image messages and prints the image latency.
using namespace rs2_ros::tools::frame_latency;
@@ -28,6 +42,39 @@ FrameLatencyNode::FrameLatencyNode( const std::string & node_name,
{
}
+std::string topic_name = "/camera/color/image_raw";
+std::string topic_type = "image";
+
+template <class MsgType>
+void FrameLatencyNode::createListener(std::string topicName, const rmw_qos_profile_t qos_profile)
+{
+ _sub = this->create_subscription< MsgType >(
+ topicName,
+ rclcpp::QoS( rclcpp::QoSInitialization::from_rmw( qos_profile ),
+ qos_profile ),
+ [&, this]( const std::shared_ptr< MsgType> msg ) {
+ rclcpp::Time curr_time = this->get_clock()->now();
+ auto latency = ( curr_time - msg->header.stamp ).seconds();
+ ROS_INFO_STREAM( "Got msg with "<< msg->header.frame_id <<" frame id at address 0x"
+ << std::hex << reinterpret_cast< std::uintptr_t >( msg.get() )
+ << std::dec << " with latency of " << latency << " [sec]" );
+ } );
+}
+
+void FrameLatencyNode::createTFListener(std::string topicName, const rmw_qos_profile_t qos_profile)
+{
+ _sub = this->create_subscription< tf2_msgs::msg::TFMessage >(
+ topicName,
+ rclcpp::QoS( rclcpp::QoSInitialization::from_rmw( qos_profile ),
+ qos_profile ),
+ [&, this]( const std::shared_ptr< tf2_msgs::msg::TFMessage > msg ) {
+ rclcpp::Time curr_time = this->get_clock()->now();
+ auto latency = ( curr_time - msg->transforms.back().header.stamp ).seconds();
+ ROS_INFO_STREAM( "Got msg with "<< msg->transforms.back().header.frame_id <<" frame id at address 0x"
+ << std::hex << reinterpret_cast< std::uintptr_t >( msg.get() )
+ << std::dec << " with latency of " << latency << " [sec]" );
+ } );
+}
FrameLatencyNode::FrameLatencyNode( const rclcpp::NodeOptions & node_options )
: Node( "frame_latency", "/", node_options )
@@ -36,19 +83,30 @@ FrameLatencyNode::FrameLatencyNode( const rclcpp::NodeOptions & node_options )
ROS_INFO_STREAM( "frame_latency node is UP!" );
ROS_INFO_STREAM( "Intra-Process is "
<< ( this->get_node_options().use_intra_process_comms() ? "ON" : "OFF" ) );
- // Create a subscription on the input topic.
- _sub = this->create_subscription< sensor_msgs::msg::Image >(
- "/color/image_raw", // TODO Currently color only, we can declare and accept the required
- // streams as ros parameters
- rclcpp::QoS( rclcpp::QoSInitialization::from_rmw( rmw_qos_profile_default ),
- rmw_qos_profile_default ),
- [&, this]( const sensor_msgs::msg::Image::SharedPtr msg ) {
- rclcpp::Time curr_time = this->get_clock()->now();
- auto latency = ( curr_time - msg->header.stamp ).seconds();
- ROS_INFO_STREAM( "Got msg with address 0x"
- << std::hex << reinterpret_cast< std::uintptr_t >( msg.get() )
- << std::dec << " with latency of " << latency << " [sec]" );
- } );
+
+ topic_name = this->declare_parameter("topic_name", topic_name);
+ topic_type = this->declare_parameter("topic_type", topic_type);
+
+ ROS_INFO_STREAM( "Subscribing to Topic: " << topic_name);
+
+ if (topic_type == "image")
+ createListener< sensor_msgs::msg::Image >(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "points")
+ createListener< sensor_msgs::msg::PointCloud2 >(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "imu")
+ createListener< sensor_msgs::msg::Imu >(topic_name, rmw_qos_profile_sensor_data);
+ else if (topic_type == "metadata")
+ createListener< realsense2_camera_msgs::msg::Metadata >(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "camera_info")
+ createListener< sensor_msgs::msg::CameraInfo >(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "rgbd")
+ createListener< realsense2_camera_msgs::msg::RGBD >(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "imu_info")
+ createListener< realsense2_camera_msgs::msg::IMUInfo >(topic_name, rmw_qos_profile_default);
+ else if (topic_type == "tf")
+ createTFListener(topic_name, rmw_qos_profile_default);
+ else
+ ROS_ERROR_STREAM("Specified message type '" << topic_type << "' is not supported");
}
#include "rclcpp_components/register_node_macro.hpp"
diff --git a/realsense2_camera/tools/frame_latency/frame_latency.h b/realsense2_camera/tools/frame_latency/frame_latency.h
index 653c648578..aad8c81c0e 100644
--- a/realsense2_camera/tools/frame_latency/frame_latency.h
+++ b/realsense2_camera/tools/frame_latency/frame_latency.h
@@ -16,6 +16,25 @@
#include
#include "sensor_msgs/msg/image.hpp"
+#include "sensor_msgs/msg/imu.hpp"
+#include "sensor_msgs/msg/point_cloud2.hpp"
+
+#include
+#include
+#include "realsense2_camera_msgs/msg/imu_info.hpp"
+#include "realsense2_camera_msgs/msg/extrinsics.hpp"
+#include "realsense2_camera_msgs/msg/metadata.hpp"
+#include "realsense2_camera_msgs/msg/rgbd.hpp"
+#include "realsense2_camera_msgs/srv/device_info.hpp"
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
namespace rs2_ros {
namespace tools {
@@ -31,8 +50,14 @@ class FrameLatencyNode : public rclcpp::Node
const rclcpp::NodeOptions & node_options
= rclcpp::NodeOptions().use_intra_process_comms( true ) );
+ template <class MsgType>
+ void createListener(std::string topicName, const rmw_qos_profile_t qos_profile);
+
+ void createTFListener(std::string topicName, const rmw_qos_profile_t qos_profile);
+
private:
- rclcpp::Subscription< sensor_msgs::msg::Image >::SharedPtr _sub;
+ std::shared_ptr< rclcpp::SubscriptionBase > _sub;
+
rclcpp::Logger _logger;
};
} // namespace frame_latency
diff --git a/realsense2_description/meshes/d405.stl b/realsense2_description/meshes/d405.stl
new file mode 100644
index 0000000000..413d260f6d
Binary files /dev/null and b/realsense2_description/meshes/d405.stl differ
diff --git a/realsense2_description/urdf/_d405.urdf.xacro b/realsense2_description/urdf/_d405.urdf.xacro
new file mode 100644
index 0000000000..5ca17f2116
--- /dev/null
+++ b/realsense2_description/urdf/_d405.urdf.xacro
@@ -0,0 +1,165 @@
+ [_d405.urdf.xacro markup (165 added lines) not preserved in this capture]
diff --git a/realsense2_description/urdf/_l515.urdf.xacro b/realsense2_description/urdf/_l515.urdf.xacro
deleted file mode 100644
index c761ddf734..0000000000
--- a/realsense2_description/urdf/_l515.urdf.xacro
+++ /dev/null
@@ -1,213 +0,0 @@
- [_l515.urdf.xacro markup (213 removed lines) not preserved in this capture]
diff --git a/realsense2_description/urdf/_r430.urdf.xacro b/realsense2_description/urdf/_r430.urdf.xacro
index d4ea736bc8..883523359d 100644
--- a/realsense2_description/urdf/_r430.urdf.xacro
+++ b/realsense2_description/urdf/_r430.urdf.xacro
@@ -32,7 +32,6 @@ aluminum peripherial evaluation case.
publishing TF values with actual calibrated camera extrinsic values -->
- [removed _r430.urdf.xacro markup not preserved in this capture]
diff --git a/realsense2_description/urdf/test_l515_camera.urdf.xacro b/realsense2_description/urdf/test_d405_camera.urdf.xacro
similarity index 50%
rename from realsense2_description/urdf/test_l515_camera.urdf.xacro
rename to realsense2_description/urdf/test_d405_camera.urdf.xacro
index 6b1c3354a0..5a2b313f5d 100644
--- a/realsense2_description/urdf/test_l515_camera.urdf.xacro
+++ b/realsense2_description/urdf/test_d405_camera.urdf.xacro
@@ -1,12 +1,10 @@
[test_d405_camera.urdf.xacro rename diff body not preserved in this capture]