
How to get multiple pointcloud mapped with infra1 and infra2? #3271

Open · DY-JANG0812 opened this issue Dec 6, 2024 · 6 comments
DY-JANG0812 commented Dec 6, 2024


Required Info
Camera Model: D435f
Firmware Version: 05.13.00.55
Operating System & Version: Ubuntu 20.04
Kernel Version (Linux Only): (e.g. 5.19)
Platform: PC
Librealsense SDK Version: 2.50
Language: C++
Segment: Robot
ROS Distro: Noetic
RealSense ROS Wrapper Version: 4.51.1, 4.54.1, etc.

Issue Description

Hello, I am writing because I have a question.

I am using RealSense and the ROS wrapper to get two point clouds from a single frame, one mapped to each of the two infrared images (infra1 and infra2).

However, it doesn't work.

I was able to extract two depth streams successfully using the pointcloud filter, but both point clouds are mapped to the infrared image from sensor 2.

An error message saying "(Infrared, 0) sensor isn't supported by current device! -- Skipping..." appears, but the topic /camera0/infra1/image_rect_raw is still published and works perfectly fine.

I have searched through several issues in this repository, but even though I am using USB 3.0, I couldn't find anyone experiencing a similar issue.

Is it not possible to use both infrared streams (due to software or hardware limitations)?

If you know of a good way to achieve this, I would greatly appreciate your help (directly calibrating the two image topics with depth would require too much time and effort...).

MartyG-RealSense (Collaborator) commented Dec 6, 2024

Hi @DY-JANG0812 The Infrared 2 topic will be unavailable if the ROS launch detects your camera as being on a USB 2.1 connection, even if it is plugged into a USB 3.0 port. This is because Infrared 2 is only supported on a USB 3 connection.

You may be able to achieve satisfactory results (though generating a single pointcloud instead of one for each infrared sensor) simply by enabling the pointcloud filter, adding pointcloud.enable:=true to your launch instruction. For example:

ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true
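If the ROS1 Noetic setup from the issue header is what you are running, the equivalent for the ROS1 wrapper (a sketch, assuming the stock rs_camera.launch file) would be:

roslaunch realsense2_camera rs_camera.launch filters:=pointcloud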

There is no need for you to do manual work to calibrate the two infrared topics to depth. A RealSense depth frame is generated inside the camera hardware from the raw left and right infrared frames (not the Infra1 and Infra2 topics) before the frames are even sent along the USB cable to the computer.

Also, the left infrared camera has the benefit of always being pixel-perfect aligned, calibrated, and overlapped with the depth map, and also perfectly time-synchronized.
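This alignment can be verified in code. Here is a minimal librealsense2 sketch (an illustration, not from this thread; it assumes a connected D400-series camera) that queries the extrinsics from the depth stream to Infrared 1, which should report an identity rotation and a zero translation:

#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH);
    cfg.enable_stream(RS2_STREAM_INFRARED, 1); // index 1 = left imager
    rs2::pipeline_profile profile = pipe.start(cfg);

    auto depth = profile.get_stream(RS2_STREAM_DEPTH);
    auto ir1 = profile.get_stream(RS2_STREAM_INFRARED, 1);

    // Depth is expressed in the left-IR frame, so this transform
    // should be the identity (zero translation, identity rotation).
    rs2_extrinsics e = depth.get_extrinsics_to(ir1);
    std::cout << "translation: " << e.translation[0] << " "
              << e.translation[1] << " " << e.translation[2] << std::endl;
    return 0;
}

Querying the extrinsics to RS2_STREAM_INFRARED index 2 instead should show a translation of roughly the stereo baseline (about 50 mm on the D435 series) along the x axis.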

DY-JANG0812 (Author) commented:

Hi @MartyG-RealSense

Thank you for the quick reply. It’s great news for me that the image obtained from the left infrared camera is aligned to the depth coordinate system.

Now I have one more question: if I use the right infrared image without aligning it to the depth coordinate system, does that mean the point cloud mapped to the right IR image might be in a coordinate system different from that of the actual right IR image?

DY-JANG0812 (Author) commented:

@MartyG-RealSense When referring to the left side, is it from the perspective of facing the camera from the front, or from looking at the back of the camera?

MartyG-RealSense (Collaborator) commented:

Because the 0,0,0 origin point of depth is the center-line of the left infrared sensor, when depth is aligned to infrared the origin point will still be the left infrared sensor and so the same coordinate system will be used.

When depth is aligned to color the origin of depth changes to the center-line of the RGB sensor.

The camera uses the perspective of looking forwards from behind the back of the camera. This is why when looking at the camera from the front, the left infrared sensor is on the right-side of the camera.
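As an illustration of the origin change (a hypothetical snippet, not from this thread), aligning depth to color with the SDK's rs2::align processing block re-projects the depth image into the RGB sensor's frame, so the 0,0,0 origin moves from the left IR imager to the RGB imager:

#include <librealsense2/rs.hpp>

int main() {
    rs2::pipeline pipe;
    pipe.start();

    // rs2::align re-projects frames so that depth matches the target stream.
    rs2::align align_to_color(RS2_STREAM_COLOR);

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::frameset aligned = align_to_color.process(frames);

    // This depth frame is now expressed in the RGB sensor's coordinate frame.
    rs2::depth_frame depth = aligned.get_depth_frame();
    return 0;
}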

DY-JANG0812 (Author) commented:

I'm currently using a RealSense camera for obstacle detection, but I'm having trouble extracting correct depth points (noise) from acrylic surfaces. I wanted to use the infrared textures to remove incorrect depth points, but the issue seems to be that when the acrylic is warped, or when infrared light is either strongly detected or not reflected back to the camera at all, the depth calculation is incorrect because the light is not properly detected.

I'm considering using filters such as hole filling, or toggling the emitter on/off. Is there an internal feature that ensures only the filled depth points are output during hole filling, instead of noise points?
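For reference, the two techniques mentioned above can be combined in the SDK; the sketch below is an assumption-laden illustration, not a confirmed fix for the acrylic noise. It disables the IR emitter and applies the hole-filling post-processing filter:

#include <librealsense2/rs.hpp>

int main() {
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Turn the IR projector off so the raw IR texture can be inspected
    // without the projected dot pattern.
    for (rs2::sensor sensor : profile.get_device().query_sensors()) {
        if (sensor.supports(RS2_OPTION_EMITTER_ENABLED))
            sensor.set_option(RS2_OPTION_EMITTER_ENABLED, 0.f);
    }

    // Hole-filling post-processing filter. RS2_OPTION_HOLES_FILL selects
    // the fill strategy (0 = fill from left, 1 = farthest from around,
    // 2 = nearest from around).
    rs2::hole_filling_filter hole_fill;
    hole_fill.set_option(RS2_OPTION_HOLES_FILL, 1);

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::frame filled = hole_fill.process(frames.get_depth_frame());
    return 0;
}

Note that the filter only returns a depth frame; it does not flag which pixels were synthesized by the fill.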

MartyG-RealSense (Collaborator) commented Dec 9, 2024

As you are using ROS, the best way to strengthen the confidence checking of depth values and exclude low-confidence values would likely be to load in a json camera configuration file (such as 'high_accuracy') using the ROS parameter json_file_path, as discussed at #2445

I would recommend the 'medium_density' preset file over 'high_accuracy', as medium_density provides a good balance between accuracy and the amount of detail on the depth image, whilst high_accuracy tends to over-strip the depth detail when eliminating low-confidence values, leaving the image looking sparse.
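As an illustration (a hypothetical invocation; the json path is a placeholder you would replace with a real preset file), loading a preset on the ROS1 Noetic wrapper would look like:

roslaunch realsense2_camera rs_camera.launch json_file_path:=/path/to/MedDensityPreset.json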
