
All the official alignment code (rs-align / align-depth2color) doesn't work well #5282

Closed
Tom127 opened this issue Nov 20, 2019 · 8 comments

@Tom127

Tom127 commented Nov 20, 2019

align-depth2color.py:

[screenshot: align-depth2color.py aligned output]

rs-align.cpp:

[screenshot: rs-align aligned output]

realsense-viewer:

[screenshot: realsense-viewer aligned output]

I tried all the available alignment code (depth to color), but none of it worked well. Thanks for your help! Attaching my RealSense device information:

rs-enumerate-devices-output.txt

@ev-mp
Collaborator

ev-mp commented Nov 20, 2019

@Tom127 hi,
There are some recognizable minor artifacts, e.g. the blue line under the chin in the second picture, but they are still within the calibration tolerances. Other than that, the aligned images seem coherent.
Can you be more specific?

@Tom127
Author

Tom127 commented Nov 21, 2019

@ev-mp Thanks for your reply!
Taking the first picture as an example, there are two silhouettes (contours) in the depth image. All three programs (align-depth2color.py / rs-align.cpp / realsense-viewer) produce pictures with the same issue. Does this mean the alignment is bad? What is the problem?

[screenshot: align-depth2color.py output with doubled silhouette]

The second image is not quadrilateral; does that mean the image is distorted?

[screenshot: rs-align output with non-quadrilateral border]

@ev-mp
Collaborator

ev-mp commented Nov 21, 2019

@Tom127 hello,

  1. The black strap that separates the contours of the chin and shoulder from their projections on the wall is a manifestation of occlusion (D435 Color and Depth misalignment #2445).

  2. The red line marked as distortion: the aligned images for D400 and SR300 devices will always have their left and right sides "corroded" due to the physical horizontal displacement between the Depth and RGB sensors (Color glitch in point cloud and realsense-viewer #2355).
    The alignment process requires both RGB and Depth data to be present, and the pixels found in the non-overlapping regions are eliminated by design (Align to Depth produces lossy color image #5030).
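Both effects can be illustrated with a toy, pure-Python sketch of 1-D depth-to-color alignment. All numbers here are made up for illustration; the real alignment uses the device's full intrinsics and extrinsics:

```python
# Toy 1-D sketch of depth-to-color alignment (NOT librealsense code).
# Assumed setup: the color sensor sits to the right of the depth sensor,
# so each depth pixel shifts horizontally by the disparity fx * b / z.
FX = 60.0          # made-up focal length, in pixels
BASELINE_M = 0.05  # made-up horizontal depth-to-color displacement, metres
WIDTH = 10

# A near object (0.5 m) on the left, a far wall (2.0 m) behind it.
depth = [0.5] * 3 + [2.0] * 7

aligned = [0.0] * WIDTH  # 0 == "no depth data", rendered black
for u, z in enumerate(depth):
    # Re-project the depth pixel into the color sensor's pixel grid;
    # nearer pixels shift further (larger disparity).
    u_color = int(u + FX * BASELINE_M / z + 0.5)
    if 0 <= u_color < WIDTH:
        # Z-buffer test: when two depth pixels land on the same color
        # pixel, the nearer one occludes the farther one.
        if aligned[u_color] == 0.0 or z < aligned[u_color]:
            aligned[u_color] = z

print(aligned)   # [0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.5, 0.5, 0.5, 2.0]
```

Color pixels on which no depth pixel lands stay at zero and render black; that is the mechanism behind both the occlusion strap around a near silhouette and the "corroded" edge of the non-overlapping region.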

@Tom127
Author

Tom127 commented Nov 25, 2019

@ev-mp Thanks for your reply!
Do you mean that the black strap is not caused by misalignment and will not affect the results?
So can I use depth.get_distance(x, y) directly to get the depth corresponding to the pixel coordinates of the color image, without recalibration or other settings?

Best regards

@ev-mp
Collaborator

ev-mp commented Nov 26, 2019

To be more specific: the black strap in the second image of the initial post is not misalignment. On the contrary, it is a filtered region for which there was no valid depth data due to occlusion.
The meaning of "will not affect the results" isn't clear, as it depends on how you assess the results, but it is at least guaranteed that there will be no false positives.

So I can use depth.get_distance(x, y) directly to get the depth information corresponding to the pixel coordinates of the color image without recalibration or other settings?

There are two types of alignment: depth to 2D (RGB/IR) and 2D to depth. So the phrase is almost* accurate when applied to the latter case. In the first scenario, when aligning depth to color, the resulting depth frame will most probably have less than 100% valid pixels, and consequently a subset of valid RGB pixels will have no corresponding depth data in the aligned image.
*Occlusion generates false correspondences between RGB and Depth; follow the link above for details.
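Since an aligned depth frame can contain invalid (zero) pixels, any per-pixel lookup should guard against them. A small hypothetical helper (the names here are mine, not librealsense API) sketches the idea:

```python
# Hypothetical helper (not part of librealsense): after depth-to-color
# alignment, look up the depth for a color pixel, treating a raw value
# of 0 as "no data" (occluded or non-overlapping region).
def safe_distance(aligned_depth, depth_scale, x, y):
    """Return the distance in metres at color pixel (x, y), or None if
    the aligned depth frame has no valid data there."""
    raw = aligned_depth[y][x]          # raw 16-bit depth units
    return raw * depth_scale if raw else None

# Toy aligned frame: the right column has no depth data.
frame = [[1000, 0],
         [ 500, 0]]
scale = 0.001                          # D400 default: 1 unit == 1 mm

print(safe_distance(frame, scale, 0, 0))   # 1.0 (metres)
print(safe_distance(frame, scale, 1, 0))   # None
```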

@ev-mp
Collaborator

ev-mp commented Dec 9, 2019

@Tom127, do you need further assistance?
Please update.

@Tom127
Author

Tom127 commented Dec 14, 2019

@ev-mp, sorry for the late response! I've been busy training models recently.
In my case, I have already detected an object in a color image and want to get the corresponding 3D coordinates from the pixel coordinates of the object's corners. How should I do this?
If I test the existing calibration and the corresponding 3D coordinates cannot be obtained accurately from the pixel coordinates, how can I recalibrate the camera and modify the intrinsic and extrinsic parameters on Ubuntu 16.04?
Thanks for your help!

@RealSenseCustomerSupport
Collaborator


Hi @Tom127,

You can get the corresponding point in 3D using the rs2_deproject_pixel_to_point function.

For camera calibration please use Intel.Realsense.DynamicCalibrator from CalibrationTool package:
https://downloadcenter.intel.com/download/28517/Intel-RealSense-D400-Series-Calibration-Tools-and-API

Please note that Intel.Realsense.DynamicCalibrator optimizes extrinsic parameters only; intrinsic parameters are not dynamically calibrated.
The same package also contains Intel.Realsense.CustomRW, which allows you to reset the device calibration to the default "gold" settings, write calibration parameters to the device, and dump the calibration data from the device.
Please let us know if you need any additional clarification on this topic.

Thank you!
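For reference, the math performed by rs2_deproject_pixel_to_point can be sketched in pure Python for the undistorted (pinhole) case. The intrinsics values below are made up; real code should call the librealsense function with the stream's actual intrinsics, which also handles the distortion models:

```python
# Pure-Python sketch of what rs2_deproject_pixel_to_point computes for an
# undistorted (pinhole) intrinsics model; with pyrealsense2 you would call
# rs.rs2_deproject_pixel_to_point(intrinsics, [x, y], depth) instead.
def deproject_pixel_to_point(fx, fy, ppx, ppy, pixel, depth_m):
    """Map a 2-D pixel plus its depth (metres) to a 3-D camera-space point."""
    x = (pixel[0] - ppx) / fx   # normalized image-plane coordinates
    y = (pixel[1] - ppy) / fy
    return (depth_m * x, depth_m * y, depth_m)

# Example with made-up intrinsics: a pixel at the principal point, 1 m
# away, deprojects straight down the optical axis.
point = deproject_pixel_to_point(615.0, 615.0, 320.0, 240.0, (320, 240), 1.0)
print(point)   # (0.0, 0.0, 1.0)
```

Note that on an aligned depth frame this only makes sense for pixels that actually have valid depth; pixels filtered out by occlusion or non-overlap carry a depth of zero.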
