
How is the depth mentioned in D435 calculated? #13564

Open
wenmingxiaohuo opened this issue Dec 2, 2024 · 4 comments

@wenmingxiaohuo


@MartyG-RealSense
Collaborator

Hi @wenmingxiaohuo Pages 18 and 19 of the current edition of the data sheet for RealSense 400 Series cameras provide an explanation with illustrative images.

https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet

To quote that section of the data sheet:


"The Intel RealSense D400 series depth camera uses stereo vision to calculate depth. The stereo vision implementation consists of a left imager, right imager, and an optional infrared projector. The infrared projector projects a non-visible static IR pattern to improve depth accuracy in scenes with low texture".

"The left and right imagers capture the scene and send imager data to the depth imaging (vision) processor, which calculates depth values for each pixel in the image by correlating points on the left image to the right image and via the shift between a point on the Left image and the Right image. The depth pixel values are processed to generate a depth frame. Subsequent depth frames create a depth video stream."
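The shift the data sheet describes is the disparity, and the standard stereo relationship between disparity and depth can be sketched as below. This is an illustrative formula, not Intel's on-board implementation, and the focal length and baseline numbers are made-up round values rather than actual D435 calibration data:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in metres for one pixel.

    depth = focal_length (pixels) * baseline (metres) / disparity (pixels)
    """
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


# Example: ~640 px focal length, ~50 mm stereo baseline, and a point that
# shifts 32 pixels between the left and right images:
depth = depth_from_disparity(640.0, 0.050, 32.0)
print(round(depth, 3))  # 1.0 (metre)
```

Note how depth is inversely proportional to disparity: nearby objects shift more between the two imagers, distant ones shift less, which is why depth error grows with distance.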


Intel's Beginner's Guide to Depth article may also be a helpful reference.

https://www.intelrealsense.com/beginners-guide-to-depth/

@wenmingxiaohuo
Author

Thank you very much for your detailed answer. May I also ask what the sampling frequency (frame rate) of the D435 is? And if I want to synchronize data acquisition between the D435 and the LiDAR L515, how do I configure the synchronization?

@MartyG-RealSense
Collaborator

I will reply in issue #13567, which you created. Thanks very much!

@MartyG-RealSense
Collaborator

Do you require further assistance with this case, please? Thanks!
