get actual distance of 2 points in a picture in android #9466
Hi @dota2mhxy My understanding from #4796 is that in the Android wrapper the point cloud can be generated by providing the color texture from the RGB stream to a point cloud filter (mPointcloud), and that points.get_vertices() retrieves the vertices of the point cloud generated by the filter. Further information about the workings of points.get_vertices() can be found in the link below.
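To make the layout of the vertices concrete, here is a minimal sketch. It assumes (as with XYZ32F point clouds generally) that the array returned by points.getVertices() stores 3 floats per point (x, y, z in meters) in row-major depth-pixel order; the helper below is illustrative, not part of the SDK.

```java
public class VertexIndexing {
    // Return the (x, y, z) vertex for depth pixel (px, py),
    // assuming 3 floats per point in row-major order.
    static float[] vertexAt(float[] vertices, int width, int px, int py) {
        int base = (py * width + px) * 3;
        return new float[] { vertices[base], vertices[base + 1], vertices[base + 2] };
    }

    public static void main(String[] args) {
        // Toy 2x2 "point cloud": 4 points, 3 floats each.
        float[] verts = {
            0f, 0f, 1f,   1f, 0f, 1f,
            0f, 1f, 2f,   1f, 1f, 2f
        };
        float[] p = vertexAt(verts, 2, 1, 1);   // depth pixel (1, 1)
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // 1.0 1.0 2.0
    }
}
```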
Hi @dota2mhxy Do you require further assistance with this case, please? Thanks!
Most of your questions are outside of my knowledge of Java, unfortunately. The example librealsense Java code highlighted in the link below may provide you with some useful insights, though.
In the Java point cloud case #4796 that was linked to earlier, Android wrapper developer matkatz suggests at the bottom of his script in #4796 (comment) to "Check createTexture (line 90) in GLPointsFrame for example of how to align the points with the texture". The GLPointsFrame code reference that matkatz mentioned can be found at the link below. Have you included this alignment between points and texture in your project, please?
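Conceptually, the points-to-texture alignment maps each point's normalized texture coordinate (u, v) onto a pixel of the color image. The sketch below illustrates that mapping only; it assumes texture coordinates lie in [0, 1], and the helper name is hypothetical, not an SDK function.

```java
public class TextureLookup {
    // Map a normalized texture coordinate (u, v) to a clamped
    // color-image pixel (x, y).
    static int[] texCoordToPixel(float u, float v, int colorWidth, int colorHeight) {
        int x = Math.min(Math.max((int) (u * colorWidth + 0.5f), 0), colorWidth - 1);
        int y = Math.min(Math.max((int) (v * colorHeight + 0.5f), 0), colorHeight - 1);
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        int[] px = texCoordToPixel(0.5f, 0.5f, 640, 480);
        System.out.println(px[0] + "," + px[1]); // 320,240
    }
}
```

Coordinates outside [0, 1] (points with no valid color sample) are clamped here; another common choice is to skip such points entirely.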
Hello, there is a problem with this line:

byte[] depthFrameData2 = new byte[depthFrame.getDataSize()];
The two numbers - for example, 640,480 - represent width and height. You can only use resolutions that are supported by the RealSense SDK for a particular camera model, though. Tables containing this information can be found on pages 63-65 of the current edition of the data sheet document for the 400 Series cameras.

https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet

For depth, the maximum supported resolution is 1280x720. This would be represented in code as:

cfg.enableStream(StreamType.DEPTH, 1280, 720);

In regard to your second question, there are scripting references about exporting a depth frame as a bitmap image from Android at #8551 (comment). I do not have a reference for aligning a pair of saved bitmap color and depth images on Android in Java, though. It is awkward to achieve even in C++ (potentially you could access librealsense's C++ functions from your Android app using NDK code). Aligning depth and color in real time and then saving a combined aligned image - like in #7081 - is also complicated.

Information on pixel coordinates is provided in the Projection documentation of the SDK.

https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#section-pixel-coordinates
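The Projection documentation linked above describes how a depth pixel plus its depth value maps to a 3D point. As a rough illustration of that math, here is a sketch of deprojection for the no-distortion case (the intrinsic values in main are made up for the example; real values come from the stream profile's intrinsics):

```java
public class Deproject {
    // Deproject depth pixel (px, py) with depth z (meters) to a 3D point,
    // assuming no lens distortion. fx/fy are focal lengths in pixels,
    // (ppx, ppy) is the principal point.
    static float[] deproject(float px, float py, float depth,
                             float fx, float fy, float ppx, float ppy) {
        float x = depth * (px - ppx) / fx;
        float y = depth * (py - ppy) / fy;
        return new float[] { x, y, depth };
    }

    public static void main(String[] args) {
        // A pixel at the principal point deprojects straight ahead:
        // x = 0, y = 0, z = depth.
        float[] p = deproject(320f, 240f, 1.5f, 600f, 600f, 320f, 240f);
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // 0.0 0.0 1.5
    }
}
```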
The USB port may be having difficulty coping with the increased amount of data from using 1280x720 resolution instead of 640x480. Do you still experience the error if you use 848x480 resolution? |
Issue Description
Hello, I want to get the actual distance between 2 points in a picture using Java on Android. I saw it done in the C++ rs-measure example, and I also searched a bit by myself. See issues/4601 and issues/4796.
You gave a piece of workable code:
try(FrameReleaser fr = new FrameReleaser()){
    FrameSet frames = mPipeline.waitForFrames().releaseWith(fr);
    FrameSet processedSet = frames.applyFilter(mDecimationFilter).releaseWith(fr).
            applyFilter(mTemporalFilter).releaseWith(fr).
            applyFilter(mSpatialFilter).releaseWith(fr).
            applyFilter(mPointcloud).releaseWith(fr);
    Frame processed = processedSet.first(StreamType.DEPTH, StreamFormat.XYZ32F).releaseWith(fr);
    Frame texture = processedSet.first(StreamType.COLOR, StreamFormat.RGB8).releaseWith(fr);
    Points points = processed.as(Extension.POINTS);
    float[] ver = points.getVertices();
    float[] texMap = points.getTextureCoordinates();
}
But I don’t understand the detailed process. Where is the point cloud data? Is it the ver and texMap arrays? What is the texture used for? Thank you for your help.
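On the original measurement question: once two points have been read out of the ver array (3 floats per point, in meters), the real-world distance between them is just the Euclidean distance between the two vertices. A minimal sketch, assuming the 3-floats-per-point layout of the XYZ32F vertices array:

```java
public class PointDistance {
    // Euclidean distance in meters between the point at index idxA and the
    // point at index idxB in a vertices array with 3 floats per point.
    static float distance(float[] ver, int idxA, int idxB) {
        float dx = ver[idxA * 3]     - ver[idxB * 3];
        float dy = ver[idxA * 3 + 1] - ver[idxB * 3 + 1];
        float dz = ver[idxA * 3 + 2] - ver[idxB * 3 + 2];
        return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        // Two toy vertices 1 m apart along x, both 1 m from the camera.
        float[] ver = { 0f, 0f, 1f,   1f, 0f, 1f };
        System.out.println(distance(ver, 0, 1)); // 1.0
    }
}
```

Note that a vertex of (0, 0, 0) means the depth at that pixel was invalid, so such points should be excluded before measuring.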