
get actual distance of 2 points in a picture in android #9466

Closed
dota2mhxy opened this issue Jul 22, 2021 · 11 comments

@dota2mhxy



Required Info: get distance of 2 points in a picture
Camera Model: D435
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Android
Kernel Version (Linux Only): 4.4.2
Platform: Android 8
SDK Version: 2.48.0
Language: Java
Segment: Smartphone

Issue Description

Hello, I want to get the actual distance between 2 points in a picture, in Java on Android. I have seen it done in the C++ rs-measure example, and I also searched a bit myself (see issues #4601 and #4796), where you gave a piece of workable code:

try (FrameReleaser fr = new FrameReleaser()) {
    FrameSet frames = mPipeline.waitForFrames().releaseWith(fr);
    // Post-processing chain that ends in the point cloud filter
    FrameSet processedSet = frames.applyFilter(mDecimationFilter).releaseWith(fr)
            .applyFilter(mTemporalFilter).releaseWith(fr)
            .applyFilter(mSpatialFilter).releaseWith(fr)
            .applyFilter(mPointcloud).releaseWith(fr);
    // Point cloud output (XYZ32F) and the color frame used as its texture
    Frame processed = processedSet.first(StreamType.DEPTH, StreamFormat.XYZ32F).releaseWith(fr);
    Frame texture = processedSet.first(StreamType.COLOR, StreamFormat.RGB8).releaseWith(fr);
    Points points = processed.as(Extension.POINTS);
    float[] ver = points.getVertices();
    float[] texMap = points.getTextureCoordinates();
}

But I don't understand the detailed process. Where is the point cloud data? Is it in the ver and texMap arrays? What is the texture frame used for? Thank you for your help.
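For reference, this is the calculation I am ultimately trying to do with the ver array, assuming it really stores x, y, z triplets in meters (please correct me if that assumption is wrong):

// Distance in meters between points i and j of the point cloud, where ver
// comes from points.getVertices() and holds 3 floats (x, y, z) per point.
static float distanceBetween(float[] ver, int i, int j) {
    float dx = ver[3 * i]     - ver[3 * j];
    float dy = ver[3 * i + 1] - ver[3 * j + 1];
    float dz = ver[3 * i + 2] - ver[3 * j + 2];
    return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
}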

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 22, 2021

Hi @dota2mhxy My understanding from #4796 is that in the Android wrapper the point cloud can be generated by providing color texture from the RGB stream to a point cloud filter (mPointcloud), and that points.get_vertices() retrieves the vertices of the point cloud generated by the filter.

Further information about the workings of points.get_vertices() can be found in the link below.

#1783

@MartyG-RealSense
Collaborator

Hi @dota2mhxy Do you require further assistance with this case, please? Thanks!

@dota2mhxy
Author

Hello, I want to get the 3D coordinates of any 2 points in the picture. Now there are some problems.

[screenshot of my code]

This code runs, but of the three points I pick there are always 1 or 2 whose depth X, Y and Z are all 0, and the color 3D coordinates obtained by conversion become very small, so the 3D coordinates corresponding to those pixels are unusable.

[screenshots of the output]

Then I changed the approach and ran this code, but I don't understand the two variables it returns.
The ver array holds the three-dimensional coordinates x, y, z of each point. But how do I match it to the color stream? Is the texMap array the coordinates in the color stream? If so, why does each point have only two values, just x and y coordinates?
More importantly, I want to get the correspondence between pixels and points. In short, in a bitmap or Mat I can identify a pixel through its X and Y; now I want to obtain the three-dimensional coordinates of that pixel.
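To make the question concrete, this is the mapping I am assuming between a depth pixel and an entry in the ver array: my understanding is that the point cloud vertices are stored in the same row-major order as the depth image, so the index would just be y * width + x (with width being the depth frame's resolution after the decimation filter). Please correct me if this is wrong.

// Assumed mapping: the 3D point behind depth pixel (px, py), given ver from
// points.getVertices() and the width of the (post-decimation) depth frame.
static float[] pointAtDepthPixel(float[] ver, int depthWidth, int px, int py) {
    int idx = (py * depthWidth + px) * 3;
    // x, y, z in meters in the depth camera's coordinate system;
    // all zeros means the camera has no depth data at that pixel.
    return new float[] { ver[idx], ver[idx + 1], ver[idx + 2] };
}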

@dota2mhxy dota2mhxy reopened this Aug 4, 2021
@MartyG-RealSense
Collaborator

Most of your questions are outside of my knowledge of Java unfortunately. The example librealsense Java code highlighted in the link below may provide you with some useful insights though.

https://github.com/bytedeco/javacpp-presets/blob/master/librealsense/src/gen/java/org/bytedeco/librealsense/intrinsics.java#L37-L47

@dota2mhxy
Author

Sorry to bother you again. I ran this code:

[screenshot of my code]

But I only get values in the ver array; the values in the texMap array are all 0.

[screenshot of the output]

I also looked at this code:

[screenshot]

float x = (float)Math.round(textureCoordinates[2 * i] * (float)w);
float y = (float)Math.round(textureCoordinates[2 * i + 1] * (float)h);

I think this x, y determines the position of a pixel, for example a specific pixel at a resolution of 640*480. I don't know if this is correct, but if it is, the texMap array should not be all 0. Is there a problem with my code? Thank you for your reply.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 6, 2021

In the Java point cloud case #4796 that was linked to earlier, Android wrapper developer matkatz suggests at the bottom of his script in #4796 (comment) to "Check createTexture (line 90) in GLPointsFrame for example of how to align the points with the texture".

The GLPointsFrame code reference that matkatz mentioned can be found at the link below.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/android/librealsense/src/main/java/com/intel/realsense/librealsense/GLPointsFrame.java#L90

Have you included this alignment between points and texture in your project please?

@dota2mhxy
Author

Hello, there is a problem.
cfg.enableStream(StreamType.DEPTH, 640, 480);
Is this line of code what sets the width and height resolution of the picture? Can I set it a little larger? When I use 1920 * 1080 there is an error. @MartyG-RealSense

@dota2mhxy
Author

Hello, there is one more question: how do I save and reload the data of the depth stream? I can get a frame of the color stream, convert it to a bitmap, and save it in a picture format, and there is no problem reading that picture back. But I don't understand the depth stream's data.

byte[] depthFrameData2 = new byte[depthFrame.getDataSize()];
depthFrame.getData(depthFrameData2);
With these lines of code I can get the data of one frame of the depth stream. It seems that each pixel has 2 values, but what do these 2 values represent? How can I save the data locally, and how do I read the saved depth file back? (My current guess is sketched at the end of this comment.)
In addition, if I first use the camera to save a frame of the color stream and a frame of the depth stream to local storage, can I read them back, align them and convert them, like in the following code?
Pixel depth_pixel1 = Utils.project2dPixelToDepthPixel(depthFrame, mDepthScale, depthMin, depthMax,
depthFrameIntrinsic, colorFrameIntrinsic, colorToDepthExtrinsic, depthToColorExtrinsic, color_pixel1);
float depthAtMiddleOfFrame1 = depthFrame.getDistance((int) depth_pixel1.mX, (int) depth_pixel1.mY);
Point_3D depth_point1 = Utils.deprojectPixelToPoint(depthFrameIntrinsic, depth_pixel1, depthAtMiddleOfFrame1);
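Here is my current guess, assuming the depth stream is in Z16 format: the 2 values would be the two bytes of a little-endian 16-bit depth value in depth units, and multiplying by the depth scale gives meters. Please correct me if this is wrong.

// Save the raw Z16 buffer (width * height * 2 bytes per frame) so it can be
// read back later, e.g. with java.nio.file.Files.readAllBytes(...).
static void saveRawDepth(byte[] depthFrameData, String path) throws java.io.IOException {
    try (java.io.FileOutputStream out = new java.io.FileOutputStream(path)) {
        out.write(depthFrameData);
    }
}

// Depth at pixel (x, y): the two bytes form a little-endian 16-bit value in
// depth units; multiplying by the depth scale gives meters. 0 means no data.
static float depthAtPixelMeters(byte[] raw, int width, int x, int y, float depthScale) {
    int p = (y * width + x) * 2;
    int depthUnits = (raw[p] & 0xFF) | ((raw[p + 1] & 0xFF) << 8);
    return depthUnits * depthScale;
}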

@MartyG-RealSense
Collaborator

The two numbers - for example, 640, 480 - represent width and height. You can only use resolutions that are supported by the RealSense SDK for a particular camera model, though. Tables containing this information can be found on pages 63-65 of the current edition of the data sheet document for the 400 Series cameras.

https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet

For depth, the maximum supported resolution is 1280x720. This would be represented in code as:

cfg.enableStream(StreamType.DEPTH, 1280, 720);

[screenshot]


In regard to your second question, there are scripting references about exporting a depth frame as a bitmap image from Android at #8551 (comment).

I do not have a reference for aligning a pair of saved bitmap color and depth images on Android in Java though. It is awkward to achieve even in C++ (though potentially you could access librealsense's C++ functions from your Android app using NDK code). Aligning depth and color in real-time and then saving a combined aligned image - like in #7081 - is also complicated.

Information on pixel coordinates is provided in the Projection documentation of the SDK.

https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#section-pixel-coordinates
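As a simpler starting point than full alignment, the depth at a single depth-image pixel can be read with getDistance and then deprojected with the helper from your earlier snippet (the Pixel, Point_3D and Utils names below come from your own code, so their exact shape is an assumption on my part):

// Depth in meters at depth-image pixel (px, py), using the wrapper API.
float depthMeters = depthFrame.getDistance(px, py);

// Deproject the pixel plus its depth into a 3D point using the depth stream's
// intrinsics, reusing the deprojectPixelToPoint helper from your earlier code
// (the Pixel constructor below is assumed to take the pixel's x and y).
Pixel depthPixel = new Pixel(px, py);
Point_3D depthPoint = Utils.deprojectPixelToPoint(depthFrameIntrinsic, depthPixel, depthMeters);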

@dota2mhxy
Author

Hello, I changed the resolution to 1280*720 and a new problem occurred:
[screenshot of the error]

@MartyG-RealSense
Collaborator

The USB port may be having difficulty coping with the increased amount of data from using 1280x720 resolution instead of 640x480. Do you still experience the error if you use 848x480 resolution?
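In code that would just change the resolution arguments of the same enableStream call (assuming the same Config object as in your earlier snippet):

// 848x480 sits between 640x480 and 1280x720 in USB bandwidth on the D435.
cfg.enableStream(StreamType.DEPTH, 848, 480);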
