
how to get point cloud from server by using ethernet? #11027

Open
xuritian317 opened this issue Oct 26, 2022 · 5 comments
xuritian317 commented Oct 26, 2022



Required Info
Camera Model: L515
Firmware Version:
Operating System & Version: Host PC: Ubuntu Mate 20.04; Server: Ubuntu Server 20.04
Kernel Version (Linux Only): 5.8
Platform: Raspberry Pi 4
SDK Version: 2.51.1
Language: python
Segment:

Issue Description

I want to generate a point cloud on the host PC from depth data sent over Ethernet. Here is what I have done so far.
I succeeded in creating a server and client as described here: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/ethernet_client_server. The EtherSenseClient (host PC, Ubuntu Mate 20.04) can receive depth data from the EtherSenseServer (Raspberry Pi 4, Ubuntu Server 20.04) and display the image on the host PC.
However, in that example the depth frame (depth_frame = frames.get_depth_frame()) is converted to a numpy array and serialized with pickle.dumps(depth). The bytes are sent to the client, which calls pickle.loads(self.buffer) to recover the numpy array and displays it with cv2.imshow. So my question is: if I want a point cloud, how can I get a depth frame back from the numpy array that was sent, or obtain the point cloud in some other way?
I saw some issues about this, like #6535 and #5296, but did not find a solution in them. Can you help me?
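To make the setup concrete, the serialization round trip I described can be sketched like this (a minimal standalone sketch: the camera capture and asyncore socket code from the EtherSense example are omitted, and the depth array here is made up):

```python
import pickle

import numpy as np

# Made-up stand-in for np.asanyarray(depth_frame.get_data()) in the
# EtherSense server; real L515 depth frames are uint16 in millimetres.
depth_array = np.zeros((480, 640), dtype=np.uint16)
depth_array[240, 320] = 1234  # pretend one valid depth value

# Server side: serialize the numpy array before writing it to the socket.
payload = pickle.dumps(depth_array)

# Client side: deserialize the received bytes back into a numpy array.
received = pickle.loads(payload)

print(received.shape, received.dtype, received[240, 320])
```

The round trip preserves the raw depth values, but the result is a plain numpy array, not an rs2 frame, which is exactly where my question starts.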

@xuritian317 xuritian317 changed the title how to get point cloud from server using ethernet? how to get point cloud from server by using ethernet? Oct 26, 2022
MartyG-RealSense commented Oct 26, 2022

Hi @xuritian317 It sounds as though you wish to convert a numpy (np) array back to the RealSense SDK's rs2::frame format so that the depth data can be converted into a pointcloud using RealSense SDK instructions. Is that correct, please?

If this is your goal then although other RealSense Python users have attempted to do so, it remains an unsolved problem as far as I am aware, although converting cv::mat to rs2::frame may be possible. More information about this can be found at #10770 (comment) and the further reading links at the bottom of that page.

The EtherSense system is quite old now and a newer RealSense networking system that works well with Raspberry Pi 4 was subsequently introduced. It enables a camera to be treated as a 'network camera' over an ethernet connection.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras

xuritian317 commented Oct 26, 2022

Thanks for your quick reply.
I have tested the newer RealSense networking system and it works with realsense-viewer. But it cannot export a point cloud file the way Intel.RealSense.Viewer.exe from the release assets can, so I want to export the point cloud file from code.
The reason I used the EtherSense system instead is that running net_viewer.py reported "no module named 'pyrealsense2_net'", and I could not solve that with #9946 (comment). Besides, I did not want to switch to C++ code.
So, what should I do next?

@MartyG-RealSense

How about calculating the pointcloud on the Pi using the pc.calculate and map_to instructions like in the Python example program opencv_pointcloud_viewer.py and then generating the cv image and sending it to the host to be displayed on the host with imshow? Would that be feasible?

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_pointcloud_viewer.py#L305-L306
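As a fallback, if computing the cloud on the host from the received numpy array turns out to be necessary, the depth image can also be deprojected to 3D points with plain numpy using the pinhole camera model, which is the same math that rs2_deproject_pixel_to_point applies per pixel. A rough sketch (the intrinsic values below are hypothetical placeholders for the real ones from the depth stream profile, and the depth image is made up):

```python
import numpy as np


def deproject_depth(depth_mm: np.ndarray, fx: float, fy: float,
                    ppx: float, ppy: float,
                    depth_scale: float = 0.001) -> np.ndarray:
    """Convert a depth image (uint16, millimetres) into an (N, 3) array of
    3D points in metres using a simple pinhole model (distortion ignored)."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) * depth_scale
    x = (u - ppx) * z / fx
    y = (v - ppy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading


# Hypothetical intrinsics -- on the camera side, read the real values from
# the stream profile's get_intrinsics() and send them once alongside the data.
depth = np.zeros((480, 640), dtype=np.uint16)
depth[240, 320] = 1000  # one pixel at 1 m
pts = deproject_depth(depth, fx=600.0, fy=600.0, ppx=320.0, ppy=240.0)
print(pts)  # one point near (0, 0, 1)
```

The intrinsics only need to be transmitted once per session, so this keeps the per-frame payload unchanged.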

xuritian317 commented Oct 27, 2022

Thanks. I studied opencv_pointcloud_viewer.py and noticed that the points can be displayed after converting points.get_vertices() to a numpy array. But I want to save a point cloud file, such as *.ply or *.pcd, and in opencv_pointcloud_viewer.py the export still requires the original depth frame: points.export_to_ply('./out.ply', mapped_frame). So, what should I do next?
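For context, this is the kind of thing I am hoping to end up with: writing vertices that already exist as a numpy array straight into an ASCII .ply file, with no rs2 depth frame needed. A sketch with a made-up two-point cloud (the helper function is my own, not part of the SDK):

```python
import numpy as np


def write_ascii_ply(path: str, points: np.ndarray) -> None:
    """Write an (N, 3) float array of XYZ points to an ASCII .ply file.
    A minimal stand-in for points.export_to_ply() that only needs the
    numpy vertices, not the original rs2 depth frame."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")


# Tiny made-up point set standing in for the converted points.get_vertices()
verts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]], dtype=np.float32)
write_ascii_ply("out.ply", verts)
```

If something like this is sound, the open question is only whether the vertices computed on the Pi can be trusted without the mapped texture frame.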

MartyG-RealSense commented Oct 27, 2022

I researched your question further but did not find many possibilities for a potential solution, unfortunately.

Instead of using the SDK's default rs2::frame frameset structure, it may be worth trying to define your own custom frameset using the SDK's custom processing block system. An example of a Python script for defining a custom frameset to use instead of rs2::frame is at #5847 (comment)

The ideal strategy would likely be to use the SDK's software_device() interface to perform the data conversion, though it is known to have issues on Python compared to its use in C++. An example of such a case of problems with software_device in Python is at the link below.

https://support.intelrealsense.com/hc/en-us/community/posts/1500000934242/comments/1500000819702
