How to get a point cloud from a server over ethernet? #11027
Comments
Hi @xuritian317 It sounds as though you wish to convert a numpy (np) array back to the RealSense SDK's rs2::frame format so that the depth data can be converted into a pointcloud using RealSense SDK instructions. Is that correct, please? If this is your goal then, although other RealSense Python users have attempted it, it remains an unsolved problem as far as I am aware; converting cv::mat to rs2::frame may be possible, though. More information about this can be found at #10770 (comment) and the further reading links at the bottom of that page. The EtherSense system is quite old now, and a newer RealSense networking system that works well with Raspberry Pi 4 was subsequently introduced. It enables a camera to be treated as a 'network camera' over an ethernet connection.
Thanks for your quick reply.
How about calculating the pointcloud on the Pi using the pc.calculate and map_to instructions, as in the Python example program opencv_pointcloud_viewer.py, and then generating the cv image and sending it to the host to be displayed there with imshow? Would that be feasible?
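For illustration, here is a rough sketch of that approach on the Pi side, using the same pc.calculate / map_to calls as opencv_pointcloud_viewer.py. The stream resolutions and formats are assumptions, and the network send step is left out:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

pc = rs.pointcloud()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

# Map the pointcloud texture to the color frame, then calculate it from depth.
pc.map_to(color_frame)
points = pc.calculate(depth_frame)

# Vertices as an (N, 3) float32 array of XYZ coordinates in metres, and
# texture coordinates as an (N, 2) float32 array; these plain numpy arrays
# could be serialized and sent to the host like any other data.
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)
```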
Thanks. I focused on …
I researched your question further but did not find many possibilities for a potential solution, unfortunately. Instead of using the SDK's default rs2::frame frameset structure, it may be worth trying to define your own custom frameset using the SDK's custom processing block system. An example of a Python script for defining a custom frameset to use instead of rs2::frame is at #5847 (comment). The ideal strategy would likely be to use the SDK's software_device() interface to perform the data conversion, though it is known to have issues in Python compared to C++. An example of such problems with software_device in Python is at the link below. https://support.intelrealsense.com/hc/en-us/community/posts/1500000934242/comments/1500000819702
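For reference, a rough, untested sketch of the software_device() route in Python, assuming the bindings mirror the SDK's C++ software-device example; the resolution, intrinsics, and depth units below are placeholder assumptions, and the pixels assignment is precisely the step Python users have reported trouble with:

```python
import numpy as np
import pyrealsense2 as rs

WIDTH, HEIGHT, BPP = 640, 480, 2  # Z16 depth is 2 bytes per pixel

# Describe the synthetic depth stream (intrinsics here are placeholders).
intr = rs.intrinsics()
intr.width, intr.height = WIDTH, HEIGHT
intr.ppx, intr.ppy = WIDTH / 2, HEIGHT / 2
intr.fx = intr.fy = 600.0
intr.model = rs.distortion.brown_conrady
intr.coeffs = [0.0] * 5

vs = rs.video_stream()
vs.type = rs.stream.depth
vs.index = 0
vs.uid = 0
vs.width, vs.height, vs.fps, vs.bpp = WIDTH, HEIGHT, 30, BPP
vs.fmt = rs.format.z16
vs.intrinsics = intr

dev = rs.software_device()
sensor = dev.add_sensor("Depth")
profile = sensor.add_video_stream(vs)

queue = rs.frame_queue()
sensor.open(profile)
sensor.start(queue)

# Inject a numpy depth image (e.g. one received over the network).
depth_np = np.zeros((HEIGHT, WIDTH), dtype=np.uint16)
sw_frame = rs.software_video_frame()
sw_frame.pixels = depth_np  # the reported pain point; may need a flat buffer instead
sw_frame.stride = WIDTH * BPP
sw_frame.bpp = BPP
sw_frame.timestamp = 0.0
sw_frame.domain = rs.timestamp_domain.hardware_clock
sw_frame.frame_number = 0
sw_frame.profile = profile.as_video_stream_profile()
sensor.on_video_frame(sw_frame)

frame = queue.wait_for_frame()  # an SDK frame that rs.pointcloud() could consume
```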
Issue Description
I want to generate a point cloud on the host PC from data sent over ethernet. I have done some work toward this.
I succeeded in creating a server and client as described on this page: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/ethernet_client_server. The EtherSenseClient (host PC, Ubuntu MATE 20.04) can get depth data from the EtherSenseServer (Raspberry Pi 4, Ubuntu Server 20.04) and show the image on the host PC.
But in the above example, the depth frame (`depth_frame = frames.get_depth_frame()`) needs to be translated to a numpy array and serialized with `pickle.dumps(depth)`. It is then sent to the client, where `pickle.loads(self.buffer)` recovers the numpy array, which is shown with `cv2.imshow`. So the question is: if I want to get a point cloud, how can I get a depth frame back from the numpy array that was sent, or get a point cloud in some other way? I saw some issues about this, like #6535 and #5296, but I did not find a way to solve it. So can you help me?
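For context, a minimal sketch of the round trip described above, assuming the asyncore socket plumbing from the EtherSense example is already in place (it is omitted here) and that `payload` stands in for the bytes actually moved over the wire:

```python
import pickle
import numpy as np
import cv2
import pyrealsense2 as rs

# --- server side (Raspberry Pi) ---
pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Convert the SDK frame to a numpy array and serialize it for the socket.
depth_np = np.asanyarray(depth_frame.get_data())  # uint16 depth, sensor units
payload = pickle.dumps(depth_np)                  # bytes sent over the network

# --- client side (host PC) ---
received = pickle.loads(payload)                  # a plain numpy array again
# Scale and colorize for display; raw Z16 values are too dark for imshow.
colorized = cv2.applyColorMap(
    cv2.convertScaleAbs(received, alpha=0.03), cv2.COLORMAP_JET)
cv2.imshow("depth", colorized)
cv2.waitKey(1)
```

The same pickling approach would also work for the (N, 3) vertex array produced by pc.calculate on the Pi, which would sidestep converting anything back to an SDK frame on the host.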