ros_openpose with ZED2 camera #9
Hi @ruksen25 Thank you very much.
Perfect, this is what I would suggest in the first place.
The visualization is done by a Python script. A similar question was asked previously; please go through it, as you may find the discussion at the mentioned URL very useful. PS: If you can share a screenshot of the RViz window, we can discuss it better. You can email me the screenshot if privacy is a concern! Let me know if the problem persists.
@ravijo If I may, I have a few questions about the input you use from the RealSense. Thank you for taking the time; I appreciate your help.
Hi @ruksen25 Thank you very much for the information. The following information is required from the camera:
The trick here is that the depth images are aligned to the color images, and the depth and color images have the same dimensions. In this way, each pixel in the color image can be mapped to the corresponding depth pixel at the same (x, y) location.
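To illustrate the alignment trick described above, here is a minimal sketch: because the depth image is registered to the color image and both share the same resolution, looking up the depth for a color pixel is a direct index, and the pixel can then be back-projected into a 3-D camera-frame point. The intrinsics (`fx`, `fy`, `cx`, `cy`) and image size below are made-up illustrative values, not the actual camera's calibration.

```python
import numpy as np

def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in metres into a 3-D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Synthetic 16-bit depth image in millimetres, same size as the color image.
depth = np.full((480, 640), 1500, dtype=np.uint16)

# Aligned images: depth for color pixel (u, v) is a direct look-up at (v, u).
u, v = 320, 240
depth_m = depth[v, u] / 1000.0  # mm -> m
point = pixel_to_point(u, v, depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(point)  # (0.0, 0.0, 1.5)
```

In a real ROS node the intrinsics would come from the camera's `camera_info` topic rather than hard-coded constants.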
It is in mm. You can see here I am converting mm into m (SI Units).
16 Bits. See here
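As a small sketch of the mm-to-m conversion mentioned above: a 16UC1 depth image stores each pixel as an unsigned 16-bit integer in millimetres, so converting to SI units is a single division. The sample values are synthetic.

```python
import numpy as np

# 16UC1 depth image: unsigned 16-bit integers in millimetres
# (0-65535, i.e. up to ~65.5 m; 0 conventionally means "no data").
depth_mm = np.array([[0, 850], [1200, 65535]], dtype=np.uint16)

# Convert to metres as a float image; invalid (zero) pixels stay 0.
depth_m = depth_mm.astype(np.float32) / 1000.0
print(depth_m[0, 1])  # 0.85
```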
I don't remember right now; however, you can find this information online.
No, it is not required. Let me know the progress, please.
Thank you very much for your help. I think I have it running now. It turns out I had to change a parameter in the ROS ZED camera node to get depth in mm. Again, thank you for your time, and thank you for making this wrapper available.
Hi @ruksen25 Thank you very much. I am glad you made it work! Since I have not used a ZED camera, I wasn't aware of its parameters. I am sorry for the inconvenience. Depth images are mostly stored in 16 bits (Kinect also follows the same convention).
After receiving this information, I will update the repository. Looking forward to hearing from you.
No worries! I did not expect you to know about this problem. Since I had already tried making these changes, I thought this was not the issue and that the problem was somewhere else in the interfacing with your code. I knew that my questions were a bit out of your scope, but I thought you might have some input that could hint me towards the solution. I very much appreciate your efforts in helping me find a solution. I had already been thinking about making my ZED2 solution available and thought of creating a Git repository of my own with the modified files and instructions, but maybe it is nicer to have everything together in your repository. Maybe I can write a section describing the use of the ZED camera and send it to you along with the files I have created (modified from yours), which you can then add to the repository?
Hi @ruksen25
Thank you very much.
Perfect. I agree!
Hi @ruksen25, I changed openni_depth_mode to true. While testing with the Intel camera, it seems to be fine, whereas testing the same with the ZED2, I am having trouble at the moment.
Thanks
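For reference, the setting mentioned above lives in the ZED ROS wrapper's configuration; a sketch of what the change might look like follows. The exact file name and nesting are assumptions and may differ between wrapper versions.

```yaml
# common.yaml (zed-ros-wrapper) -- illustrative; check your wrapper version
depth:
    openni_depth_mode: true   # publish depth as 16UC1 in millimetres (OpenNI convention)
```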
Hope it helps!
Hi @ravijo, Thanks
Has your issue been resolved? Otherwise, I have a suspicion that the problem is that Stereolabs have changed the name of the depth topic published when OpenNI mode is used. Try changing the depth_topic parameter in the config_zed2.launch file so that it points to the depth_registered topic.
If this tip solves your issue, please let us know; then @ravijo can update the launch file. Best regards
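A sketch of what such a change might look like in config_zed2.launch follows. The parameter name comes from the thread, but the exact topic paths are assumptions; check the topic names your wrapper version actually publishes (e.g. with `rostopic list`).

```xml
<!-- Before (illustrative only): -->
<arg name="depth_topic" default="/zed2/zed_node/depth/depth" />

<!-- After: point at the registered depth topic -->
<arg name="depth_topic" default="/zed2/zed_node/depth/depth_registered" />
```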
Hi ravijo
I am working on getting this package working with my ZED2 camera.
I have copied the config_realsense.launch file and tried to adapt it to my camera.
I also similarly copied and modified the rviz file.
Currently, I seem to be able to obtain a skeleton, and I can visualise it in RViz. However, the visualisation appears off. I can visualise my point cloud just fine, but of course that has not been through any manipulation in your wrapper; it is mainly the skeleton that is visualised incorrectly.
Its scale does not match the point cloud (or vice versa), and in some cases I even seem to get a deformed skeleton.
My question to you is whether there are parameters or variables in any part of your wrapper that you think might be associated with the problem. Or, asked another way: can you outline the parameters/variables in your code that I need to look at and adapt in order to make it work with a different depth camera?
I greatly appreciate any input you have.