When simulating a dynamically moving Kinect camera, the colors in the depth point clouds do not match the camera image: a random part of the image is mapped to the point cloud.
This is because the driver does not actually check that the sensor image and depth image have matching timestamps. This is reproducible across Ubuntu 14.04, Ubuntu 16.04, and OS X 10.11.2, with Gazebo 2 through 7.4, on both Indigo and Kinetic.
I instrumented the depth and sensor image callbacks to echo the nanosecond part of their timestamps when they are called.
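The same ordering can also be observed from outside the plugin. Below is a minimal roscpp sketch that subscribes to the two image topics and echoes the nanosecond part of each header stamp; the topic names are assumptions and depend on the camera namespace in your URDF/SDF:

```cpp
// Hypothetical standalone checker: prints the nanosecond part of the header
// stamp of each incoming RGB and depth image, so the arrival order and any
// stamp mismatch can be seen on the console.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void rgbCallback(const sensor_msgs::ImageConstPtr &msg)
{
  ROS_INFO("rgb   stamp nsec: %u", msg->header.stamp.nsec);
}

void depthCallback(const sensor_msgs::ImageConstPtr &msg)
{
  ROS_INFO("depth stamp nsec: %u", msg->header.stamp.nsec);
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "kinect_stamp_checker");
  ros::NodeHandle nh;
  // Topic names are placeholders; remap or edit them to match the simulated camera.
  ros::Subscriber rgb_sub   = nh.subscribe("camera/rgb/image_raw", 10, rgbCallback);
  ros::Subscriber depth_sub = nh.subscribe("camera/depth/image_raw", 10, depthCallback);
  ros::spin();
  return 0;
}
```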
As can be seen from those timestamps, the depth callback always arrives first, so the point cloud is always matched to the previous RGB/grayscale image.
I hacked together a quick fix that stores a pointer to the latest depth image and only calls FillPointCloud when the timestamps of the two match, here and again in the sensor callback:
https://github.com/ros-simulation/gazebo_ros_pkgs/blob/indigo-devel/gazebo_plugins/src/gazebo_ros_openni_kinect.cpp#L204-L211
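For illustration only, here is a self-contained sketch of that store-and-match idea. `Frame`, `DepthRgbMatcher`, and `fillPointCloud` are placeholder names, not the plugin's actual symbols, and the real change would also have to respect the plugin's existing locking and state handling:

```cpp
// Sketch of "remember the newest depth and RGB frames, build the coloured
// cloud only when their stamps match". All types and names are placeholders.
#include <cstdint>
#include <memory>

struct Frame
{
  uint64_t stamp_ns;  // capture time in nanoseconds
  // ... pixel data would live here ...
};

class DepthRgbMatcher
{
public:
  void onDepthFrame(const std::shared_ptr<Frame> &depth)
  {
    last_depth_ = depth;  // remember the newest depth frame
    tryPublish();
  }

  void onRgbFrame(const std::shared_ptr<Frame> &rgb)
  {
    last_rgb_ = rgb;      // remember the newest RGB frame
    tryPublish();
  }

private:
  void tryPublish()
  {
    // Only combine the two frames when both exist and carry the same stamp,
    // i.e. they come from the same simulation step.
    if (last_depth_ && last_rgb_ && last_depth_->stamp_ns == last_rgb_->stamp_ns)
      fillPointCloud(*last_depth_, *last_rgb_);
  }

  void fillPointCloud(const Frame &depth, const Frame &rgb)
  {
    // Placeholder standing in for the plugin's FillPointCloud():
    // colour each depth point from the matching RGB frame.
    (void)depth;
    (void)rgb;
  }

  std::shared_ptr<Frame> last_depth_;
  std::shared_ptr<Frame> last_rgb_;
};
```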
It is an ugly hack, but it fixes the issue. I can upload my code to a branch if anyone is curious.