Fix transform issue in point cloud display plugin #1652
Conversation
Closing in favor of #1655, which already targets the melodic branch. Thanks for this patch!
I tried to replicate your reported issue, but couldn't. This video shows the behavior before your changes, with a scan published at 1 Hz relative to the robot and visualized in a fixed world frame:

rviz-bug_1hz_before-2021-09-16_13.25.08.mp4

The cloud stays in place as expected, without any jittering. After your changes, the behavior looks like this:

rviz-bug_1hz_after-2021-09-16_13.27.12.mp4

Your PR introduces an erroneous transformation, leading to this behavior. I suspect you have chosen a wrong fixed frame, a wrong frame id in your scan message, or there is something weird in your tf tree.
We are very sorry for the late reply. Unfortunately, the notifications of the last few weeks somehow did not arrive.

Even though your second video looks much worse with the new commit, we believe that this is the correct visualization. The point cloud is visualized fixed w.r.t. the LiDAR sensor, so it has to move in the "world" frame, because the robot carrying the sensor is moving. If you want it to look nice, as before the commit, you have to transform the point cloud from the sensor frame into the "world" frame as soon as you receive it. Then it no longer moves with the robot and remains world-fixed. You have to do this anyway with moving sensors if you want to relate point clouds with different time stamps in your perception algorithm, e.g. when creating an occupancy grid. With our proposed change, your visualization looks correct no matter whether you attach the camera to "world" or to "base_link" in your rviz "Global Options".

It is easy to transform a PointCloud2 message to a new frame: simply use tf2::convert (or do_transform_cloud if you are using Python). Please let us know whether this fixes your problem.
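To illustrate what "transform the cloud on receipt" means, here is a minimal numpy sketch of the per-point math that tf2's cloud transform performs (a rigid-body rotation plus translation). The pose values are made up for the example; a real node would look the transform up from tf at the message's timestamp.

```python
import numpy as np

def transform_points(points, rotation, translation):
    """Apply a rigid-body transform (3x3 rotation matrix, 3-vector
    translation) to an (N, 3) array of points, mapping sensor-frame
    coordinates into the world frame."""
    points = np.asarray(points, dtype=float)
    return points @ np.asarray(rotation).T + np.asarray(translation)

# Hypothetical pose: the sensor sits at (2, 0, 0) in the world,
# rotated 90 degrees about the z axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, 0.0, 0.0])

cloud_sensor = np.array([[1.0, 0.0, 0.0]])   # one point, 1 m ahead of the sensor
cloud_world = transform_points(cloud_sensor, R, t)
print(cloud_world)  # the point lands at (2, 1, 0) in the world frame
```

Once the cloud is re-published in the world frame, rviz no longer needs a moving transform to place it, which is the workaround suggested above.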
I don't agree. It's the purpose of rviz to handle this kind of transformation. |
Unfortunately, rviz handles these transforms correctly only if you happen to choose the right frame in "Global Options". If someone wants to visualize another frame, the transforms are completely wrong and there is no way to fix it without changing the rviz source code.
@Martin-Oehler Depending on the (angular) velocities of your robot and the integration time of your LiDAR sensor (the time of a full rotation), you should consider compensating for ego motion; otherwise you will see strange artifacts in your point clouds. To do this, you transform each point (or a block of points, if you want to save CPU cycles) into a world-fixed frame based on that point's timestamp. The point cloud is then also in your "world" frame, and the visualization probably looks perfect. Some LiDAR drivers are able to compensate for ego motion, e.g.: https://github.com/ros-drivers/velodyne
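The per-point compensation described above can be sketched in a few lines. This is a simplified, hypothetical example (planar motion, linear pose interpolation) rather than what any particular driver does: each return is mapped into the world frame using the sensor pose interpolated at that return's timestamp, so returns from the same landmark collapse onto the same world point even though the sensor moved during the sweep.

```python
import numpy as np

def deskew_cloud(points, point_times, pose_times, poses):
    """Ego-motion compensation sketch: map each 2-D sensor-frame point
    into a world-fixed frame using the sensor pose (x, y, yaw)
    linearly interpolated at that point's timestamp."""
    x = np.interp(point_times, pose_times, poses[:, 0])
    y = np.interp(point_times, pose_times, poses[:, 1])
    yaw = np.interp(point_times, pose_times, poses[:, 2])
    c, s = np.cos(yaw), np.sin(yaw)
    wx = c * points[:, 0] - s * points[:, 1] + x
    wy = s * points[:, 0] + c * points[:, 1] + y
    return np.stack([wx, wy], axis=1)

# Hypothetical motion: the sensor moves 1 m along x during a 0.1 s sweep.
pose_times = np.array([0.0, 0.1])
poses = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])

# Two returns from the same fixed landmark at world (5, 0), one at the
# start and one at the end of the sweep (ranges differ because the
# sensor moved between the two measurements):
points = np.array([[5.0, 0.0],    # measured at t = 0.0, sensor at x = 0
                   [4.0, 0.0]])   # measured at t = 0.1, sensor at x = 1
point_times = np.array([0.0, 0.1])
print(deskew_cloud(points, point_times, pose_times, poses))
# Both returns collapse onto the same world point (5, 0).
```

Without this correction, the two returns would land 1 m apart, which is the kind of smearing artifact described above.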
rviz transforms a point cloud (or any other data to visualize) into the selected global frame given the TF transform of the corresponding timestamp. I don't see what is wrong with this approach. |
@rhaschke Sorry, I was quite busy during the last months, but I would like to bring the attention back to this issue:
You can clearly see the problem when you watch the synthetic example in the first video. In many scenarios it is important to transform the point cloud into the fixed frame continuously, not only when the message arrives. I still think that this is actually the right behavior. But I understand that this pull request breaks the visualization for many people who developed their visualization based on the old implementation. I found a flag
@Martin-Oehler Can you please help me understand why your visualization is broken with this pull request?
Thank you very much.
The point cloud is published in the frame of the lidar scanner.
No, the point clouds are correctly timestamped.
I haven't tested this, but since the cloud is published relative to the robot, setting the fixed frame to the robot as well will result in no transform at all, so the behavior should not have changed with your patch.
This is similar to #1631.
It is important to continuously update the point cloud's transform, not only when a new message comes in, especially in cases where the tf changes more frequently than the point cloud itself. We created a synthetic example under `src/test/point_cloud_test.py`. There is also an rviz config to generate the output in the videos below.

Current Implementation:
pc-2021-08-05_14.45.06.mp4
New Implementation:
pc-2021-08-05_14.46.59.mp4
In this synthetic example, the point cloud is published just once a second while the tf is changing faster (30 Hz). The new commit ensures that the transforms w.r.t. the fixed frame are regularly applied.
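The difference between the two implementations can be mimicked with a small simulation, independent of ROS. This is a hedged toy model (a made-up circular sensor trajectory, a point fixed at the sensor origin): the "old" rendering looks the sensor-to-world transform up once, at message arrival, while the "new" rendering re-applies it at every render frame, so only the latter tracks the moving sensor between messages.

```python
import numpy as np

# A point fixed in the (moving) sensor frame, at the sensor origin.
point_sensor = np.array([0.0, 0.0])

def sensor_pose(t):
    """Hypothetical sensor trajectory: circling the world origin at 1 rad/s."""
    return np.array([np.cos(t), np.sin(t)])

frame_times = np.arange(0.0, 1.0, 1.0 / 30.0)   # render frames at 30 Hz
msg_time = 0.0                                  # the cloud arrived once, at t = 0

# Old behavior: the transform is looked up only at message arrival,
# so the rendered point stays frozen at the t = 0 pose for a full second.
old = np.array([sensor_pose(msg_time) + point_sensor for t in frame_times])

# New behavior: the transform is re-applied every render frame,
# so the rendered point follows the sensor as tf updates at 30 Hz.
new = np.array([sensor_pose(t) + point_sensor for t in frame_times])

drift = np.linalg.norm(old - new, axis=1)
print(drift.max())   # the stale rendering drifts away from the true pose
```

The two renderings agree at the message's timestamp and diverge afterwards, which is exactly what the synthetic videos show.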
Current Implementation (real world example):
pc_real-2021-08-05_15.29.18.mp4
New Implementation (real world example):
pc_real-2021-08-05_15.33.03.mp4