
Fix transform issue in point cloud display plugin #1652

Closed

Conversation

@UniBwTAS (Contributor) commented Aug 5, 2021

This is similar to #1631.
It is important to update the point cloud's transform continuously, not only when a new message arrives, especially in cases where the tf changes more frequently than the point cloud itself. We created a synthetic example under src/test/point_cloud_test.py. There is also an rviz config to generate the output in the videos below:

Current Implementation:

pc-2021-08-05_14.45.06.mp4

New Implementation:

pc-2021-08-05_14.46.59.mp4

In this synthetic example the point cloud is published just once a second, while the tf changes faster (30 Hz). The new commit ensures that the transform w.r.t. the fixed frame is applied regularly.
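The effect can be sketched without ROS at all. The following minimal Python example (the rotation rate, timestamps, and sample point are invented for illustration; this is not the rviz code itself) contrasts the old behavior, which caches the sensor-to-fixed-frame transform when a message arrives, with the new behavior, which re-applies the latest transform at every display update:

```python
import math

def sensor_to_fixed(t):
    """Hypothetical tf: the sensor frame rotates about the fixed frame's z axis."""
    return 0.5 * t  # rotation angle in rad at time t

def apply(angle, p):
    """Rotate a 2D point by the given angle (a stand-in for a full tf transform)."""
    x, y = p
    return (math.cos(angle) * x - math.sin(angle) * y,
            math.sin(angle) * x + math.cos(angle) * y)

point_in_sensor = (1.0, 0.0)
t_msg = 0.0      # cloud published once, at t = 0 (1 Hz cloud)
t_render = 0.5   # a display update half a second later (tf updated at 30 Hz)

# Old behavior: transform computed once, when the message arrived.
stale = apply(sensor_to_fixed(t_msg), point_in_sensor)
# New behavior: latest transform re-applied at every display update.
fresh = apply(sensor_to_fixed(t_render), point_in_sensor)

print(stale)  # (1.0, 0.0): still the pose from message arrival
print(fresh)  # follows the sensor's current pose
```

At t = 0.5 s the cached result still reflects the pose from t = 0; that stale rendering persists until the next message arrives a full second later, which is what the first video shows.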

Current Implementation (real world example):

pc_real-2021-08-05_15.29.18.mp4

New Implementation (real world example):

pc_real-2021-08-05_15.33.03.mp4

@rhaschke (Contributor)

Closing in favor of #1655, which already targets the melodic branch. Thanks for this patch!

@rhaschke closed this Aug 20, 2021
@Martin-Oehler

I tried to replicate the reported issue, but couldn't. This video shows the behavior before your changes, with a scan published at 1 Hz relative to the robot and visualized in a fixed world frame:

rviz-bug_1hz_before-2021-09-16_13.25.08.mp4

The cloud stays in place as expected without any jittering.

Now, after your changes, the behavior looks like this:

rviz-bug_1hz_after-2021-09-16_13.27.12.mp4

Your PR introduced an erroneous transformation, leading to this behavior.

I suspect you have chosen a wrong fixed frame, a wrong frame id in your scan message, or something weird in your tf tree.

@UniBwTAS (Contributor, Author) commented Sep 27, 2021

@Martin-Oehler

We are very sorry for the late reply. Unfortunately, the notifications of the last few weeks somehow did not arrive...

Even though your lower video sequence looks much worse with the new commit, we believe that this is the correct visualization. The point cloud is visualized fixed w.r.t. the LiDAR sensor, so the point cloud has to move in the "world" frame because the robot carrying the sensor is moving.

If you want it to look nice, as before the commit, you have to transform the point cloud from the sensor frame into the "world" frame as soon as you receive it. Then it does not move with the robot and remains world-fixed. You have to do this anyway with moving sensors if you want to relate point clouds with different time stamps in your perception algorithm, e.g. when creating an occupancy grid. With our proposed change, your visualization looks correct no matter whether you attach the camera to "world" or to "base_link" in the rviz "Global Options".

It is easy to transform a PointCloud2 message to a new frame: simply use tf2::convert (or do_transform_cloud if you are using Python).
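For readers who want to see what such a conversion amounts to numerically: do_transform_cloud essentially applies one rigid-body transform to every point in the cloud. A minimal numpy sketch of that underlying operation (the transform values here are made up, and this is not the tf2 implementation itself):

```python
import numpy as np

def transform_points(points_xyz, T_target_source):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])   # (N, 4) homogeneous coords
    return (T_target_source @ homog.T).T[:, :3]

# Example: sensor frame sits 2 m ahead of the world origin, rotated 90 deg about z.
T = np.array([[0.0, -1.0, 0.0, 2.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])
pts = np.array([[1.0, 0.0, 0.0]])   # one point, 1 m ahead of the sensor
print(transform_points(pts, T))     # [[2. 1. 0.]]
```

In practice you would obtain the transform from the tf buffer at the cloud's timestamp instead of hard-coding it.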

Please let us know whether this fixes your problem.

@rhaschke (Contributor)

I don't agree. It's the purpose of rviz to handle this kind of transformation.

@AndreasR30 (Contributor) commented Sep 27, 2021

Unfortunately, Rviz handles these transforms correctly only if you coincidentally choose the right frame in "Global Options". If someone would like to visualize another frame, the transforms are completely wrong and there is no way to fix it (without changing the rviz source code).

@AndreasR30 (Contributor)

@Martin-Oehler Depending on the (angular) velocities of your robot and the integration time of your lidar sensor (the time of a full rotation), you should consider compensating for ego motion. Otherwise you get strange artifacts in your point clouds. Here you transform each point (or a block of points, if you want to save CPU cycles) into a world-fixed frame based on the point's timestamp.

Then the point cloud would also be in your "world" frame, and the visualization probably looks perfect. Some LiDAR drivers are able to compensate for ego motion, e.g.: https://github.com/ros-drivers/velodyne
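As a rough illustration of the idea (not the velodyne driver's actual code), ego-motion compensation boils down to looking up the sensor pose at each point's own timestamp and transforming that point individually. A toy numpy sketch with an invented straight-line motion model:

```python
import numpy as np

def ego_motion_compensate(points, stamps, pose_at):
    """De-skew a sweep: transform every point into the fixed frame using
    the sensor pose at that point's own timestamp."""
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, stamps)):
        R, trans = pose_at(t)      # sensor -> world pose at time t
        out[i] = R @ p + trans
    return out

# Hypothetical ego motion: the sensor drives along x at 1 m/s, no rotation.
def pose_at(t):
    return np.eye(3), np.array([1.0 * t, 0.0, 0.0])

# Two points measured 0.05 s apart during one sweep, both at the sensor origin.
pts = np.zeros((2, 3))
stamps = np.array([0.0, 0.05])
print(ego_motion_compensate(pts, stamps, pose_at))
# first point lands at x = 0.0, second at x = 0.05: the skew is removed
```

A real implementation would interpolate poses from tf (including rotation, e.g. via slerp) and typically process blocks of points rather than single points for speed.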

@rhaschke (Contributor)

> Unfortunately, Rviz handles these transforms correctly only if you coincidentally choose the right frame in "Global Options".

rviz transforms a point cloud (or any other data to visualize) into the selected global frame given the TF transform of the corresponding timestamp. I don't see what is wrong with this approach.

@AndreasR30 (Contributor)

@rhaschke Sorry, I was quite busy over the last months, but I would like to bring attention back to this issue:

> rviz transforms a point cloud (or any other data to visualize) into the selected global frame given the TF transform of the corresponding timestamp. I don't see what is wrong with this approach.

You can clearly see the problem when you watch the synthetic example in the first video. In many scenarios it is important to transform the point cloud into the fixed frame continuously, not only when the message arrives. I still think that this is actually the right behavior, but I understand that this pull request breaks the visualization for many people who developed their visualization based on the old implementation.

I found a flag frame_locked in the Marker message, which activates continuous transformation for markers, analogous to this pull request. Since it makes no sense to change the PointCloud2 message to add this flag, I would suggest adding an option to the PointCloud2 plugin (and maybe other plugins, like the Odometry plugin) that enables continuous transformation. By default it would stick to the old behavior. What would you think about that?

@AndreasR30 (Contributor)

@Martin-Oehler Can you please help me understand why your visualization is broken with this pull request?

  1. In which frame is your PointCloud2 Message? Is it 'world'?
  2. Could it be possible that the time stamps of your PC2 message are wrong or always the same, so that it does not use the right TF?
  3. Does your visualization still look good when you choose your robot's frame (probably base_link) as 'Fixed Frame' in Rviz?

Thank you very much.

@Martin-Oehler

> In which frame is your PointCloud2 Message? Is it 'world'?

The point cloud is published in the frame of the lidar scanner.

> Could it be possible that the time stamps of your PC2 message are wrong or always the same, so that it does not use the right TF?

No, the point clouds are correctly timestamped.

> Does your visualization still look good when you choose your robot's frame (probably base_link) as 'Fixed Frame' in Rviz?

I haven't tested this, but since the cloud is published relative to the robot, setting the fixed frame to the robot as well will result in no transform at all, so the behavior should not have changed with your patch.
