Kinematic ICP Threshold #15

Open · wants to merge 16 commits into base: main

Conversation

tizianoGuadagnino (Collaborator)

Motivation

The adaptive threshold of KISS-ICP does not translate well to Kinematic ICP, since in the latter case we typically have an excellent initial guess from the robot odometry. This implies that, in principle, we can be more conservative with the threshold (use a lower value), making the system more robust in the data association phase.

Concept of what I have done

What I tried to do in this PR is to design a new correspondence threshold scheme, CorrespondenceThreshold, that takes two things into account:

  1. Our (no longer so) beloved VoxelHashMap has a specific point resolution, given by voxel_size and max_points_per_voxel. We should account for this resolution in our threshold, as it should be impossible for the system to go below this value. Why is that? If we look at the VoxelHashMap, we only add a point to the map if its distance to all the other points is at least equal to the map resolution $d$. That means that our correspondence search does not simply match the closest point, but the closest measured point within a ball of radius $d$. I can add more details if needed. This fact is captured by the class's map_discretization_error_ parameter.
  2. We still need to consider how much our robot odometry deviates from the LiDAR odometry estimate; this is done in a fashion very similar to KISS. However, we remove min_motion_threshold and initial_threshold, since we can now expect our motion prediction to be much more accurate from the beginning. Plus, we have a minimum value given by default by map_discretization_error_.

If $\sigma_{\text{map}}$ is map_discretization_error_ and $\sigma_{\text{odom}}$ is the average deviation between robot and LiDAR odometry in point space (as in KISS), our new threshold $\tau$ will be:

$$\tau = 3 (\sigma_{map} + \sigma_{odom})$$
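
To make the scheme concrete, below is a minimal sketch of this logic. Only the class name CorrespondenceThreshold, the member map_discretization_error_, and the formula above come from this PR; the constructor signature, the way the map resolution and the running average are computed, and the method names are illustrative assumptions, not the actual implementation.

```cpp
#include <cmath>

// Minimal sketch of the adaptive correspondence threshold described above.
struct CorrespondenceThreshold {
    // Assumption: the map resolution is derived from voxel_size and
    // max_points_per_voxel; the exact expression in the PR may differ.
    CorrespondenceThreshold(double voxel_size, int max_points_per_voxel)
        : map_discretization_error_(voxel_size /
                                    std::cbrt(static_cast<double>(max_points_per_voxel))) {}

    // Accumulate the deviation (in point space) between the robot odometry
    // prediction and the LiDAR odometry estimate, as in KISS-ICP.
    void UpdateOdometryError(double deviation) {
        odom_sse_ += deviation * deviation;
        num_samples_ += 1.0;
    }

    // tau = 3 * (sigma_map + sigma_odom), with sigma_odom the average deviation
    // observed so far (zero until the first update).
    double ComputeThreshold() const {
        const double sigma_odom =
            num_samples_ > 0.0 ? std::sqrt(odom_sse_ / num_samples_) : 0.0;
        return 3.0 * (map_discretization_error_ + sigma_odom);
    }

    double map_discretization_error_;
    double odom_sse_ = 0.0;
    double num_samples_ = 0.0;
};
```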

Add configuration parameters for this module

Finally, I decided to give the user the option to fix the threshold if they are willing to tune the parameter for their scenario. This is done with two new parameters: use_adaptive_threshold, which enables or disables the adaptation of the threshold, and fixed_threshold, which sets the threshold value when use_adaptive_threshold=false.
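
As a hedged illustration of how these parameters could be consumed together with the sketch above (ThresholdConfig, GetThreshold, and the default values are hypothetical; only use_adaptive_threshold and fixed_threshold are introduced by this PR):

```cpp
// Hypothetical wrapper, not the actual implementation in this PR.
struct ThresholdConfig {
    bool use_adaptive_threshold = true;  // parameter added in this PR
    double fixed_threshold = 1.0;        // [m] parameter added in this PR, used only when adaptation is off
};

// Pick between the adaptive estimate and the user-fixed value.
double GetThreshold(const ThresholdConfig &config,
                    const CorrespondenceThreshold &adaptive_threshold) {
    return config.use_adaptive_threshold ? adaptive_threshold.ComputeThreshold()
                                         : config.fixed_threshold;
}
```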

tizianoGuadagnino and others added 10 commits October 16, 2024 15:10
Make *USE_SYSTEM_SOPHUS* consistent with the rest of the options.
More accurate statement in the readme: the system does not need to follow a unicycle motion model, only the pose correction does, which applies to many more robotic platforms.
* Add black and isort

* Format

* use black for isort
* We didn't have proper flagging in the online node

* Clean launch common args

* Fix formatting python
@benemer (Member) left a comment

Thanks for pushing this. After discussing it for a while, we should definitely address it. Using the same threshold always felt wrong.

I just found a few minor things.

Before merging, we should thoroughly test the impact on performance. Do you already know how it looks for our test cases and also quantitatively for the sequences we recorded?

Besides that, we can also discuss how to actually compose these two kinds of error terms. Right now we sum them, which of course works, but how about, for example, taking the max of both:

$$\tau = 3 \max(\sigma_{map}, \sigma_{odom})$$

Just as an idea, this guarantees that we will compensate at least for the map discretization error and increase it if our odometry is far off.
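
For reference, reusing the illustrative names from the sketch in the description, the max variant would only change the combination step:

```cpp
#include <algorithm>

// Hypothetical variant of ComputeThreshold(): combine the two error terms
// with the max instead of the sum.
double ComputeThresholdMax(double map_discretization_error, double sigma_odom) {
    return 3.0 * std::max(map_discretization_error, sigma_odom);
}
```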

Also, I'm not sure if the multiplication by 3 is still needed. Have you tried removing it?

Lastly, one comment on the map discretization error. If you, for example, assume three map points on a plane, spaced a map resolution $d$ apart, the maximum discretization error of a point in the center will be

$$\sigma_{map}=\frac{d}{\sqrt{3}}$$

It is based on the idea that the three map points form an equilateral triangle. But maybe that's also too much, and we can just go with the map resolution to include some additional safety margin.
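
(For reference, the $d/\sqrt{3}$ above is the circumradius of an equilateral triangle with side length $d$, which follows from the law of sines:)

$$R = \frac{d}{2\sin 60^\circ} = \frac{d}{\sqrt{3}} \approx 0.58\, d$$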

@tizianoGuadagnino (Collaborator, Author)


I tried different things before coming up with this solution. I didn't try the max directly, but what I tried was very close: if the error is not above map_discretization_error_, I do not update the threshold at all, meaning that the threshold can never decrease below map_discretization_error_. This resulted in a threshold that was too small, and the estimates broke at the first turn. Your version would be 3x that, so maybe that works; we can test it out, but I can tell you that removing the 3x or reducing the values too much degrades performance, according to what I have seen so far. As you said, we should use our data to find out which approach is better; I guess many things will work, but we can at least compare trajectories.
