
Robust mean and covariance estimate #1407

Conversation

gcasey
Contributor

@gcasey gcasey commented Oct 29, 2015

This addresses robustness in covariance estimation that affects normal calculation and RANSAC plane fitting.

Casey Goodlett added 2 commits June 4, 2014 10:19
The current implementation uses a naive one-pass algorithm
that is subject to significant numeric error.

http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Na.C3.AFve_algorithm

This occurs especially when the variance is small relative to the mean of the data.
For example, a thin plane far from the origin.

The errors have been observed to impact:

1) Normal calculation filter
2) RANSAC plane segmentation

Possible solutions include a two-pass algorithm or a one-pass algorithm using
a recurrence relation.

The two-pass algorithm was chosen for two reasons

A) Simplicity of implementation and verification
B) Avoid the performance penalty of a division within the per-point loop
…nce-estimate

* gh/master: (166 commits)
  Fixes for issue PointCloudLibrary#924
  Fix setSize () in PCL visualizer
  PCL formula is in the official homebrew repo
  CMake pkg_check_modules sets GLEW_INCLUDEDIR
  Fix for Mac OS X
  Changes necessary for lccp pull request PointCloudLibrary#718
  Change to visualization to make things a bit nicer - mainly removed broken opacity
  Updated supervoxel examples with normals and better description of the camera transform
  Removed public interface for modifying neighbors of OctreePointcloudAdjacencyContainer. Neighbors can now only be added/removed from within OctreePointcloudAdjacency. This does *not* mean neighbors can't be modified, just that the list of neighbors itself can't be modified.
  Fix for problem where leaves could get lost, and warning would show up. Now we use leaf indices directly for seeding, instead of doing a look up Changed OctreePointcloud Adjacency to use std::list instead of std::set, added comparator to SupervoxelHelper so that the internal set of voxels owned by a helper is sorted by index instead of memory location Added brief comment explaining comparator
  Fix inconsistency between method declaration and implementation that resulted in build failure
  Remove Boost Chrono dependency
  Don't normalize path if function is not in Boost
  Disable precompilation of PPFRegistration
  Fix for issue PointCloudLibrary#758
  Moved pcl::PPFHashMapSearch::setInputFeatureCloud() and pcl::PPFHashMapSearch::nearestNeighborSearch() implementation from ppf_registration.hpp to ppf_registration.cpp since they belong to a not templated class
  Bump PCL version to 1.8.0
  Fixes PointCloudLibrary#896.
  Add a changelist for 1.7.2
  Set PCL_ONLY_CORE_POINT_TYPES on Windows (Closes PointCloudLibrary#833)
  ...

Conflicts:
	common/include/pcl/common/impl/centroid.hpp
@gcasey
Contributor Author

gcasey commented Oct 29, 2015

It looks like the failing tests are likely caused by the updated covariance calculation. I'm not sure how the original values were derived so I will update the expected value.

Casey Goodlett added 4 commits October 30, 2015 16:34
If the covariance estimator uses the correct algorithm, it should
be robust to a mean shift in the points and give the same answer.

Note the test of plane position is commented out in the translated version
because the full plane equation does shift with a translation, but the
orientation should remain the same.
…ariance-estimate

* origin/master: (355 commits)
  Add RealSense viewer
  Add RealSense grabber
  Add CMake module for RealSense SDK
  Store required sample size inside SampleConsensusModel classes
  Change davidSDK repository link
  Fix find_VTK (Fix Pull PointCloudLibrary#1368)
  Remove PXCGrabber, only leave header with a depreciation note
  Add search hints and fix a typo in find_openni2 macro
  Fix find_* macros for grabbers in PCLConfig.cmake
  New interface for LCCP
  Remove `using SampleConsensusModel<PointT>::isModelValid` in classes then override this function
  Fix super-voxel clustering tutorial alignment issues on 32 bits OS
  Compare pointers with 0 instead of false
  Only set QVTK_FOUND if it's actually found
  Fix root path to retrieve Boost files
  Fix root path to retrieve VTK6 files
  ignore polygon faces properly when loading ply into a pointcloud instead of PolygonMesh
  about mtlreader error
  Fix delay loading flag for OpenMP
  Modeler should be disabled when QT is not present
  ...
The increased precision of normal calculation has an impact on
covariance based sampling.  Update the expected values based on the
two-pass covariance calculation method.
@gcasey
Contributor Author

gcasey commented Nov 2, 2015

This change impacts:

  • Point normal calculation
  • Plane segmentation fitting
  • Most point features that rely on normals
  • Covariance sampling
  • ICP registration with normal-based correspondence rejection

@gcasey
Contributor Author

gcasey commented Nov 2, 2015

@VictorLamoine @taketwo @jspricke could someone please take a look at this change? In my experience this significantly improves robustness when point clouds are not near the world origin for normal calculation and RANSAC plane fitting (and anything those depend on).

@jspricke
Member

jspricke commented Nov 3, 2015

Copying @gedikli's comment from #1411 over here:

There is a reason why we did the single pass Mean and Covar estimation -> speed.
If the double pass is required, use those methods directly or call the single-pass with Eigen::Matrix3d matrices, which uses double as accumulator. That should also increase the accuracy.

@jspricke
Member

jspricke commented Nov 3, 2015

Maybe we should provide both ways and add a comment/flag in the API?

@gcasey
Contributor Author

gcasey commented Nov 3, 2015

I recognize the computation time concerns but based on the current evidence, I still think this is a good change.

I am not sure it will be easy to provide both methods as a choice in the API. This computation is fundamental to several functions (normals, features, plane fitting), and it seems hard to manage that choice in the API and pass it through the appropriate filters. The only obvious way to me is to provide a CMake variable or similar, but that would significantly increase the complexity of using the toolkit in order to manage the robustness/speed tradeoff.

Before considering giving users an option, I would also like to see evidence that the one-pass version is significantly faster. I'm skeptical that the speed benefit is large enough to outweigh the cost in robustness.

There have been a variety of bug reports on this topic over the last couple of years, so there are obviously many users encountering the issue. To me this indicates we should prefer improving the robustness even at the expense of computation time. I am in favor of preferring robust results over fast results that may be incorrect.

I'm skeptical that using double precision in a single pass will be faster, or that implementing a robust one-pass algorithm will be better since it brings a division inside the inner loop, but I have not tried either of these. If someone else would like to implement them I am happy to compare the results.

I would be happy to run a before/after performance benchmark on this change if someone can point me to an existing benchmark tool or test.

@junyang224

Hello gcasey:

I replaced the file "centroid.hpp" with your updated version, but the result is unchanged. The following is my code; is it the right way to compute the normal?

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ> ());

const Eigen::Vector3f translate (15.0, 0.0, 0.0);
const Eigen::Quaternionf no_rotation (1, 0, 0, 0);  // identity rotation (w, x, y, z)
pcl::transformPointCloud (*cloud, *cloud, translate, no_rotation);

pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
ne.setInputCloud (cloud);

pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
ne.setSearchMethod (tree);

pcl::PointCloud<pcl::Normal>::Ptr cloud_normals (new pcl::PointCloud<pcl::Normal>);
ne.setKSearch (10);
ne.compute (*cloud_normals);

@gcasey
Contributor Author

gcasey commented Nov 3, 2015

@CSI4133 I don't see where the actual values for the point cloud are read or set. This fix will affect point clouds that are far from the origin (e.g. at least in the 100-1000 magnitude range).

@junyang224

@gcasey I read the pcd file from my disk and tested the result before and after the translation with both the old and the updated version of "centroid.hpp". The result still changed after the translation, but I'm not sure whether I used it in the right way. You only modified the file "common/include/pcl/common/impl/centroid.hpp", right? But even when I deleted this file, the code still compiled and produced a result (exactly the same as the old result).

The file "common/include/pcl/common/impl/centroid.hpp" seems not to be used, so I wonder whether I used it correctly.

@junyang224

@gcasey I just realized that you were using "computePointNormal" to compute each single point normal instead of computing all of them with "pcl::Feature<PointInT, PointOutT>::compute (PointCloudOut &output)" in "feature.hpp". I followed your approach and the test result is right (unchanged after translation and exactly the same curvature value after rotation). I guess it can be a big problem to use the feature method to compute normals, and I also assume that even without the updated version of "centroid.hpp", the difference in normals after translation should not be that bad.

@loganbyers

For what it's worth, I implemented this new centroid.hpp file with the two-pass covariance computation and found that it was not effective enough for my clouds, as I still had overflow errors (presumably) and bad estimation of normals. I "solved" this by increasing the precision to double for the covariance_matrix_ and xyz_centroid_ member variables of NormalEstimation in normal_3d.h. This relies on casting these two members back to float when passing to solvePlaneParameters to get the actual normals. I haven't done any tests to see what effects changing these two members will have on the rest of the library, but these are private members so they probably aren't being accessed outside of the class.

@SergioRAgostinho
Member

SergioRAgostinho commented Aug 22, 2016

Aiiiighhhhhht! Time to pick up this conversation again. So I do agree with what @gedikli commented

There is a reason why we did the single pass Mean and Covar estimation -> speed.
If the double pass is required, use those methods directly or call the single-pass with Eigen::Matrix3d matrices, which uses double as accumulator. That should also increase the accuracy.

There are already methods which compute the centroid and the covariance separately, effectively doing it in a two-pass procedure, so computeMeanAndCovarianceMatrix should be left as it is. In light of the comment made by @jspricke, I would change its name to computeMeanAndCovarianceMatrixSinglePass and would create a wrapper like

template <typename PointT, typename Scalar> inline unsigned int
pcl::computeMeanAndCovarianceMatrix (const pcl::PointCloud<PointT> &cloud,
                                     Eigen::Matrix<Scalar, 3, 3> &covariance_matrix,
                                     Eigen::Matrix<Scalar, 4, 1> &centroid,
                                     bool double_pass = false)
{
  if (double_pass)
  {
    pcl::compute3DCentroid (cloud, centroid);
    return pcl::computeCovarianceMatrixNormalized (cloud, centroid, covariance_matrix);
  }
  return pcl::computeMeanAndCovarianceMatrixSinglePass (cloud, covariance_matrix, centroid);
}

probably even create an inlined wrapper named computeMeanAndCovarianceMatrixDoublePass just for consistency.
Then for all algorithms which are currently being affected by this precision issue, trigger this double_pass flag, which might require exposing it as a flag to their public interfaces.

@SergioRAgostinho SergioRAgostinho added the needs: author reply Specify why not closed/merged yet label Aug 22, 2016
@SergioRAgostinho SergioRAgostinho added this to the pcl-1.9.0 milestone Aug 22, 2016
@jspricke
Member

+1 for providing both algorithms and having a wrapper.

@stale

stale bot commented Dec 14, 2017

This pull request has been automatically marked as stale because it hasn't had
any activity in the past 60 days. Commenting or adding a new commit to the
pull request will revert this.

Come back whenever you have time. We look forward to your contribution.

@stale

stale bot commented Mar 26, 2018


@mattforestyoung

@loganbyers sorry to exhume an old issue, and this might be a long shot... But are you able to share the code you used to achieve this?

I "solved" this by increasing the precision to double for the covariance_matrix_ and xyz_centroid_ member variables of NormalEstimation in normal_3d.h. This relies on casting these two members back to float when passing to solvePlaneParameters to get the actual normals. I haven't done any tests to see what effects changing these two members will have on the rest of the library, but these are private members so they probably aren't being accessed outside of the class.

@stale stale bot removed the status: stale label Aug 30, 2018
@stale

stale bot commented Oct 29, 2018


@stale stale bot added the status: stale label Oct 29, 2018
@stale

stale bot commented Feb 21, 2020


@mvieth
Member

mvieth commented Nov 20, 2021

Superseded by pull request #4983

@mvieth mvieth closed this Nov 20, 2021
Labels
needs: author reply, status: stale
9 participants