nanoFLANN implementation for KdTree #84
Conversation
To use this implementation with pcl::search::KdTree, #81 needs to be merged first. Adding precompilation via PCL_INSTANTIATE would be nice, but it's not a requirement (just include pcl/search/impl/kdtree.hpp).
Hi fran6co, was it in the latest version of FLANN that you found the memory leaks? If so, could you point them out to me so I can fix them? Thanks,
I'm interested in this discussion too. As far as I understood, nanoflann ...
FLANN contains two distinct kd-tree implementations: one is a classic kd-tree that performs exact search and is efficient for low dimensional points, and the other is a randomized kd-forest which performs approximate nearest neighbour search and is optimized for high dimensional features. nanoFLANN contains the first one, so it would not be suitable for matching high dimensional features.
Yes, my point was more whether this should somehow be signalled in the PCL ...
@mariusmuja I don't know where the memory leaks exactly are; I'm using the PCL library without debug symbols. But the Xcode inspector shows some heavy leaking when using KdTreeFLANN that goes away when using KdTreeNanoFLANN. I'll try to get more info about it and post it on the libflann repository.
Francisco, thanks a lot for your input! We are really eager to get these fixed!
Hi @fran6co, just a quick note: the performance of the kd-trees should strongly depend on the datasets you test with. It would be great if you could test it on different sets (kinect, trimble data, etc). You'll find a lot of different datasets here: http://svn.pointclouds.org/data/
@mariusmuja I found the memory leak, but it was already fixed in a newer version (I was using 1.8.2) by this commit.
What is the status here? Julius, can you take over and coordinate, please?
@jkammerl I had some spare time and ran a benchmark. I found this old ticket with some benchmarking code and a sample cloud here.
The results are very similar, but I'm not sure if I'm configuring nanoflann the correct way. One other thing that surprised me is that the original benchmark in 2011 for the PCL wrapper was 0.58 for building and 9.6 for searching. Now the numbers differ by an order of magnitude! What hardware did you use, @mariusmuja?
I think 35 %/25 % increases in build/search times are quite drastic! According to the author of nanoflann, there should have been up to 50 % decreases in build/search times for a 3D data set of this size. Did FLANN just get a whole lot better over the last couple of versions?
I'm sure it's something I did wrong. The nanoflann library recommends creating an adaptor for whatever data structure you use: http://nanoflann.googlecode.com/svn/trunk/examples/pointcloud_adaptor_example.cpp That is an example of a point cloud adaptor; I wanted to keep things simple, so I'm using the matrix transformation instead.
I recompiled this branch using the cmake configuration and got these numbers:
A bit better, but still not good enough. I think that using the adaptor should speed up the build.
Hi @mariusmuja, do you think it makes sense to pursue this and include nanoflann in PCL? Looking at their website, the improvements seem to be mostly compiler-related, and not necessarily due to different/better algorithms.
@aichim I got around to doing a new version with the adaptor; the build time improved a lot and is now around the same as the normal FLANN kd-tree. I don't see the performance gains that the nanoflann project claims; maybe I'm doing something wrong with the optimizations. The major improvement I see is portability (for example, ARM devices wouldn't need to build libflann).
I had a second look at the nanoflann page; it seems that the big performance gains come with big point clouds (10^6 points or bigger). I'm using a cloud of around 600k points, which is right about the size where FLANN and nanoFLANN perform the same. I'll check whether I can test it with bigger clouds.
@fran6co what should we do with this issue?
@rbrusu, I think nanoflann is a good alternative to FLANN for embedded systems or complex dev environments where dependencies should be kept to a minimum.
According to the posted performance results, I don't see any significant performance gain from using nanoflann. Also, given that FLANN has an active and supportive community, I am not sure it makes sense to switch to nanoflann.
Great investigation, though!
I didn't try bigger clouds; I think we are going to see major gains there, as the graphs on the nanoflann homepage show. I don't have the time to test it at the moment, though.
Thanks guys! I'll close the issue for now then.
Now that FLANN has been abandoned while nanoflann is actively maintained, it may be worth giving this a second look. |
I made a port of https://code.google.com/p/nanoflann/ for PCL.
Not sure how to add the nanoflann library as a dependency; I copied it into the impl folder, as it is just a single .hpp file. Not sure if that's the proper way to do it.
I haven't done any formal benchmark on it yet; the author says it's faster than the original FLANN, but I don't see much difference in my code.
My rationale for doing this port was: