Improvement for VoxelDownsample #347
Conversation
Fantastic, I love this. Is it too much to ask for a quick and dirty benchmark? Just to log in this PR what the gain of adding this is.
I love it <3. As nacho said, if you can just post some results for some basic benchmarks (MulRan, NCLT, etc.) that would be fantastic.
@nachovizzo @benemer On Sejong 02 the absolute error dropped by about 500 meters... I am a bit scared...
I am, in general, a bit surprised the results change so much... the downsampling itself should be deterministic, and this part is not multi-threaded, right?
This should explain the change in the errors: each frame is voxelized twice. With the "new approach", during the first voxelization we still select the points in scan order, but we also add them to the output vector in that same order, because we avoid the loop over the map. For this reason, the order in which we pick the first point of each voxel during the second voxelization is also deterministic. This is shown on the same frame of a KITTI sequence, this time using the new approach. Now, the fact that the structure of the voxelized cloud influences the error so much is still surprising, but not in the realm of black magic.
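For context, here is a minimal, hedged sketch of the former two-pass style of voxelization (illustrative names and containers, not the exact KISS-ICP code), which shows why the output order used to follow the hash map rather than the scan:

```cpp
// Sketch of a two-pass voxelization: points are first bucketed into an
// unordered map keyed by their voxel index, then one point per voxel is
// emitted by iterating the map. Because the map's iteration order is
// unspecified, the output order no longer follows the original scan pattern.
#include <Eigen/Core>
#include <cmath>
#include <cstddef>
#include <unordered_map>
#include <vector>

using Voxel = Eigen::Vector3i;

struct VoxelHash {
    std::size_t operator()(const Voxel &v) const {
        // Simple spatial hash; the constants are illustrative.
        return static_cast<std::size_t>(v.x()) * 73856093u ^
               static_cast<std::size_t>(v.y()) * 19349669u ^
               static_cast<std::size_t>(v.z()) * 83492791u;
    }
};

std::vector<Eigen::Vector3d> VoxelDownsampleTwoPass(
    const std::vector<Eigen::Vector3d> &frame, double voxel_size) {
    std::unordered_map<Voxel, Eigen::Vector3d, VoxelHash> grid;
    grid.reserve(frame.size());
    for (const auto &point : frame) {
        const Voxel voxel(static_cast<int>(std::floor(point.x() / voxel_size)),
                          static_cast<int>(std::floor(point.y() / voxel_size)),
                          static_cast<int>(std::floor(point.z() / voxel_size)));
        grid.emplace(voxel, point);  // keep the first point that falls in each voxel
    }
    std::vector<Eigen::Vector3d> frame_downsampled;
    frame_downsampled.reserve(grid.size());
    for (const auto &entry : grid) {
        frame_downsampled.push_back(entry.second);  // order dictated by the hash map, not the scan
    }
    return frame_downsampled;
}
```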
This is amazing. We finally found the reason why the double downsampling broke the scan pattern. Plus, we can see that the more regular pattern does not necessarily harm the performance. The "worse" results on KITTI are fine for me, because the KITTI GT is not good, especially for relative metrics. Do you guys think this more regular-looking scan pattern can cause any issues?
I am running experiments on the HeliPR data now, so let's see how much it changes there!
@l00p3, thanks a lot for pushing on this. A side comment: would you be able to run all the experiments (in case you haven't already) without multithreading? That would tell us the speedup in the system besides your hardware, as they are not linearly correlated. Amazing and super interesting results, btw.
I would actually like to have this in the system. But as I don't see any particular evidence that this improves/degrades 100% of the cases, I suggest we add a config flag regular_downsampling that is False by default, and we maintain both voxelization strategies until we are 100% convinced that one is better than the other.
Seems to me that for "regular" pattern-like LiDARs, this PR works better. But for non-repetitive patterns, such as Livox or BPearl, the former voxelization strategy is better.
If you don't want to do all the engineering, let me know (and give me access to your fork) so I can take it from here and do the config magic.
You should now have access to the fork. It's probably better if you manage to do that, so I won't mess up the code.
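For reference, a hypothetical sketch of the suggested flag dispatch; the flag name regular_downsampling and the function names are illustrative, not the actual KISS-ICP config API:

```cpp
#include <Eigen/Core>
#include <vector>

// Assumed to be provided elsewhere: the former map-based strategy and the
// new single-pass strategy proposed in this PR.
std::vector<Eigen::Vector3d> VoxelDownsampleUnordered(
    const std::vector<Eigen::Vector3d> &frame, double voxel_size);
std::vector<Eigen::Vector3d> VoxelDownsampleRegular(
    const std::vector<Eigen::Vector3d> &frame, double voxel_size);

std::vector<Eigen::Vector3d> Downsample(const std::vector<Eigen::Vector3d> &frame,
                                        double voxel_size,
                                        bool regular_downsampling = false) {
    // Default keeps the current behavior; the new strategy is opt-in.
    return regular_downsampling ? VoxelDownsampleRegular(frame, voxel_size)
                                : VoxelDownsampleUnordered(frame, voxel_size);
}
```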
@l00p3 At this point, I think trying on a single core is unnecessary. We will probably work on #362 to renovate the...
The function VoxelDownsample in Preprocessing.cpp can be done with just one loop. Instead of creating the hash map and then extracting the points from it, we can fill the output vector of points while we check whether each voxel is already occupied. With this, we also don't need a hash map to store the points: a set is enough.
Furthermore, a const voxel_size parameter is probably better.
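A minimal sketch of this single-pass idea, assuming Eigen and a std::unordered_set with a simple illustrative voxel hash; the actual PR may use different containers and hashing:

```cpp
#include <Eigen/Core>
#include <cmath>
#include <cstddef>
#include <unordered_set>
#include <vector>

using Voxel = Eigen::Vector3i;

struct VoxelHash {
    std::size_t operator()(const Voxel &v) const {
        // Simple spatial hash; the constants are illustrative.
        return static_cast<std::size_t>(v.x()) * 73856093u ^
               static_cast<std::size_t>(v.y()) * 19349669u ^
               static_cast<std::size_t>(v.z()) * 83492791u;
    }
};

std::vector<Eigen::Vector3d> VoxelDownsample(const std::vector<Eigen::Vector3d> &frame,
                                             const double voxel_size) {
    std::unordered_set<Voxel, VoxelHash> grid;  // only marks occupancy, stores no points
    grid.reserve(frame.size());
    std::vector<Eigen::Vector3d> frame_downsampled;
    frame_downsampled.reserve(frame.size());
    for (const auto &point : frame) {
        const Voxel voxel(static_cast<int>(std::floor(point.x() / voxel_size)),
                          static_cast<int>(std::floor(point.y() / voxel_size)),
                          static_cast<int>(std::floor(point.z() / voxel_size)));
        // Keep the first point that falls into each voxel; since the input is
        // traversed exactly once, the output preserves the scan ordering.
        if (grid.insert(voxel).second) {
            frame_downsampled.push_back(point);
        }
    }
    frame_downsampled.shrink_to_fit();
    return frame_downsampled;
}
```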