test_kfpcs_ia fails due to missing file(s) #1227
They got lost somehow during the squashing/committing/pulling chaos. I have them here; what is the easiest way to give them to you? |
Submit a pull request, of course :) Are they heavy? If this is the case perhaps it's better to reuse some of the test files already available in the repo. |
Of course :). No, they are not heavy; both are around 20 kB ^^ |
done! |
Travis did not reach the test, but on my computer it fails:
|
Waaaaait, this looks like the test is also based on an old version, what the hell did I do with the squashing. I have to verify this, but it could be that the test also verifies the result of the registration, and because it is based on random sampling, the registration can occasionally fail (although I am quite sure that the error rate was somewhere around 1%, so you were quite unlucky anyway :-), try to rerun it). Because this has nothing to do with the applicability of the algorithm, I removed the verification of the result from the test in a later version. |
Okay, it is as expected: the two lines `EXPECT_NEAR (angle3d, 0.f, 0.1745f); // 10°` are not commented out anymore. I could comment them out, but the side effect is that the test would then only check whether the aligned point cloud has the same number of points as the input one, which is already guaranteed after initialization. On the other hand, it does not make sense to check the result, because as we have just seen, it can randomly be wrong. Any suggestions? |
On my machine the test consistently fails. I ran it about 20 times and it didn't succeed a single time. The difference in angle was always at least 1.5, and sometimes as large as 3.2. |
Hm, the only explanation from my side is that I used a wrong (e.g. inverted) transformation matrix as ground truth. I will verify that. But in any case: does it make sense to restrict the test success of a randomized method in such a way that it can fail although it ran as it is supposed to? In that case I would correct the ground truth but also remove the `EXPECT_NEAR` parts... |
Good question. You may check how this is handled in other randomized algorithms in PCL. One obvious option would be to allow several runs. If you can guarantee that it fails only 1% of the time, then we are 99.99% confident that it will succeed at least once in two runs. |
Okay found the problem, to summarize:
I will do a new pull request as soon as I have finished my testing (100 runs on my computer). Can you check it again on your machine then? |
Sure, just post a link to the feature branch on your fork. |
I am not very familiar with git (as you have definitely found out over the last half year), so I am not sure if this link is what you need: https://github.com/theilerp/pcl/tree/theilerp-kfpcs-testfiles Otherwise I would just do another pull request. |
Yes, this is it. I will give it a try during the day. |
Takes forever (up to 23 seconds) and fails in the end, consistently.
|
This is definitely veeery strange. Why the hell is it working for me on my machine??? The 23 seconds is normal in case you do not use multi-threading with OpenMP; otherwise it should be much faster. |
Stupid me, it's because I didn't use the same ground truth ^^. I found the error: the ground truth had a typo (-0.999 instead of 0.999). The next commit should hopefully work; I am very sorry for the inconvenience. |
Works on my side now. But don't forget to remove the commit that comments out the angle check. Also, can we skip the remaining iterations after a successful alignment? Tests often don't have enough time to run on Travis, so adding another 20 seconds is not good. |
Done and pull request --> #1229 |
@theilerp Could you please have a look? The test uses some files that are not present in the repository.