Testing Various Cases, and Potential fix #24
base: master
Conversation
```cpp
// If we find both positive and negative distances, the polyhedron is not convex
if (positive && negative) {
```
Should there be any positive distances for a valid convex hull?
Yeah, so the way I'm checking it: if all the other points that are part of the convex hull don't lie on the same side of the face, it's not a convex hull. I wrote it this way to account for incorrect orientation of the normal, since I'm testing after each iteration and I wasn't sure whether the normals were oriented correctly after each step or only at the last step.
Although, now that you've pointed it out, I think I should have tried checking only for negatives first, in case the normals are oriented correctly; that would be a sounder check. I'll check the code to verify the normals aren't rearranged later, and if that holds I'll make the change. Thanks for pointing it out.
```cpp
return Vector3<T>(px,py,pz);
// errorNormal = Vector3<T>(std::abs(y*rhsz)+std::abs(z*rhsy),std::abs(z*rhsx)+std::abs(x*rhsz),std::abs(x*rhsy)+std::abs(y*rhsx));
// return Vector3<T>(px,py,pz);
return Vector3<T>(px,py,pz).getNormalized();
```
I think this is your proposed fix?
Yes, basically normalizing the output of the getTriangleNormal function, since we were comparing the (unnormalized) distance directly to epsilon in
Line 462 in 4ef66c6

```cpp
if (d>maxD) {
```
and based on my understanding of the code that wasn't intended.
Also, the above line and
Lines 196 to 206 in 4ef66c6
```cpp
const T D = mathutils::getSignedDistanceToPlane(m_vertexData[ pointIndex ],f.m_P);
if (D>0 && D*D > m_epsilonSquared*f.m_P.m_sqrNLength) {
    if (!f.m_pointsOnPositiveSide) {
        f.m_pointsOnPositiveSide = std::move(getIndexVectorFromPool());
    }
    f.m_pointsOnPositiveSide->push_back( pointIndex );
    if (D > f.m_mostDistantPointDist) {
        f.m_mostDistantPointDist = D;
        f.m_mostDistantPoint = pointIndex;
    }
    return true;
```
Both compare distances to find the maximum distance. In such a situation, when the distance is not the Euclidean distance, the comparison could select an "incorrect" point. (I know we had discussed that choosing the absolute maximum point doesn't make a difference in floating-point arithmetic, but it's possible that, due to this non-Euclidean distance, we pick a point which is very close, and that this leads to those errors; this is just speculation on my end.)
PS: I'm sorry if I wasn't able to explain this properly in the original comment. I hope it's clearer now
Good find! Since normalization is somewhat expensive, you might want to check all the places this function is called. If they all need to be normalized, then this is perfect, but if not, it would be ideal to only add normalization where needed. Have you noticed any difference in speed from this change?
Normalizing is not a good idea, generally speaking. When the point cloud vectors are very big (e.g. all points' norms more than 100000000) or very small (e.g. all points' norms less than 0.000001), it will cause a great deal of additional numerical inaccuracy. It may help in some special cases, especially when the vector components are not too small or too big, but more often than not it only makes things more unstable, and of course there is always a performance hit.
I think with floating point, as long as you are not doing addition/subtraction, this should not lead to numerical issues? Taking a sqrt loses precision, but it is independent of the magnitude of the coordinate. In fact, I recall that when I was taking a computational geometry class, the professor said we should always scale and translate all points to [-1, 1] to avoid catastrophic cancellation caused by, e.g., adding points with large magnitude to points near the origin. And looking at the code, it seems that it does require normalization somewhere, e.g. QuickHull.hpp line 462 as mentioned above; otherwise these distances are not comparable. (Sorry for the bad formatting, I am on my phone now.)
True, we're not doing addition/subtraction, so it shouldn't cause too many issues, but the very first versions of my quickhull algorithm did use normalization, and in my experience it always caused trouble (more failures to solve the horizon edge loop).
Do you happen to know of any test cases from your initial work that triggered those problems we could use? Or should some be created to check behavior in such cases?
Unfortunately, it was 10 years ago, so nope...
@akuukka we're thinking of forking quickhull and pulling it into Manifold, to make it easier to parallelize and ensure our CI is testing it across our various platforms. I suppose this is fine given it's marked as Public Domain. Still, I'd like to give you credit - is there anything you'd appreciate we include in our Apache 2.0 license header?
Not particularly, just go ahead. I'm happy to hear that you are finding quickhull useful. |
Based on #23, I have been experimenting with potential fixes, and have created a draft PR, so you can review some changes I have made.
@elalish and @pca006132, you can go through the PR as well. It has some commented-out conditions I tried, which might give you an idea of what I explored, and the current fix I have pushed appears to perform significantly better on the dataset.