
Adds higher precision spatial sorting #651

Closed

Conversation


pm4rtx commented Jan 25, 2024

Hi,

I've been working with some meshes where 10 bits of precision was not sufficient, so I added a version of spatialSortRemap with higher precision; perhaps it could be helpful upstream.
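For illustration, here is a minimal sketch of what a higher-precision variant could look like, quantizing each axis to 20 bits and interleaving into a 60-bit Morton key instead of 10 bits per axis. The function names and structure below are illustrative only, not the actual code from this PR:

```cpp
#include <cstdint>

// Illustrative sketch, not the code from this PR: quantize each axis to 20 bits
// and interleave into a 60-bit Morton key, versus 10 bits per axis.
static uint64_t expand_bits_20(uint64_t v)
{
    // Spread the low 20 bits of v so that two zero bits separate consecutive bits.
    v &= 0xfffff;
    v = (v | (v << 32)) & 0x001f00000000ffffull;
    v = (v | (v << 16)) & 0x001f0000ff0000ffull;
    v = (v | (v << 8))  & 0x100f00f00f00f00full;
    v = (v | (v << 4))  & 0x10c30c30c30c30c3ull;
    v = (v | (v << 2))  & 0x1249249249249249ull;
    return v;
}

static uint64_t morton60(const float point[3], const float box_min[3], float box_extent)
{
    uint64_t key = 0;

    for (int axis = 0; axis < 3; ++axis)
    {
        // Normalize into [0, 1] against the bounding box, then quantize to 20 bits.
        float t = (point[axis] - box_min[axis]) / box_extent;
        uint64_t q = (uint64_t)(t * 1048575.0f + 0.5f); // 2^20 - 1

        key |= expand_bits_20(q) << axis;
    }

    return key;
}
```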


zeux commented Jan 25, 2024

Any chance you'd be able to share some meshes in question, publicly or privately, as well as the evaluation criteria for the sort order?

There's definitely an expectation that the quantization in spatialSortRemap produces the same values for points that are too close. But that previously has been considered fine as it didn't affect the actual order too greatly - as in, if the points are close enough then the order of points doesn't matter. That said, of course if the extents of the mesh are very large then maybe the quantization threshold becomes unacceptably large as well.

I don't love the idea of adding another API to solve this; it might be better to solve the issue inside the function itself by automatically switching to a higher-precision version under some condition. Additionally, I'm curious whether you need the full 60 bits of precision here, or whether fewer bits suffice - e.g. if instead of 6 radix passes we could do 4 or 5, that would be good.
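For context, a generic sketch of the trade-off mentioned above (this is not meshoptimizer's internal implementation, and the 10-bit digit width is an assumption): with an LSD radix sort, the pass count is ceil(key_bits / digit_bits), so a 30-bit key (10 bits per axis) needs 3 passes, a 60-bit key needs 6, and limiting the sort to 4 or 5 passes caps the key at roughly 13-16 bits per axis.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Generic LSD radix sort sketch (not meshoptimizer's internal code): the number of
// passes is ceil(key_bits / digit_bits), so widening the Morton key from 30 to 60
// bits doubles the pass count unless the digit width is increased.
static void radix_sort(std::vector<uint64_t>& keys, unsigned key_bits)
{
    const unsigned digit_bits = 10;              // 1024-entry histogram per pass
    const unsigned passes = (key_bits + digit_bits - 1) / digit_bits;

    std::vector<uint64_t> scratch(keys.size());

    for (unsigned pass = 0; pass < passes; ++pass)
    {
        const unsigned shift = pass * digit_bits;
        size_t hist[1 << 10] = {};

        for (uint64_t k : keys)                  // count digit occurrences
            hist[(k >> shift) & 1023]++;

        size_t sum = 0;
        for (size_t& h : hist)                   // exclusive prefix sum -> start offsets
        {
            size_t count = h;
            h = sum;
            sum += count;
        }

        for (uint64_t k : keys)                  // stable scatter by current digit
            scratch[hist[(k >> shift) & 1023]++] = k;

        keys.swap(scratch);
    }
}
```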


zeux commented Aug 27, 2024

I'm going to close this because I don't think adding a separate API is the right solution. I have an internal TODO item to look into this further, likely the right solution is some combination of careful increase of internal precision (with performance tests) and switching the implementation to use tiling + sorting instead of just a global sort, and all of this should actually be measured on some scene where it makes a difference.

zeux closed this Aug 27, 2024

pm4rtx commented Oct 19, 2024

Hi @zeux, I apologise for the prolonged radio silence; this PR fell off my radar.

meshopt_spatialSortRemap was used as a drop-in triangle sorting step for a BVH builder, since the library was already used in the same project. One of the meshes that produced identical Morton codes with 32-bit precision is "roadBike". The evaluation criterion for the sorting order wasn't exact; still, it was desirable to keep the number of triangles with an identical Morton code somewhere below 64-128. That was because the bottom-up BVH building algorithm was doing a nearest-neighbour search per triangle with a window that expanded from the original 64-128 neighbours out to the boundary where Morton codes start to differ, so having something like >200-500 triangles with identical Morton codes led to a considerable expansion of the search window, which hurt performance.
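For readers unfamiliar with the API, a minimal sketch of the kind of usage described here, feeding triangle centroids into meshopt_spatialSortRemap to get a spatially coherent triangle order. The helper below is hypothetical and not code from the project in question; it assumes tightly packed xyz vertex positions.

```cpp
#include <cstddef>
#include <vector>
#include "meshoptimizer.h"

// Hypothetical sketch of the usage described above: sort triangles by feeding their
// centroids to meshopt_spatialSortRemap, then use the resulting order to seed a
// BVH builder. Assumes positions are tightly packed xyz floats.
std::vector<unsigned int> spatialTriangleOrder(const float* positions, const unsigned int* indices, size_t index_count)
{
    size_t tri_count = index_count / 3;

    // One point per triangle: the centroid, stored as tightly packed xyz floats.
    std::vector<float> centroids(tri_count * 3);
    for (size_t t = 0; t < tri_count; ++t)
        for (int axis = 0; axis < 3; ++axis)
            centroids[t * 3 + axis] =
                (positions[indices[t * 3 + 0] * 3 + axis] +
                 positions[indices[t * 3 + 1] * 3 + axis] +
                 positions[indices[t * 3 + 2] * 3 + axis]) / 3.0f;

    // remap[t] is the position triangle t should move to in the sorted order.
    std::vector<unsigned int> remap(tri_count);
    meshopt_spatialSortRemap(remap.data(), centroids.data(), tri_count, sizeof(float) * 3);

    // Invert the remap to get the traversal order: order[j] = original triangle index.
    std::vector<unsigned int> order(tri_count);
    for (size_t t = 0; t < tri_count; ++t)
        order[remap[t]] = (unsigned int)t;

    return order;
}
```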

I'm OK with this PR being closed, as I understand my use case of requiring higher-bit quantisation/sorting may well be outside the scope that meshopt_spatialSortRemap was initially designed for.


zeux commented Oct 19, 2024

Thanks, this use case makes sense. Just to make sure, I assume you are not relying on Morton codes per se when you are post-processing the output of a spatial sort, but you have some threshold of closeness that is smaller than 0.1% of the extents of the model, so the 10-bit sort doesn't suffice as you get suboptimal bounds?

To be clear, I think it would be reasonable to improve the behavior of meshopt_spatialSortRemap for dense models, I just don't want to have two different versions of the function.
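As a brief aside for context, the 0.1% figure follows directly from 10-bit quantization of each axis against the model's bounding box:

$$\frac{\text{extent}}{2^{10}} = \frac{\text{extent}}{1024} \approx 0.001 \cdot \text{extent}$$

so two points closer than roughly 0.1% of the extent along each axis can fall into the same grid cell and receive identical codes.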


pm4rtx commented Oct 20, 2024

> Just to make sure, I assume you are not relying on Morton codes per se when you are post-processing the output of a spatial sort, but you have some threshold of closeness that is smaller than 0.1% of the extents of the model, so the 10-bit sort doesn't suffice as you get suboptimal bounds?

Yes, there's no reliance on particular Morton code values; it's just reliance on how large the sub-sequence of identical Morton codes is. For example, let's imagine there's a sub-sequence of Morton codes
M=[... 1, 1, 2, 3, 3, 4, 5, 5, 5, 5, 5, 5, 6 ...]

and for each triangle there's a search radius R=3, so for the triangle i with Morton code M[i] == 4 the candidate triangles [M[i-R]; M[i+R]] are considered (the window is marked with parentheses):
M=[... 1, 1, (2, 3, 3, 4, 5, 5, 5), 5, 5, 5, 6 ...]

Since M[i+3] == 5, and there are other triangles with the Morton code == 5, the search window expands to
M=[... 1, 1, (2, 3, 3, 4, 5, 5, 5, 5, 5, 5), 6 ...]
because all those Morton codes are the same (which means they are part of the same spatial cluster), there's no reason to exclude some of them from the search, so the search window can expand significantly for dense meshes. This could be viewed as

> some threshold of closeness that is smaller than 0.1%

because the cluster of identical Morton codes is too large and there's a desire to make it smaller.
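A compact sketch of the expansion logic just described (hypothetical code, not the actual builder): the window grows while the Morton code just outside either boundary equals the code at that boundary, so a run of identical codes is pulled into the candidate set in its entirety.

```cpp
#include <cstddef>
#include <cstdint>
#include <algorithm>
#include <vector>

// Hypothetical sketch of the search-window expansion described above: starting from
// a +-radius window around triangle i, expand [lo, hi] until the codes just outside
// the window stop matching the codes at its boundaries, so a run of identical Morton
// codes is either fully included or fully excluded.
static void expandWindow(const std::vector<uint64_t>& morton, size_t i, size_t radius,
                         size_t& lo, size_t& hi)
{
    lo = i > radius ? i - radius : 0;
    hi = std::min(i + radius, morton.size() - 1);

    // Grow left while the element before the window shares the boundary code.
    while (lo > 0 && morton[lo - 1] == morton[lo])
        --lo;

    // Grow right while the element after the window shares the boundary code.
    while (hi + 1 < morton.size() && morton[hi + 1] == morton[hi])
        ++hi;
}
```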

> To be clear, I think it would be reasonable to improve the behavior of meshopt_spatialSortRemap for dense models, I just don't want to have two different versions of the function.

Thank you! I did understand that from your previous reply. I just wanted to emphasize it's not a high-priority request.
