Skin weights must be normalized. #1352
Conversation
We might make this change a bit more subtle by saying that sorting is required only when there is more than one set of weights (since it makes no difference with just 4 influences).
I expect that an asset that uses 8 influences would still look incorrect in an engine supporting only 4. Keeping the heaviest joints in the first set is probably the best possible fallback in such a situation. I'm not sure that re-normalizing them in-engine would improve the result. We definitely need some samples to check these statements. /cc @UX3D-nopper
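A minimal sketch of that export-side fallback, under the assumption that an exporter gathers all influences of a vertex before writing the accessors (the names here are hypothetical, not from any particular exporter): sorting by descending weight puts the heaviest four influences into JOINTS_0/WEIGHTS_0 and the remainder into later sets.

```typescript
// Hypothetical per-vertex influence record gathered by an exporter.
interface Influence {
  joint: number;
  weight: number;
}

// Sort by descending weight so the four heaviest influences land in
// JOINTS_0/WEIGHTS_0; any remaining influences go to JOINTS_1/WEIGHTS_1, etc.
function orderInfluences(influences: Influence[]): Influence[] {
  return [...influences].sort((a, b) => b.weight - a.weight);
}
```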
If 8 influences are required, then there is a reason for it. Even a "small" weight, e.g. one for a rotation, can have a major impact on the final pose.
Fair point. We still need some language around normalization, including the case of more than one weight set. It would also be useful to add new sample models showing that.
@donmccurdy @McNopper How could we move this PR further?
As mentioned, if more weights are used and the engine cannot handle it, the loading should fail and/or the main pose - without skinning - should be displayed.
I'm happy with this. In that case the descending-order requirement is unnecessary, but we should still require uniqueness of joints with non-zero weights across all sets, yes?
Non-unique joints with non-zero weights make little sense, but I don't think that engines need to care about that (unless they unpack values manually). In practice, I'd expect non-unique joint indices with non-zero weights to be caused only by exporter bugs.
Do you mean that we shouldn't bother to include spec language about it? If so, and we're also saying descending/ascending order doesn't matter, and users should expect assets with >4 weights to simply fail in most engines, then I think all that's left for this PR to say would be:
My point is that it could be implementation-dependent. Say that we've got a vertex where the same joint index appears twice, each time with a non-zero weight.

If an engine computes the skin matrix directly as a weighted sum over the four influences (the first approach in the sketch below), then non-unique indices don't matter because the duplicated joint's matrix is simply counted twice. Alternatively, an engine could (for whatever reason) collect the influences into a per-joint structure first (the second approach in the sketch).

In such a case, the final sum might not include both weights for the duplicated joint. So, if non-unique indices are allowed, we must mention that in the spec.
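A minimal sketch of the two behaviors, with illustrative data (the joint indices, weights, and the numeric stand-ins for joint matrices below are assumptions chosen only to show the bookkeeping):

```typescript
// Illustrative data: joint 1 appears twice, each time with a non-zero weight.
const joints  = [1, 1, 2, 3];
const weights = [0.25, 0.25, 0.3, 0.2]; // sums to 1.0
const jointTransforms: Record<number, number> = { 1: 10, 2: 20, 3: 30 };

// Approach A: accumulate directly over the four influences. The duplicated
// joint is simply counted twice, so its effective weight is 0.5.
let skinA = 0;
for (let i = 0; i < 4; i++) {
  skinA += weights[i] * jointTransforms[joints[i]];
}

// Approach B: build a per-joint table keyed by joint index first. The later
// entry overwrites the earlier one, silently dropping one of the duplicate
// weights, so the effective weights no longer sum to 1.0.
const perJoint: Record<number, number> = {};
for (let i = 0; i < 4; i++) {
  perJoint[joints[i]] = weights[i];
}
let skinB = 0;
for (const [joint, weight] of Object.entries(perJoint)) {
  skinB += weight * jointTransforms[Number(joint)];
}

console.log(skinA); // 0.5 * 10 + 0.3 * 20 + 0.2 * 30 = 17
console.log(skinB); // 0.25 * 10 + 0.3 * 20 + 0.2 * 30 = 14.5 (one weight lost)
```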
That's also worth mentioning.
If we "
If so, the updated language for this PR could be:
What is the "linear" in "linear sum" supposed to indicate? How actionable is the requirement that the sum be one? Consider this situation: you are writing a program which emits glTFs. Your input model has weights given as 4-vectors of three-digit decimal numbers. You are required to output 4-vectors of normalized unsigned bytes (UNSIGNED_BYTE). The obvious strategy is to round each weight to the nearest representable value, but it does not give weights that sum to one because of round-off error. Example:
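A sketch of that round-off (the specific weight values are illustrative assumptions, chosen so the effect is visible):

```typescript
// Three-digit decimal weights that sum exactly to 1.000.
const weights = [0.413, 0.327, 0.158, 0.102];

// Encode each weight independently as round(w * 255), the obvious
// per-component strategy for normalized UNSIGNED_BYTE storage.
const bytes = weights.map((w) => Math.round(w * 255)); // [105, 83, 40, 26]

// Decode per the normalized-integer rule, b / 255, and sum.
const decodedSum = bytes.reduce((sum, b) => sum + b / 255, 0);

console.log(bytes.reduce((a, b) => a + b, 0)); // 254, not 255
console.log(decodedSum);                       // ≈ 0.99608, not 1.0
```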
This strategy is therefore apparently forbidden by the new text, despite being IMO reasonable (and probably very common).
Linear means that values should be summed as-is (as opposed to, e.g., a quadratic sum for normal vectors).
That's interesting, because only nine three-digit decimals in the range [0.000, 1.000] can be exactly represented in IEEE single-precision binary form (0.000, 0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, and 1.000).
As with other numerical thresholds, should "equality to 1.0" be understood as holding within some reasonable tolerance?
Yes, within reasonable tolerance is fine. This rule is meant to avoid situations where a vertex's weights sum to values like 0, 0.5, or 99, in which case results may vary between engines.
For my own curiosity, what is a quadratic sum for normal vectors? :)
I chose 3-digit decimals just because it's easy to see the arithmetic by hand; they're not essential. The same effect occurs if you choose the nearest doubles to those weights and do float arithmetic.
The error in this strategy is <= (1/2) * (1/255) * (number of non-zero weights), so you'd need a tolerance of 2/255 for UBYTE storage of one set of weights (thinking of #1381) to allow this. This is what I mean by "actionable". Sometimes the spec says "you must do this" and it really means "this is the ideal you should keep in mind". For example, it says matrices must be decomposable to TRS, but since it doesn't specify which of the numerous possible decomposition functions must succeed, that requirement isn't strictly checkable either.
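A sketch of how such a tolerance could be applied when checking weights; the helper name and the 2/255 default (one UBYTE-encoded set of four weights, per the bound above) are illustrative, not spec language or a real validator API:

```typescript
// Illustrative check: do a vertex's weights sum to 1 within a tolerance
// derived from the storage format?
function weightsSumToOne(weights: number[], tolerance = 2 / 255): boolean {
  const sum = weights.reduce((total, w) => total + w, 0);
  return Math.abs(sum - 1.0) <= tolerance;
}

// UBYTE-quantized weights from the rounding example above still pass:
console.log(weightsSumToOne([105 / 255, 83 / 255, 40 / 255, 26 / 255])); // true
// A clearly non-normalized vertex fails:
console.log(weightsSumToOne([0.5, 0.25, 0.0, 0.0]));                     // false
```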
Just unit length: x^2 + y^2 + z^2 = 1.
That's a known stalled issue. We still need to specify that kind of detail.
Merging as discussed, without the requirement that weights be sorted. The spec does not currently discuss tolerance for numerical requirements, but this may be added in the future.
Fixes #1213.
Some concerns:
1. Because weights may be spread across additional sets (JOINTS_1/WEIGHTS_1 through JOINTS_N/WEIGHTS_N), clients that only support 4 influences (this is nearly every engine today) will not be able to assume that WEIGHTS_0 on its own is normalized. So, practically, the implementation must still be robust against weights that do not sum to 1.

I'm not too worried about (2), but would say (1) is worth discussion.
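One possible way for a client to be robust in that situation, sketched below (the function is hypothetical, not from any engine, and whether re-normalizing in-engine actually improves results is debated above): rescale the first four weights when later sets are ignored.

```typescript
// Hypothetical fallback for engines that read only JOINTS_0/WEIGHTS_0:
// rescale the first four weights so they sum to 1 even when the asset
// spreads weight across additional sets.
function renormalizeFirstSet(weights: [number, number, number, number]) {
  const sum = weights[0] + weights[1] + weights[2] + weights[3];
  if (sum === 0) return weights; // degenerate/unskinned vertex; leave as-is
  return weights.map((w) => w / sum) as [number, number, number, number];
}

console.log(renormalizeFirstSet([0.4, 0.2, 0.1, 0.1])); // [0.5, 0.25, 0.125, 0.125]
```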