gltfpack: Refactor stream encoding using meshopt_encodeFilter #791
Conversation
This streamlines the code and preserves the output exactly, since meshopt_encodeFilterOct performs the same transformations.
This simplifies the code and keeps the output binary identical since the encoding logic is the same.
This simplifies the code and keeps the output identical since the encoding logic is the same. Note that since the stride here is 12 bytes, we can't encode directly from the source attribute data (we need to skip W), but we can encode using aliased destination & source.
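The aliasing constraint above can be illustrated with a stdlib-only sketch (illustrative only, not the PR's code): compacting 16-byte (x, y, z, w) records into 12-byte (x, y, z) records in the same buffer is safe with forward iteration, because the write position never overtakes the read position.

```cpp
#include <cstddef>
#include <cstring>

// Illustrative sketch, not gltfpack code: compact vec4 float records into
// vec3 records within the same buffer, dropping W. Forward iteration is safe
// because the destination offset (3*i floats) never exceeds the source
// offset (4*i floats), so unread data is never overwritten.
void repackVec4ToVec3(float* data, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        std::memmove(&data[i * 3], &data[i * 4], 3 * sizeof(float));
}
```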
This is a useful mode of operation since exp encoding shares source and destination stride; it's valuable to verify that it keeps working, as implementation changes could accidentally break it by reading source data after the destination has been written.
meshopt doesn't provide an encode function for snorm encoding, as it's a trivial extension of scalar encoding; but we need it in a few places here, so having our own function simplifies the data flow.
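As a hedged sketch of what such a helper can look like (the name and rounding details here are illustrative, not necessarily gltfpack's): scalar quantization to a signed normalized integer is just a clamp, a scale, and a round.

```cpp
// Illustrative helper, not the exact gltfpack implementation: quantize
// v in [-1, 1] to a signed integer with the given number of bits,
// rounding half away from zero (mirroring meshopt_quantizeSnorm semantics).
int quantizeSnorm(float v, int bits)
{
    const float scale = float((1 << (bits - 1)) - 1);

    if (v > 1.f) v = 1.f;
    if (v < -1.f) v = -1.f;

    float r = (v >= 0.f) ? 0.5f : -0.5f;
    return int(v * scale + r);
}
```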
When using SharedComponent or SharedVector, exponents compress fairly well on their own because their range is normalized in a more or less optimal way. In Separate mode, the reduced bit count truncates the mantissa, which fixes some issues with input entropy; however, if the values are clustered around zero, the exponent will also see a lot of variance that is generally unwarranted for precision when the input range is known to extend further away from zero.

While there are ways to solve this in a more general fashion, e.g. by exposing min_exp as well as mode, that is probably too involved; adding a clamped mode is simple and mostly sufficient. This is helpful when encoding values like texture coordinates, where SharedVector or SharedComponent may not have enough precision when a component is tiled with a high repeat value, but we know we only need limited precision around 0, so clamping the exponent works well. This API is still experimental, so it's easy to rework it later if a more general solution turns out to be warranted.
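A simplified sketch of the idea (this is not the library's exact exponent selection, and it ignores mantissa rounding edge cases): in Separate mode the exponent tracks each value's magnitude, so values clustered near zero yield noisy exponents; a clamped variant floors the value's exponent at 0, capping the precision spent near zero.

```cpp
#include <cmath>

// Illustrative sketch, not meshoptimizer's exact code: pick the stored
// exponent for an "exp"-filtered value. e is the power of two just covering
// |v|; the mantissa then provides (bits - 1) bits of precision below it.
// Clamping e at 0 stops near-zero values from demanding extra precision,
// which also keeps the exponent stream compressible.
int storedExponent(float v, int bits, bool clamped)
{
    int e = (v == 0.f) ? 0 : int(std::ceil(std::log2(std::fabs(v))));
    if (clamped && e < 0)
        e = 0; // limited precision around zero is enough
    return e - (bits - 1);
}
```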
…lterExp Unlike previous changes, this is not *exactly* equivalent in terms of binary output. It should be the same when using `-c`, but with `-cc` there may be cases where the shared-component encoding selects an exponent smaller than 0 for normals or texture coordinates. This should not negatively affect the compression ratio, though; it just produces slightly different files.
When we switched from frexp to optlog2, the behavior for negative zero changed: instead of being encoded as if it had exponent 0, it was mistakenly encoded as if it had exponent min_exp. Both encodings are equivalent; in fact, there is some leeway in encoding zeroes that we are not exploiting, as they could repeat the last non-zero exponent. But this is an unexpected change and should be corrected for consistency.
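The difference can be seen directly in standard library behavior (a stdlib-only demonstration, unrelated to the actual encoder code): frexp treats -0.0 exactly like +0.0, while log2-based exponent extraction must special-case zero.

```cpp
#include <cmath>

// Demonstration: under frexp, both +0.0 and -0.0 yield a zero mantissa and
// a zero exponent, so an frexp-based encoder gives zeroes exponent 0 for
// free; log2(+-0.0) is -infinity, so a log2-based path needs an explicit
// zero check to reproduce the old behavior.
int frexpExponent(double v)
{
    int e = 0;
    std::frexp(v, &e); // the C standard sets both parts to zero for v == +-0.0
    return e;
}
```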
This just rebuilds the JS code with an extra constant exposed, and adds a test to make sure it works.
We previously used Clamped in `-c` mode for normal deltas, but normal deltas may be fairly small. Preserving these normally requires more bits, but if the user already opted into floating-point normals, it is more reasonable to use SharedComponent to dynamically adjust to the delta range. This also fixes the odd corner case where deltas may be erased at `-c` but kept at `-cc`.
Document clamped exponent mode in main documentation and lightly document the `mode` parameter for JS.
This change reworks stream encoding in gltfpack to use meshopt_encodeFilter* where appropriate instead of hand-crafted implementations; this results in less redundant code and relies on the unit-tested encoders.

When using `-vtf` and `-vnf`, gltfpack clamped the exponent range to reduce entropy; this is valuable especially when using separate exponents. While it would be possible to expand the interface with a fully customizable min_exp, for now it is easier to expand the enum. The API is technically experimental, so if we really don't like this we can revise it in the next version (or stabilize it if no issues emerge). Hence this PR adds meshopt_EncodeExpClamped (and `Clamped` to the JS mode enum) and expands documentation and tests accordingly.
This change was tested on a large collection of glTF files (~750 models from various sources, with some duplicates), using a 4x3 matrix: "default, -vn 12, -vpf -vtf -vnf, -vpf -vtf -vnf -vn 12" x "default, -c, -cc".

When not using `f` formats, the output is binary identical. When using `f` formats with `-c` or `-cc`, the outputs do not change noticeably in size, but some models don't produce binary-identical outputs to before, because the exponent clamping behavior is slightly different now. Performance on meshes with a lot of geometry is broadly unchanged, if slightly better.