Support for decompressing to quantized (fixed-point) vertices #18

Closed
kelleyma49 opened this issue Jan 14, 2017 · 4 comments

Comments

@kelleyma49

kelleyma49 commented Jan 14, 2017

As far as I can tell, Draco doesn't support decompression into quantized formats (see https://github.com/KhronosGroup/glTF/tree/master/extensions/Vendor/WEB3D_quantized_attributes for more information).

This would be great to have in order to minimize data transformation.

@bhouston

So the idea is to leave the quantized vertex positions untransformed in the attribute buffers, and only transform them into floating-point positions in the GLSL code? This would allow one to have uint16 buffers rather than fp32 buffers for things like position attributes, I guess?

@kelleyma49
Author

kelleyma49 commented Jan 14, 2017

Correct. The transformation occurs in the vertex shader. Usually you transform the position attribute by a scale and offset passed in via shader variables.

You can also quantize texture coordinates and normals (see http://cesiumjs.org/2015/05/18/Vertex-Compression/ for an example).

@ondys
Collaborator

ondys commented Jan 14, 2017

Draco supports compression of integer-valued attributes, so it is possible to pre-quantize any attribute values before they are compressed by our library. At the moment this would have to be done through our C++ API, though. One drawback of this approach is that the draco file would carry no information about the quantization settings, so these would have to be stored separately.

Skipping dequantization in our decoder would certainly be a more straightforward approach, and we have actually already been thinking about it for exactly the use cases mentioned above. I've added it to our internal issue tracker and we will let you know when we have an update on this.

@ondys
Collaborator

ondys commented Oct 5, 2017

Support for skipping dequantization in our decoder has been added in version 1.0.
