Fix multiple issues with UV compression #84159
Merged
Fixes: #84089
The root of #84089 comes from how we compress UVs. When using the compressed format, we have to scale UVs into the 0-1 range. To cut down on the data that needs to be passed into the shader, we always scale around 0, which means that to take an arbitrary range and convert it into 0-1, we divide by 2 * the maximum absolute value and add 0.5.
This compresses 0-1 UVs into 0.5-1 or, alternatively, -2 to 2 UVs into 0-1.
This compression makes sense and is cheap, but it comes with a fatal flaw. The most common use case for UVs is the 0-1 range. When you convert 0-1 into 0.5-1 and then quantize to 16 bits, you end up with a range of roughly 0.49992 to 1, which, once unscaled in the shader, becomes roughly -0.0002 to 1.
This is fine when the UV is only used for reading textures. But the shader in #84089 assumes that UVs are always positive (a fair assumption) and ends up with undefined behaviour when they unexpectedly aren't. I am worried that this will be a common case among users.
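To make the drift concrete, here is a rough round-trip sketch in GDScript. It is purely illustrative, not the engine code; in particular, the truncating 16-bit quantization is an assumption on my part:

```gdscript
extends Node

func _ready() -> void:
	var max_abs := 1.0  # for source UVs in 0-1, the maximum absolute value is 1
	var uv := 0.0       # a UV component of exactly 0

	# Compress: scale around 0 into 0-1 (divide by 2 * max_abs, add 0.5).
	var scaled := uv / (2.0 * max_abs) + 0.5  # 0.0 -> 0.5

	# Store as 16-bit unsigned normalized (truncation assumed here).
	var stored := float(int(scaled * 65535.0)) / 65535.0  # ~0.499992, no longer exactly 0.5

	# Unscale as the shader does: subtract 0.5, multiply back up.
	var unscaled := (stored - 0.5) * 2.0 * max_abs
	print(unscaled)  # slightly negative, not the 0.0 we started with
```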
We have two possible solutions, and I opted for the second: detect when UVs are already in the 0-1 range and avoid scaling them. The shader now only runs the unscaling code if `uv_scale != vec4(0.0)`. We still compress to 16 bits, though, to respect the compression setting.

While implementing this fix, I noticed that the code to retrieve arrays from the GPU wasn't unscaling the UVs as it should and was returning the scaled UVs instead. This was an oversight from when compression was added: originally, we only compressed UVs in the 0-1 range and never applied scaling. I have added the code to unscale.
This second bug hasn't been reported yet, but it can be reproduced trivially with the following code:
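A minimal sketch of such a repro in GDScript (the vertex positions below are arbitrary placeholders; only the UV values matter, and they match the outputs listed next):

```gdscript
extends Node

func _ready() -> void:
	var arrays := []
	arrays.resize(Mesh.ARRAY_MAX)
	arrays[Mesh.ARRAY_VERTEX] = PackedVector3Array([
		Vector3(0, 0, 0), Vector3(1, 0, 0), Vector3(0, 1, 0),
	])
	arrays[Mesh.ARRAY_TEX_UV] = PackedVector2Array([
		Vector2(0, 1.5), Vector2(0.5, 0.5), Vector2(0.5, 0.5),
	])

	# Build a surface with attribute compression enabled.
	var mesh := ArrayMesh.new()
	mesh.add_surface_from_arrays(
		Mesh.PRIMITIVE_TRIANGLES, arrays, [], {},
		Mesh.ARRAY_FLAG_COMPRESS_ATTRIBUTES
	)

	# Read the arrays back from the mesh and print the UVs.
	print(mesh.surface_get_arrays(0)[Mesh.ARRAY_TEX_UV])
```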
With no compression, this prints `[(0, 1.5), (0.5, 0.5), (0.5, 0.5)]`.
In Beta 3, this prints `[(0.499992, 1), (1, 0.666667), (1, 0.666667)]`.
With this PR, this prints `[(-0.000008, 1.5), (0.5, 0.5), (0.5, 0.5)]`.
Finally, this binds `ARRAY_FLAG_COMPRESS_ATTRIBUTES`, which was missing from the API.