[d3d9] Determine depth bias r-factors #2931
Conversation
gl_Position = vec4(
    -1.0f + coord.x,
    -1.0f + 3.0f * coord.y,
    gl_VertexIndex == 2 ? 1.0 : 0.0,
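The diff excerpt above is cut off. For context, a minimal self-contained sketch of such a shader could look like the following; the coord constants and the trailing 1.0f w component are assumptions for illustration, and only the vec4 arguments shown above come from the PR itself.

#version 450

void main() {
    // Assumed setup: three vertices forming a fullscreen triangle whose
    // top-left corner lands at clip position (-1, -1) with depth 0.
    const vec2 coords[3] = vec2[](
        vec2(0.0f, 0.0f),
        vec2(4.0f, 0.0f),
        vec2(0.0f, 4.0f / 3.0f));
    vec2 coord = coords[gl_VertexIndex];

    gl_Position = vec4(
        -1.0f + coord.x,                  // x: -1 (left edge) to 3
        -1.0f + 3.0f * coord.y,           // y: -1 (top edge) to 3
        gl_VertexIndex == 2 ? 1.0 : 0.0,  // z: gradient from 0 to 1
        1.0f);                            // assumed w component
}

This covers the whole viewport with a single triangle while putting a depth gradient across it, which is what the exchange below is about.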
Making a gradient?
Yes.
For floating-point depth attachment, there is no single minimum resolvable difference. In this case, the minimum resolvable difference for a given polygon is dependent on the maximum exponent, e, in the range of z values spanned by the primitive. If n is the number of bits in the floating-point mantissa, the minimum resolvable difference, r, for the given primitive is defined as r = 2^(e-n).
Emphasis mine. Because of that, we can't really solve this with a single constant factor, so I went with the factor for a z range of 0 to 1 as a best effort.
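As a worked example for that 0-to-1 range: with z spanning 0.0 to 1.0, the maximum exponent is e = 0 (reached at z = 1.0), and a 32-bit float has n = 23 mantissa bits, so r = 2^(0-23) = 2^-23 = 1/(1<<23). For fixed-point formats r is constant across the range; a naive per-bit guess would be 2^-n for an n-bit buffer (2^-24 for D24, 2^-16 for D16), but as the readbacks below show, hardware doesn't necessarily report those exact factors.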
The top-left vertex is supposed to be at depth 0, because that's what we read back for the depth bias.
Unfortunately it doesn't seem to fix some of the depth bias problems, and it returns rather weird values on AMD. On Nvidia, I get 1/(1<<23) for D24 and D32, and 1/(1<<15) for D16; Wine calculates the same values. CME got results which don't match what Wine calculates.
@CME42 Can you try with nodcc?
Force-pushed from e42c0f9 to 0c4c12d
Force-pushed from 0c4c12d to d7870fe
Ref: #2892
I've only tested it on Nvidia and it seems to work.