Antialiasing has artifacts when logarithmic depth buffer is enabled #22017
Do you mind sharing a live link that demonstrates the aliasing?
I have tested webgl_camera_logarithmicdepthbuffer on three systems (Android, macOS and Windows) with the latest Chrome and I don't see the reported aliasing.
Hmm, this could be a GPU driver related issue. In that case, you might want to update the driver or switch to one from a different vendor.
Indeed. Sounds like a Linux driver issue.
@asbjornlystrup Please update your example to the current three.js revision.
Hmmm... I'm seeing the same issue on my M1 iMac. @Mugen87 Disabling the EXT_frag_depth path works around it:

```js
// hack WebGLProgram.js
rendererExtensionFragDepth: false, // isWebGL2 || extensions.has( 'EXT_frag_depth' ),
```
Here's the fiddle with v128 (I didn't manage to find an online source for v129): https://jsfiddle.net/kdLxsf9u/4/
Okay, with the new fiddle I see the problem on Windows and macOS.
Yes.
Would it be a bad idea to stop relying on EXT_frag_depth?
I guess I would first report the issue to the Chromium team. Maybe it's possible to fix this in the browser implementation/ANGLE.
I can reproduce on Firefox and Safari too, though. Also, here's an "easier to see" version of the fiddle: https://jsfiddle.net/gs231L0u/
Maybe related (source):
That makes me more confident that we should just use the emulated logarithmic depth buffer code.
Just catching up on the thread now. It's been several years since I actually used the logarithmic depth buffer code myself. I stopped using it because, while it does help in specific scenes (like the example scene, which as pointed out doesn't have any intersecting objects), the way it interacts with the depth buffer introduces a lot of edge cases. The way it trades off floating-point accuracy means the z-buffer is spread more evenly across the whole range, but at the expense of losing accuracy for objects that are close together.

I'm not specifically familiar with how the gl_FragDepth extension interferes with MSAA, or whether that might be fixed by changing the ordering of the shader chunks, but besides that issue there are trade-offs both with and without the extension. Without the extension, if the interpolated depth is applied over a large enough (screen-space) triangle, the inaccuracies introduced by that can cause z-fighting that's even worse than what you were trying to avoid in the first place; the suggestion from the original article I based this implementation on is to dynamically tessellate your objects to prevent this situation from occurring. With the extension, you get per-pixel depth values, which mostly resolves the issue with large triangles, but at the cost of disabling some z-test optimizations, and it introduces these weird interactions with other parts of the fragment shader which might try to use the z-buffer value later on.

So I'm not sure I really have a strong recommendation either way, since both approaches have trade-offs depending on the type of scene you're working with.
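To make the trade-off above concrete, here is a minimal sketch in plain JavaScript of the logarithmic depth mapping being discussed; the function and constant values are illustrative, not the exact three.js shader chunks.

```js
// Illustrative only: maps a clip-space w (roughly the view-space distance)
// to a [0..1] depth value on a logarithmic scale, the same idea the
// logdepthbuf shader chunks implement in GLSL.
function logDepth( w, far ) {
  return Math.log2( Math.max( 1e-6, w + 1.0 ) ) / Math.log2( far + 1.0 );
}

// With EXT_frag_depth / gl_FragDepth this value is computed per fragment.
// Without the extension it is written to gl_Position.z per vertex and then
// interpolated linearly across the triangle, which is where large
// screen-space triangles pick up the error described above.
console.log( logDepth( 1, 1e6 ) );       // ~0.05 -> close objects keep precision
console.log( logDepth( 1e6 - 1, 1e6 ) ); // ~1.0  -> far objects compress toward 1
```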
Considering this feedback, I vote to keep both approaches in the engine since it is just more flexible. Depending on the use case, the extension can be preferable over the emulation and vice versa. |
For the record, we implemented a later version described in
As implemented, the user has no control over which technique is used. If you want flexibility, you will have to change the API. |
I forgot that #22017 (comment) was a modification in the renderer. In this case, a parameter that controls the behavior might be an option, e.g. something along the lines of the sketch below.
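A hypothetical sketch of what such a switch could look like; `useFragDepth` is not an existing three.js option, it only stands in for whatever parameter the comment above has in mind:

```js
import * as THREE from 'three';

// Hypothetical: `useFragDepth` does not exist in three.js. The idea is that
// users who hit the MSAA artifact could opt into the vertex-emulated
// logarithmic depth path while keeping the existing default behavior.
const renderer = new THREE.WebGLRenderer( {
  antialias: true,
  logarithmicDepthBuffer: true,
  useFragDepth: false // hypothetical flag: skip gl_FragDepth, use gl_Position.z instead
} );
```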
What's your approach now?
And then there is this... Reversed Depth Buffer |
Well, mostly I'm no longer working on scenes with the type of scale that needs a logarithmic depth buffer. When I started using Three.js I was working on a space sim engine with 1:1 planets and seamless transitions from space to ground, so supporting a wide range of Z values was important. Since then my engine has evolved to be used more for human-scale worlds, and content creators can specify their own static near/far values based on their specific scenes.

I also experimented for a while with adaptive near/far values: each frame, I'd look at all the objects in the view frustum with an infinite far plane, then use their distance plus bounding-sphere radius to calculate new near/far values on the fly and neatly frame the scene based on its contents. This approach worked well enough, but since I shifted focus it ended up being complexity that wasn't really necessary for the most common use cases, so I stopped using it as well.
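A rough sketch of that adaptive near/far idea in three.js terms, assuming a scene of meshes with computable bounding spheres (frustum culling is omitted for brevity); this illustrates the approach described above and is not code from any engine:

```js
import * as THREE from 'three';

// Fit camera.near / camera.far to the visible content each frame by expanding
// a [near, far] interval with distance ± bounding-sphere radius per mesh.
const _sphere = new THREE.Sphere();

function fitNearFar( camera, scene, minNear = 0.01 ) {
  let near = Infinity;
  let far = 0;

  scene.traverse( ( object ) => {
    if ( ! object.isMesh ) return;
    if ( object.geometry.boundingSphere === null ) object.geometry.computeBoundingSphere();

    _sphere.copy( object.geometry.boundingSphere ).applyMatrix4( object.matrixWorld );
    const distance = camera.position.distanceTo( _sphere.center );

    near = Math.min( near, Math.max( minNear, distance - _sphere.radius ) );
    far = Math.max( far, distance + _sphere.radius );
  } );

  if ( far > near ) {
    camera.near = near;
    camera.far = far;
    camera.updateProjectionMatrix();
  }
}
```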
Note that a reversed depth buffer does not work in WebGL because of the [-1..1] range of Z in NDC. See https://developer.nvidia.com/content/depth-precision-visualized.
However, it can be used in WebGPU, which has a [0..1] range. For example: https://playground.babylonjs.com/#67JFXI#15 => in WebGPU you don't get z-fighting between the green and gray spheres (at least at the starting position; if you move the camera you will see some artifacts. This playground is only used for validation tests internally; it is certainly not advisable to have two objects separated by only 0.00015 units!).
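For readers unfamiliar with the setup, reversed-Z boils down to flipping the depth comparison and the clear value. This sketch uses raw WebGPU descriptor fields; the projection matrix is assumed to be built for a reversed [0..1] range, and `depthTextureView` is assumed to exist:

```js
// Reversed-Z: nearer surfaces get LARGER depth values, so the depth test is
// 'greater' and the attachment is cleared to 0 instead of 1. This relies on
// WebGPU's [0..1] clip-space Z; WebGL's [-1..1] NDC Z is why it can't be done there.
const reversedDepthStencilState = {
  format: 'depth24plus',
  depthWriteEnabled: true,
  depthCompare: 'greater'
};

const reversedDepthAttachment = {
  view: depthTextureView, // assumed: a GPUTextureView for the depth texture
  depthClearValue: 0.0,
  depthLoadOp: 'clear',
  depthStoreOp: 'store'
};
```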
After removing WebGL 1 support from the renderer, there is only one code path left, which always writes to gl_FragDepth. I would not avoid the usage of gl_FragDepth.
FWIW, and for future readers: setting that variable opts out of early-z optimizations, which are critical for mobile/TBDR GPUs and overdraw-heavy scenes. This was acknowledged in #22017 (comment). I'll open a new issue if I find anything actionable on that end. I notice the older code paths were removed along with WebGL 1 support, so I had to check here whether anything had improved or whether there were any experiments with using the old code path on its own or combined with the use of
When the logarithmic depth buffer is enabled, antialiasing fails in areas where geometry intersects or lines up. See the screenshots below. It feels like this would be a common issue, but I couldn't find any other post on it.
With logarithmic depth buffer enabled. Notice the jagged, aliased lines.
With logarithmic depth buffer disabled.
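For reference, a sketch of the renderer setup that combines the two features being discussed; the scene contents and camera are assumed and are not taken from the original report:

```js
import * as THREE from 'three';

// Assumed minimal setup: both MSAA ('antialias') and the logarithmic depth
// buffer enabled, which is the combination that shows the jagged edges
// where surfaces intersect.
const renderer = new THREE.WebGLRenderer( {
  antialias: true,
  logarithmicDepthBuffer: true
} );
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
```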
I've encountered this on desktop computers, on both Ubuntu and Fedora Linux, in the Chrome browser. I don't remember how it looks on Windows, but it would seem a bit strange if it behaved differently there.
Right now we're on version 0.129.0 of three.js, but it has been like this ever since we started using the logarithmic depth buffer (1-2 years ago), and it happens with all geometry.
Is it like this for everyone else? Any idea how to fix it, or is it something we just have to live with?
Thanks.