Remove dependence on WEBGL_color_buffer_float for EDL #6801
Conversation
Thanks for the pull request @likangning93!
Reviewers, don't forget to make sure that:
I am a bot who helps you make Cesium awesome! Contributions to my configuration are welcome. 🌍 🌎 🌏
Fine in CV. 2D actually seems to work better now than before, though still not perfectly: there's still no EDL, but points now remain visible when EDL is on.
```js
'void main() \n' +
'{ \n' +
'    czm_point_cloud_post_process_main(); \n' +
// Write log base 2 depth to alpha for EDL
'    gl_FragData[1] = vec4(v_positionEC, log2(-v_positionEC.z)); \n' +
'    gl_FragData[1] = czm_packDepth(gl_FragCoord.z); \n' +
```
I wanted to keep using a varying instead of `gl_FragCoord.z`, since apparently on some platforms `gl_FragCoord.z` has the same precision problems as depth textures, but I couldn't get it to work for some reason: the EDL effect happened, but the depth value seemed wrong in some places.
However! I've only observed the precision problem on newer iPads, and the iPad lacks other necessary extensions, so in practice just using `gl_FragCoord.z` shouldn't make a difference. It's also cleaner.
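For anyone curious why packing depth into RGBA8 sidesteps the float-buffer requirement, here is a minimal JavaScript sketch of the classic RGBA8 depth pack/unpack scheme that GLSL helpers like `czm_packDepth`/`czm_unpackDepth` follow (the exact Cesium implementation may differ in details; this is an illustration, not the shipped shader code):

```javascript
// Pack a depth value in [0, 1) into four 8-bit-friendly channels,
// mirroring the usual GLSL fract()/swizzle-subtract trick.
function packDepth(depth) {
  const shift = [1.0, 255.0, 65025.0, 16581375.0];
  const packed = shift.map((s) => {
    const v = depth * s;
    return v - Math.floor(v); // fractional part, like GLSL fract()
  });
  // Remove each channel's contribution from the previous one
  // (GLSL does this with `enc -= enc.yzww * vec4(1/255, ...)`).
  for (let i = 0; i < 3; i++) {
    packed[i] -= packed[i + 1] / 255.0;
  }
  return packed;
}

// Recover the original depth as a weighted sum of the four channels.
function unpackDepth(packed) {
  const weights = [1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0];
  return packed.reduce((sum, p, i) => sum + p * weights[i], 0.0);
}
```

In exact arithmetic the round trip is lossless; the real precision limit comes from each channel being quantized to 8 bits when written to an RGBA8 render target, which still yields roughly 32 bits of usable depth precision across the four channels.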
For posterity: mrdoob/three.js#9092 (comment)
👍
Fixes #6794.
Fixes #6792.
Closes #6543.
When looking at #6792, I noticed that EDL was using a lot more gbuffer data than it really needed: we were storing eye-space positions as RGBA float, when they could all be re-derived in the EDL pass from a much more compact packed depth.
I'm not seeing much of a performance change on either my GT750m/Windows or Intel/Linux systems, but it's nice to reduce requirements and memory use anyway.
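The re-derivation is cheap because, for a perspective projection, window depth plus the fragment's NDC x/y fully determine the eye-space position. A sketch of the math (not Cesium's actual code; `p00`/`p11` stand for the projection matrix diagonal terms, and a standard OpenGL-style depth mapping is assumed):

```javascript
// Reconstruct an eye-space position from window depth and NDC coordinates,
// assuming a standard perspective projection with the usual [0, 1] depth range.
function eyeSpaceFromDepth(windowDepth, ndcX, ndcY, near, far, p00, p11) {
  // Invert the perspective depth mapping: windowDepth = 0 -> z = -near,
  // windowDepth = 1 -> z = -far.
  const zEye = (far * near) / (windowDepth * (far - near) - far);
  // Unproject x/y using the projection matrix diagonal
  // (ndcX = p00 * xEye / -zEye, so xEye = -zEye * ndcX / p00).
  const xEye = (-zEye * ndcX) / p00;
  const yEye = (-zEye * ndcY) / p11;
  return [xEye, yEye, zEye];
}
```

This is why a single packed depth channel is enough: the three-component eye-space position in the old gbuffer was redundant.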
Also discussed with @lilleyse offline the possibility of removing the MRT requirement by using a double-size gbuffer (so 1920 x 2160 for a 1920 x 1080 render), but we decided the science experiment for performance might not be worth it, since `frag_depth` will always be necessary. Mentioning it here in case anyone thinks otherwise, or thinks it would be fun to try.