Basic deferred rendering #2624
That's really cool. Deferred rendering is something I've wanted to try, but I was postponing it until we get multiple render targets, which finally seems about to happen: https://www.khronos.org/webgl/public-mailing-list/archives/1210/msg00046.html BTW, on my notebook with an Nvidia Quadro 2000M I get ~27 fps on your demo. It would be nice if you could release your code, even if it's messy; if nothing else, it would help to get a feel for the performance profile of the deferred rendering approach.
Another demo result: 50-60 fps with the default viewport, 30 when zoomed out to view the whole scene. Chrome 24b, Linux, GTX460. I also started my own deferred renderer a while ago, but did not come even close to getting anything to render. Glad someone else is trying it, as this is very interesting.
Hmm... My frame rate doubles when zooming out to the whole scene -- from 20 fps with the default viewport to 40 fps zoomed out. OSX Chrome 23.0, AMD Radeon HD 6750M.
Thanks for your comments and for posting some test results. It's really interesting. @alteredq Ok sure, why not. The thing is, it's currently kind of massaged into our web deployment pipeline; I need to extract the relevant parts and put together a single demo file. @tapio @WestLangley Yes, the fps changing with the zoom factor is normal behaviour. The lighting calculations are executed inside the fragment shader for the light proxies, so if you move further away, fewer pixels are affected and less lighting calculation is performed. You also have to be careful how you place your lights inside the scene: if, for example, you have a scene with many overlapping light sources, lighting might be calculated multiple times per pixel, which is of course problematic for performance.
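The per-pixel cost described above is why light proxies are usually sized from the attenuation function: the proxy sphere only needs to cover pixels where the light still contributes visibly. A minimal sketch of that sizing, assuming the common constant/linear/quadratic attenuation model (the function name and parameters are illustrative, not the demo's actual code):

```javascript
// Distance at which intensity / (1 + linear*d + quadratic*d^2) falls
// below `cutoff`; usable as the radius of a point light's proxy sphere.
function pointLightProxyRadius(intensity, linear, quadratic, cutoff) {
  // Solve quadratic*d^2 + linear*d + (1 - intensity/cutoff) = 0 for d.
  var c = 1 - intensity / cutoff;
  if (quadratic === 0) return -c / linear;
  var disc = linear * linear - 4 * quadratic * c;
  return (-linear + Math.sqrt(disc)) / (2 * quadratic);
}
```

Pixels outside this sphere get no lighting work at all, which is why zooming out (fewer covered pixels) raises the frame rate.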
I reduced the number of lights in the demo, as 440 seemed a bit too heavy for some mobile cards.
Ok, here is the source: http://data.redplant.de/webgl/deferred/publicdemo.html There is, however, one problem left. I'm using a shader minifier to convert my GLSL source files into JavaScript string arrays, and I'm not sure how to tell it not to rename local variables, so the shader code is not perfectly readable. But maybe this already helps to give an impression of how it's done. I will update the code once I've figured out how to preserve all variable names. Cheers
Shader code is updated.
It took me a while to understand that the wireframe spheres were the light emitters.
@MPanknin Cool, thanks for sharing. Would you want to contribute this as a three.js example? It could be a good starting point for an eventual full-blown deferred renderer later on. If you just add the necessary files to the examples folder of your three.js repo clone, we could merge it and continue with development. Don't worry about having it perfect, what you already posted is a fine start.
@mrdoob You are right, the wireframe spheres really were a bit confusing; I replaced them with solid colored spheres as emitters. I also prepared a new demo based on the link you provided. This example shows only the lighting part: I removed all the shadow mapping parts and cleaned up the code a bit. I might put together a second example just for the shadow mapping part. @alteredq Of course, I would be glad to contribute this as an example.
- made it work with r54dev
- cleaned up formatting
- fixed light pass not actually moving light proxies (there is less overdraw but lights now can get culled sometimes, need to fix this)
- changed high-def Walt to be UTF8 model
- made geometry passes work with hierarchies

See mrdoob#2624
@MPanknin Thanks, I merged the example from your branch. I tried to play a bit with the example, some observations:
The example is now a little bit more awesomer ;D
;) Meanwhile I also added support for material colors and textures; as Walt doesn't have these, I switched to Ben. Still so many things to do, e.g. I'm failing miserably at trying to combine more G-buffer passes together by packing more data into render targets, but it's fun.
Good to hear that you are having fun with the demo. :) @mrdoob Nice! @alteredq Also nice! I started implementing spotlights yesterday. I'm however not sure if it should become a separate shader or if different light types should be combined. Any ideas?
Maybe it would be better to start with separate shaders per light type? We could then optimize / condense later when we learn more about how things work. BTW for the spotlight proxy, maybe
I guess it could also be good to start moving all this to a WebGLDeferredRenderer?
Eventually yes, that's the plan. For the moment I don't understand the problem space well enough yet; it's easier to tinker with the pieces at the application level. Also, deferred rendering is not really like other renderers: it's not self-standing, it needs WebGLRenderer. Structurally, the closest to this are maybe our stereographic 3D "effects": they take an existing renderer, scene and camera and render them in a different way. I believe it'll become more clear how to "package" this with more use, e.g. shadow maps or
Yep, you need WebGLRenderer.
I forgot about transparency.
Yup yup, handling transparency would likely need a full-blown forward rendering pass. Meanwhile it occurred to me we could also do it in a "yo dawg" way and have
BTW @benaadams mentioned on Twitter that they used billboards for light proxies in their deferred rendering experiment: http://www.illyriad.co.uk/blog/index.php/2011/11/webgl-experiments-illyriads-3d-town/ I wonder how well that would work. Performance-wise I guess overdraw would still be as bad as it is now, but at least it should help with the missing-light artefacts when the camera is inside a light volume (the light proxy is then culled away by backface culling, and you can't just turn culling off because the proxies are additively blended transparent objects).
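The camera-inside-the-volume case above is usually detected with a padded sphere test so the renderer can switch strategies (e.g. to a full-screen quad, or flipped culling). A minimal sketch, with assumed names; the near-plane padding prevents the proxy surface from being clipped right at the boundary:

```javascript
// Returns true when the camera sits inside a point light's proxy sphere,
// padded by the near-plane distance. In that case the proxy's front
// faces are behind the camera, so normal backface-culled rendering
// would drop the light entirely.
function cameraInsideLightVolume(cameraPos, lightPos, radius, near) {
  var dx = cameraPos[0] - lightPos[0];
  var dy = cameraPos[1] - lightPos[1];
  var dz = cameraPos[2] - lightPos[2];
  var limit = radius + near;
  return dx * dx + dy * dy + dz * dz < limit * limit;
}
```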
For the linked demo (Nov 2011) the unminified script is here, which might make more sense (though it is a deadline-rushed mess): http://www.illyriad.co.uk/3dDemo/newGame_conf3.js Lights are flipsided and light rendering was:

_gl.disable(_gl.DEPTH_TEST);

The shaders are a little overcomplex as we also decompress paletted/indexed DDS files in the shader. For that demo we did use spheres for lights, though as a merged mesh with the light centers as vertex attributes. For the newer stuff with billboards, when inside the light we switch to screen quads and try to rebuild the object coords using the frustum corner coords, with mixed success (something more like this: http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/)
We are now using billboards for lights as there are fewer vertex attributes to update when moving lights, with all lights in one BufferGeometry (and you don't need the extra triangles; it's just the circle equation in a box, based on distance to the light center), so I think that's more of an implementation artifact than anything else. However I think turning off the depth test and doing flip-sided rendering with the blend function ONE, ONE_MINUS_SRC_ALPHA would help with the missing-light artefact -- though you may have to be careful with light placement.
@benaadams Lots of useful things, thanks ;) After playing a bit with deferred rendering, I got the impression the biggest performance challenge will be reducing overdraw. Geometry costs seem relatively small compared to pixel costs (or more precisely fragment shading and what I think is called ROP). Rendering similar numbers of objects of similar complexity would be easy for forward rendering; what kills it in deferred rendering is that without the z-buffer's help and with blending on, there are suddenly many more pixels to take care of. I remember some game developer presentation about optimizing particle effects by fitting tighter geometry shapes to sprite images instead of the usual rectangular billboards -- you get a higher triangle count but reduce overdraw. Which is kind of in the direction of what we get with full geometry proxies. Anyways, this is cool, many new toys to play with.
Work in progress on the spotlight: http://data.redplant.de/webgl/deferred/spot/ Proxy geometry is just a fullscreen quad for now, as I'm having some problems constructing the cone geometry properly. But I'm getting closer. @benaadams thx for sharing.
Added basic shadowmapping. There are however still some artifacts left.
Cool cool ;). Meanwhile I already got quite far towards WebGLDeferredRenderer.
Awesome, looking forward to it. Can you already share any details on what you are doing? I found a couple of interesting things that can be done to optimize the existing code. For example, view-space normals can be en/decoded using a spheremap transform, which would reduce the number of required channels in the G-buffer to two. Here is an article describing this technique (and a couple of others): http://aras-p.info/texts/CompactNormalStorage.html#method04spheremap Also, it could be beneficial to store normalized view-space depth instead of clip-space depth. The reconstruction can then be done using the frustum corner technique, which reduces the number of instructions needed for the reconstruction: http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/ Implementing this technique is actually straightforward; I did it once for another demo in RenderMonkey. Also, the way the emitters are currently rendered seems a bit adventurous to me, there must be a nicer way to do it. I'm sure there is more.
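The spheremap transform mentioned above (method #4 in Aras's article) round-trips a unit view-space normal through just two channels. A sketch of the encode/decode pair in plain JavaScript; the GLSL version in the article is structurally identical, and the function names here are assumptions:

```javascript
// Encode a unit view-space normal into two [0,1] channels using the
// spheremap transform. The mapping degenerates only at n = (0, 0, -1),
// a normal pointing exactly away from the camera.
function encodeNormal(n) {
  var f = Math.sqrt(8 * n[2] + 8);
  return [n[0] / f + 0.5, n[1] / f + 0.5];
}

// Reconstruct the full unit normal from the two stored channels.
function decodeNormal(enc) {
  var fx = enc[0] * 4 - 2;
  var fy = enc[1] * 4 - 2;
  var f = fx * fx + fy * fy;
  var g = Math.sqrt(1 - f / 4);
  return [fx * g, fy * g, 1 - f / 2];
}
```

This is what frees two G-buffer channels for other data, at the cost of a couple of extra ALU instructions in the light pass.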
It's basically this (I just need to rename it): https://github.com/alteredq/three.js/blob/dev/examples/js/DeferredHelper.js I haven't solved everything yet, but the basic structure should already be there. The idea is that you will use it like the other renderers: from the API point of view all the magic happens behind the curtain, you just pass in scene and camera. On the inside we should then have a playground for trying things. Right now there are three geometry passes but I was planning to merge them into two. A single geometry pass is also possible in theory, but it would make for too limited a material system. So I'm aiming for 2x RGBA floats in the geometry passes; whatever we use will need to be crammed into these. For example, what we use now:

- G-buffer color RGBA
- G-buffer normal RGBA
- G-buffer depth RGBA

This can be packed into:

- G-buffer color RGBA
- G-buffer depth+normal+??? RGBA
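Cramming depth into the spare channels of an RGBA target usually means spreading one high-precision [0, 1) value across several 8-bit channels. A sketch of the usual pack/unpack pair in plain JavaScript (the shader version does the same thing with fract and a dot product; the names here are assumptions):

```javascript
// Spread a value in [0, 1) across four 8-bit channels, most to least
// significant, with a carry correction so the channels round-trip
// cleanly after quantization.
function packToRGBA8(v) {
  var shifts = [1, 255, 65025, 16581375];
  var enc = shifts.map(function (s) { return (v * s) % 1; });
  for (var i = 0; i < 3; i++) enc[i] -= enc[i + 1] / 255;
  return enc;
}

// Recombine the four channels into the original value.
function unpackFromRGBA8(enc) {
  return enc[0] + enc[1] / 255 + enc[2] / 65025 + enc[3] / 16581375;
}
```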
I changed it to having emitters as regular scene objects with a pure emissive color; emissive color is then handled in the full-screen light pass, similar to directional lights. Also @won3d did his own optimizations; I'm curious, I hope we'll be able to merge these. The biggest performance suck though is overdraw while rendering the light proxies. There are supposed to be some techniques using the stencil buffer to help, but it's kind of involved -- if I understood well it's something like stencil shadows.
Sorry, @alteredq, I set up my github and everything with the intent of getting things merged, but for some reason work expects me to... do work. In any case, I think your packing ideas are pretty much what I had done, at least for the deferred shading. I like the idea of having a separate WebGLDeferredShadingRenderer. I'm not sure if you should pass in a WebGLRenderer, or whether it should be created within. Perhaps the latter; that way you can interpret { antialias: true } using FXAA, since it otherwise doesn't make sense to do MSAA for deferred shading. Now, if you had a WebGLDeferredLightingRenderer, it would be a different story.

Re: G-buffer packing. I'm not sure it is worthwhile to pack normals if you're just going to leave some channels unused. That being said, I can vouch for the stereographic normal projection (it's what I did, and it's mentioned in @MPanknin's Aras link). Also, if you're storing depth in floating point, you want to map it so that the near plane is 1.0 and the far plane is 0.0. You might also want to put the far plane at infinity, since that removes another source of numerical imprecision. If you want to use an 8-bit fixed-point representation for depth, you should use a log mapping: http://tulrich.com/geekstuff/log_depth_buffer.txt For color buffers, maybe try chroma subsampling: http://graphics.cs.williams.edu/jcgt/published/0001/01/02/ That is, if you're doing 2 passes, maybe one can be low-res (have the 6 chroma channels).

Re: light proxy overdraw. I mentioned over e-mail that one way to solve this would be to share the depth buffer between the G-buffer pass and the light proxy rendering. That would also remove the explicit discard in the light proxy shader, which deals with the case when the light is behind the scene. To cull lights that would only light the background, you could do something really simple with stencil (set a bit to true for each non-background pixel).
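The log mapping linked above distributes 8-bit depth precision so that relative error stays roughly constant across the range, instead of wasting resolution near the camera. A sketch of the idea under the usual near/far convention (names are illustrative, not from tulrich's note):

```javascript
// Map view-space depth z in [near, far] to an 8-bit value on a
// logarithmic scale: equal ratios of depth get equal numbers of codes.
function encodeLogDepth(z, near, far) {
  return Math.round(255 * Math.log(z / near) / Math.log(far / near));
}

// Invert the mapping back to a view-space depth.
function decodeLogDepth(byte, near, far) {
  return near * Math.pow(far / near, byte / 255);
}
```

With near = 1 and far = 1000, a depth of 10 and a depth of 100 sit exactly one third and two thirds of the way through the code range, which a linear 8-bit mapping could never do.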
@won3d Thanks for the ideas, a lot of food for thought. Don't worry about merging, you help how you can ;) Sidenote: it's awesome to see a paper with a live WebGL example, hope this will become a thing. |
…and light passes. See mrdoob#2624
…pth test. It makes lights work when the camera is inside the light volume and somehow it's also 10% faster. You just need to make sure the camera far plane is far enough to encompass the whole light proxy, otherwise there are unlit slices of the proxy sphere. (I don't really understand why flipping the depth test works; originally the depth test was supposed to be just disabled.) See mrdoob#2624
Another work in progress: deferred area lights. http://data.redplant.de/webgl/deferred/spot/deferred_arealight.html It is based on this post by Arkano22 over at gamedev. This, however, is the deferred version, as you might have guessed. Currently it's diffuse lighting only, no specular yet, and it also does not support shadows. In the gamedev thread it is suggested to use a very blurred PCF shadowmap, but I haven't tried that. Proxy geometry is rendered as fullscreen quads (I was lazy), so there is definitely room for improvement. @alteredq I didn't have time to look at
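Arkano22's technique approximates an area light by finding, per shaded point, the spot on the light's rectangle nearest to the surface and lighting from there as if it were a point light. The core of it is just a clamped plane projection; a sketch with assumed names and a flat [x, y, z] vector convention:

```javascript
// Closest point on a rectangle (center c, orthonormal axes right/up,
// half-extents hw/hh) to an arbitrary point p. Lighting each pixel
// from this "representative point" approximates the rectangle as an
// emitter at deferred-friendly cost.
function closestPointOnRect(p, c, right, up, hw, hh) {
  var d = [p[0] - c[0], p[1] - c[1], p[2] - c[2]];
  var clamp = function (x, lo, hi) { return Math.min(Math.max(x, lo), hi); };
  // Project the offset onto the rectangle's axes, then clamp to its extents.
  var r = clamp(d[0] * right[0] + d[1] * right[1] + d[2] * right[2], -hw, hw);
  var u = clamp(d[0] * up[0] + d[1] * up[1] + d[2] * up[2], -hh, hh);
  return [c[0] + right[0] * r + up[0] * u,
          c[1] + right[1] * r + up[1] * u,
          c[2] + right[2] * r + up[2] * u];
}
```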
@MPanknin Beautiful
@MPanknin Yes. Beautiful. Thank you for sharing your work.
@MPanknin Cool cool, don't worry too much about merging; if not you, I should eventually get to this (the same as for spotlights and shadows). For area lights, I was already thinking about them: I guess we should have
@MPanknin Very sexy! ^^
Todo:
- physically based specular
- wrap-around lighting
- light cone proxy instead of fullscreen quad
- light distance attenuation
- move spot angle cos out of shader
- move light direction out of shader
- shadow maps

See mrdoob#2624
To be continued ... todo:
- optimize vectors that don't need to be computed in shaders
- use material albedo
- add specular term
- wrapAround lighting (if possible)
- make attenuation parameters uniforms or defines instead of hardcoding them
- this is not using surface normal anywhere, this can't be right?
- maybe some box proxy instead of full-screen quad

See mrdoob#2624
So I have been working with the deferred renderer here and I cannot seem to remove a deferred point light from rendering. I remove the light and any associated meshes from the scene, yet the light still renders! Any ideas on how to pull it out completely?
Any updates on this?
Does anyone also know this: https://github.com/YuqinShao/Tile_Based_WebGL_DeferredShader It seems to be implemented using three.js. I'll be experimenting with deferred rendering in the near future, and will probably use this project as a base.
@deadForce Interesting project! Seems like it only uses three.js for the
2015 calling here. Is anyone actively working on a new WebGLDeferredRenderer at this point? I want to know before I dive into this rabbit hole; or, worst-case scenario, abandon THREE altogether (please no), as unfortunately my project's visual design absolutely requires it. Are there any known significant roadblocks, due to the way THREE was re-engineered from R71 to R72, that block the deferred rendering path?
I think the issue is that WebGL 1.0 doesn't really support multiple render targets except through the poorly supported WEBGL_draw_buffers extension (<50% of browsers, almost no mobile devices), which makes it hard to implement deferred rendering efficiently. I think everything changes with WebGL 2.0, if and when that arrives.
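Whether the WEBGL_draw_buffers path is available can be probed at startup from any WebGL 1 context; a minimal sketch (the wrapper function is an assumption, but the extension and constant names are the real ones):

```javascript
// Report how many color attachments a WebGL 1 context can write in a
// single draw call. A result of 1 means every G-buffer layer needs its
// own geometry pass, as in the two-pass setup discussed in this thread.
function maxDrawBuffers(gl) {
  var ext = gl.getExtension('WEBGL_draw_buffers');
  if (!ext) return 1;
  return gl.getParameter(ext.MAX_DRAW_BUFFERS_WEBGL);
}
```

In a page you would call this with a context from `canvas.getContext('webgl')` and pick the single-pass or multi-pass G-buffer path accordingly.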
That's good to know, thanks! As WebGLDeferredRenderer was always part of Extras, is the current plan to fold it into the main WebGLRenderer when better support is available? We can use WEBGL_draw_buffers as-is for our projects since our target is not mobile, and even the previous version of the renderer offered decent performance already. Anyway, if anyone has plans to do a deferred rendering pipeline down the road with THREE.js in WebGL 2.0 then I can just keep working as if nothing's changed on R71.
If you are willing to do some jiggery-pokery to pack multiple values into a … That chapter is the second best thing Nick wrote for that book -- the best …
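The "jiggery-pokery" being hinted at is typically bit-packing: two low-precision values sharing one 8-bit channel to stretch a limited render target. A sketch at 4 bits per value (the function names and the 4/4 split are assumptions for illustration):

```javascript
// Pack two values in [0, 1] into one 8-bit channel, 4 bits each.
// Each value survives with only 16 distinct levels, so this suits
// coarse quantities (masks, material IDs), not depth or normals.
function pack2(a, b) {
  return (Math.floor(a * 15) * 16 + Math.floor(b * 15)) / 255;
}

// Recover the two quantized values from the shared channel.
function unpack2(v) {
  var byte = Math.round(v * 255);
  return [Math.floor(byte / 16) / 15, (byte % 16) / 15];
}
```

The same idea works in GLSL with floor/mod arithmetic; the catch is that any filtering or blending on the target corrupts the packed bits, so the channel must be read back point-sampled.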
Hi everybody,
just finished a first version of a deferred rendering example using your lovely library.
http://blog.mpanknin.de/?p=848
It currently supports point light sources as well as deferred shadow maps.
It does not yet support everything else, such as spot lights, point light shadows, etc. There's still a lot to do, however it can already handle a decent amount of point lights: I was able to render > 1000 point lights on a GTX560 this afternoon, at a framerate of around 50.
Unfortunately the G-buffer has to be filled in two passes, one for depth and another for normals, as there is no support for MRTs.
If you are interested then I'm sure I can release the source sometime. However, before doing that I need to clean up a couple of things, as the code is a bit messy here and there.
What do you think?