
Create a more powerful and customizable workflow for custom post-processing effects #2196

Open
Arnklit opened this issue Jan 28, 2021 · 20 comments


@Arnklit commented Jan 28, 2021

Describe the project you are working on

Various 3D projects

Describe the problem or limitation you are having in your project

The current workflow for adding custom post-processing is cumbersome: it cannot be previewed in the editor camera, it is limited to being applied after all built-in post-processing, and it makes it complicated to expose game settings that let the user enable/disable custom post-processing effects. I assume it also allocates new buffers for each effect added, making it more expensive than if it could reuse the buffers in the built-in post-processing stack.

Describe the feature / enhancement and how it helps to overcome the problem or limitation

I'd like the ability to add custom post-processing in the WorldEnvironment node and to choose where in the stack it gets applied.

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

I could imagine an interface like this:
[Mock-up image: a list of post-processing shaders in the WorldEnvironment inspector.]

Where you could add a list of post-processing shaders, manage their shader parameters, and decide where in the stack they are applied.

Possibly another shader_type would be added that had specific hooks for using the same buffers as the rest of the stack, if that is not possible with canvas_item.

If this enhancement will not be used often, can it be worked around with a few lines of script?

This could not easily be achieved with a few lines of code, and it would be used often.

Is there a reason why this should be core and not an add-on in the asset library?

This would have to be done in core.

@Calinou (Member) commented Jan 28, 2021

reduz suggested adding a FullScreenQuad node, which would consist of a single triangle covering the whole screen. This would make post-processing effects easier to add while keeping them easy to distribute on the asset library.

(A single triangle covering the whole screen will be minutely faster than two triangles, at least on desktop platforms.)

@Zireael07

A FullScreenQuad node that automatically covers the screen would be excellent; with my custom post-process motion blur, I sometimes see the effect 'lagging' behind...

@clayjohn (Member)

> reduz suggested adding a FullScreenQuad node, which would consist of a single triangle covering the whole screen. This would make post-processing effects easier to add while keeping them easy to distribute on the asset library.

Note: the FullScreenQuad idea would be sorted into the alpha pass of the regular render pass and still use a spatial material internally. Users would be responsible for setting the material to transparent and unshaded. Importantly, this runs before tonemapping and the built-in post-processing, so it would be rather limited. It is essentially a built-in way of doing the post-processing method described in the docs.

The FullScreenQuad method would, however, be very simple to add and wouldn't require any changes to the renderer (i.e. it can just be added as a node).

We also discussed something like this proposal earlier, i.e. a custom shader_type post_process that is inserted into the built-in post-processing shader. The downside is that it is rather complex and not very flexible when you want to, for example, chain multiple effects together.

Overall, I am unhappy with either approach. I think in the long run, we need to redesign how we handle post-processing to better support making custom post-processing effects.

@Ansraer commented Mar 28, 2021

mux and clayjohn had a short discussion about this on the #rendering RocketChat channel today. They proposed a completely different approach: a post-processing graph:
[Mock-up image: a node graph chaining post-processing effects.] (quick mockup I created)

Both built-in effects and user-created ones would be available as nodes and could be chained together in any order.

@Zireael07

Tangent: I was sure I had mentioned it, but currently post-process effects affect everything (e.g. gizmos); we need a way to exclude some nodes/visual layers from them.

@Calinou (Member) commented Apr 13, 2022

> Tangent: I was sure I had mentioned it, but currently post-process effects affect everything (e.g. gizmos); we need a way to exclude some nodes/visual layers from them.

This is being tracked in #2138.

@wareya commented May 15, 2022

I hope that whatever system is decided on here isn't unnecessarily complex and doesn't have unnecessary drawbacks.

Background behind my opinion follows. Skip to the bottom to see what I actually have to say.

One of my projects involves being able to load maps from Quake-like games, but also involves HDR lighting.

Going in-game with the default tonemapping results in crushed, oversaturated colors, because the default tonemapping is just a clipping function (blue light was added to help demonstrate the limitations of pure clipping):

https://user-images.githubusercontent.com/585488/168461301-9d29c9d6-f931-47bc-83d2-c6b734cbace8.jpg

Using ACES Fitted avoids this but changes the overall lighting balance of the scene, because ACES Fitted does a lot more than just desaturate very bright colors when they clip. This is undesirable: the project uses maps from pre-existing games, and their lighting is no longer replicated even remotely faithfully:

https://user-images.githubusercontent.com/585488/168461315-277497dc-2d2b-447a-b8bf-0c8de4e42679.jpg

I wrote a custom tonemapping shader that "just" desaturates high-energy colors and it looks fine:

https://user-images.githubusercontent.com/585488/168461347-2427e279-adcf-49e0-ab2f-45a250f06aa7.jpg

(Side note: custom tonemapping curves would not help me here, only custom tonemapping cubes or custom tonemapping shaders.)

Custom tonemapping shaders are not (yet?) supported, and the necessary workaround, using a Viewport plus a ViewportContainer or ViewportTexture, has a lot of drawbacks (involving Controls, window scaling modes, etc.). It also means the lighting cannot be previewed accurately in-editor, because the custom tonemapping shader isn't part of the 'environment'.

Salient point: if the system decided on here causes problems for Control nodes or window viewport scaling, or has any of the other downsides of the viewport-based workaround, it may end up going unused. Custom post-processing should be basically transparent to the rest of the development experience, including its interaction with other, unrelated features (CanvasLayers, window viewport scaling, etc.). Approaches that are not basically transparent once set up should be scrutinized heavily to see whether their tradeoffs are actually necessary. As such, I'm skeptical of the FullScreenQuad approach.

@Calinou (Member) commented May 15, 2022

> Using ACES Fitted avoids this but changes the overall lighting balance of the scene, because ACES Fitted does a lot more than just desaturate very bright colors when they clip. This is undesirable: the project uses maps from pre-existing games, and their lighting is no longer replicated even remotely faithfully.

As an aside, remember that id Tech 3 lightmaps are designed to be displayed with some kind of overbright management. This was done to compensate for the lack of HDR lightmaps, since lightmaps were stored in an LDR format for performance reasons. This is controlled by the r_mapOverBrightBits cvar, which defaults to 2.¹

There's also the r_intensity cvar, which multiplies the brightness of all textures (including non-world textures, so it can have undesired effects on the HUD). I think r_intensity also multiplies the brightness of the lightmap itself, but I haven't verified this.

I've found that most id Tech 3 maps look subjectively better if you reduce r_mapOverBrightBits to 1 and increase r_intensity to 1.5. It gets rid of the notoriously "dull" look of some maps, especially in Enemy Territory.

That said, when using ACES tonemapping, this kind of tweak is probably counterproductive. It's worth keeping in mind if you still intend to use linear tonemapping (e.g. because of technical limitations, or to maximize performance).

On top of that, you may also want to add some constant ambient lighting to the whole scene. id Tech 3 doesn't have a built-in cvar for this, but I've found it can help with areas of maps that are too dark (which occurs more often with the aforementioned tweaks). DarkPlaces has an r_ambient cvar that adds to every texel of the lightmap (this differs from Godot's implementation, which max()es every texel with the ambient light instead). It will brighten the entire scene a bit, but it often looks subjectively better.

You can probably simulate the above tricks in Godot by manipulating the lightmap data with the Image class before loading it. If performance is an issue, you can cache the lightmap data to disk once it's been manipulated.

[Screenshots: Vanilla | Tweaked | Tweaked + simulated ambient²]

Footnotes

¹ There is also r_overBrightBits, but I personally haven't played much with it.

² I added constant brightness using GIMP. It'd look less dull if the actual lightmap data were modified.

@wareya commented May 15, 2022

I'm definitely well aware of the overbright-bits stuff. The main issue with ACES Fitted for me, pushing me toward post-processing tonemapping, is how it crushes dark areas in ways the original engine does not (and which the original map designers therefore did not account for). If there were a way to use ACES Fitted without the weird behavior at the dark end of the tonemapping curve(s), I might be able to use it and not have to rely on post-processing.

(The rest of this post is a tangent that doesn't really have anything to do with this proposal; feel free to ignore it.)

(Importantly, in Quake 3, when using the map-overbright-bits stuff on an old-school computer setup with the original OpenGL1 renderer, you were expected to have a brighter-than-normal monitor to make up the lost brightness. It also worked differently in windowed mode than in fullscreen mode, i.e. not at all. So when using ioquake3's OpenGL2 renderer, or anything similar where the design constraints are different, the exact settings you use are a bit touchy. See also: ioquake/ioq3#178. This is what I get with my current ioquake3 OpenGL2 settings, which doesn't lose overall brightness the way your tweaked screenshot does: https://user-images.githubusercontent.com/585488/168485094-339e88d4-cd99-4ac2-a1cd-17b37be5c3b6.png)

(As a side note, my earlier in-Godot screenshots use only the LDR part of the lightmaps, because the model conversion process I'm currently testing clips off the HDR part, with the sky-lit areas lit by a DirectionalLight instead. There is room for improvement; I might have to build tools for dumping HDR lightmaps manually if I can't find any, but that's unrelated to this proposal.)

> DarkPlaces has an r_ambient cvar that adds to every texel of the lightmap (this differs from Godot's implementation, which max()es every texel with the ambient light instead).

I'm, uh, actually doing witchcraft: loading the lightmaps as gray AO and then attempting to reintroduce the color with a second material pass in multiply blend mode. I'm still working on making it accurate (and might not be able to make it 100% accurate), but it means I don't (yet) have to interact with Godot's lightmap system, which seems largely built around in-engine baking; I haven't figured out how to shove pre-existing lightmaps into it yet. (Colored AO when? I know colored AO is super nonstandard, but it would simplify importing models that have already had full, colored light simulation done to them.)

@clayjohn (Member) commented Jun 2, 2022

Below is a sketch of some rough ideas that will have to be developed further.

Reduz and I discussed post-processing again today at the Godot sprint. We agreed on the following:

  1. To support this, we need to expose more resources used by the renderer to script (e.g. users need access to the backbuffer, depth buffer etc. from script)
  2. We need to implement hooks in the renderer where render passes can be inserted (this doesn't necessarily need to be constrained to post-processing)
  3. Ideally, an implementation would look something like a RenderingProcess resource that can be added to the Environment; the RenderingProcess resource would expose a script that issues rendering commands using the RenderingDevice. This is an alternative to using a visual graph (render passes are essentially a form of graph). A hypothetical sketch follows below.

@Calinou (Member) commented Jun 15, 2022

> Using ACES Fitted avoids this but changes the overall lighting balance of the scene, because ACES Fitted does a lot more than just desaturate very bright colors when they clip.

Not related to this proposal, but this makes me wonder whether we could add shadows/midtones/highlights adjustments to Environment (as part of the Adjustments checkbox). This would allow more gradual brightness adjustments than changing the entire scene's brightness. For instance, to counteract ACES' overall darkening of the scene, you could set Shadows to 1.4, Midtones to 1.2 and Highlights to 1.0 (all values default to 1.0).

I've seen other engines that support this out of the box, but I don't know how expensive this kind of filter is.

@WrobotGames commented Jul 23, 2022

Should this 'post-process shader' be part of the Environment resource or the CameraEffects resource? Or is the CameraEffects resource reserved for 'true' camera effects (exposure, DOF, motion blur, film grain, vignette)? (Shouldn't glow and adjustments be part of that resource then?) It's kinda vague.

@clayjohn (Member)

> Should this 'post-process shader' be part of the Environment resource or the CameraEffects resource? Or is the CameraEffects resource reserved for 'true' camera effects (exposure, DOF, motion blur, film grain, vignette)? (Shouldn't glow and adjustments be part of that resource then?) It's kinda vague.

Right now the CameraEffects resource is reserved for "true" camera effects, but it doesn't necessarily have to remain that way. In my opinion, custom post-processing should be implemented in Environment first; an override can then be added to CameraEffects if there is justification/demand.

@h0lley commented Oct 4, 2022

In terms of usability, how about a resource for each effect, plugged into an array held by WorldEnvironment? Array items can be nicely reordered via drag and drop in the inspector. It could be an inheritance tree like
Resource > PostProcessingEffect > FogEffect
and we would add our own effects by extending PostProcessingEffect.

That could also make for a better split between 2D and 3D. Related: #4564

@darthLeviN commented Nov 29, 2022

The workflow isn't the only thing that needs to change. The current system has overhead: it renders everything into a separate texture and then composites that onto the main one, instead of rendering directly into an existing texture.

There has to be a way to tell a viewport to reuse an existing viewport buffer. I suggest the changes below:

1. A new object type called ViewportHook that connects to an already existing Viewport (with the option to connect to the main one) and has an option to either clear the previous depth buffer, take a snapshot of it (and clear or not clear it afterwards), or not touch it at all.

  • The depth buffer snapshot could be accessible through a shader variable called DEPTH_SNAPSHOT.
  • If there is no depth buffer because there is no 3D rendering (not sure whether disabling 3D gets rid of the depth buffer), depth-buffer-related features would be ignored.

2. Create a new material/shader type called "viewport shader" that has DEPTH and ALBEDO inouts, and maybe a SKY out? I'm not sure what else could be added here; some matrix built-ins are needed for sure. This shader is then plugged into a Viewport or ViewportHook, and next_pass should allow stacking shaders. (A rough sketch follows after this list.)

  • For all shaders, not just these, an additional disabled property should be provided to skip the shader. If one is stacking shaders and wants an effect toggled from a game's options menu, this property would be used.

  • Adding any viewport shader should disable the automatic environment post-processing functions.

3. Add a new material type, ViewportMaterial: it basically moves all the environment management into a material that is plugged into the viewport.

  • Add a checkbox called inherit that disables all other options and just executes the current WorldEnvironment, giving the classic functionality along with pre- and post-processing. After checking the inherit option, other options would activate that let you exclude things like sky and fog rendering by default.

  • To support the advanced features provided by ViewportMaterial, something more than basic ALBEDO and DEPTH is needed. Special functions could be considered for these, like fog(), vfog(), bloom(), glow(), lens_flare().

4. In the project settings, one should be allowed to select a viewport material for the default viewport.

The benefits of this setup:
1. It lets the most advanced users avoid a lot of overhead.
2. It lets the existing visual shader editor give a visual perspective on the rendering workflow if needed.
3. It adds to the current system instead of changing it, and is backwards compatible.

This is the best I could come up with; I don't know if it has any flaws.

I just wanted to add that I don't think there can be a great workflow solution for post-processing in Godot 4, only something that's 'OK'. The backbone for the workflow upgrade should be put in place before any workflow upgrade happens.

@VantaGhost

I have been trying to implement various post-processing effects in Godot 4 and have had a rather frustrating experience, particularly with more complex effects, but even with simpler ones. I definitely think Godot needs a dedicated way to insert post-processing effects into the existing post-processing stack.

Both of the ways the Godot documentation suggests for doing post-processing come with crippling limitations:

  • Using a canvas shader, you have no access to Godot's various buffers and are forced to apply effects after all other post-processing has already been rendered into the color data (see the sketch after this list).

  • Using a fullscreen quad, you do have access to some buffers, but the effect is rendered during the transparent pass, which can cause all sorts of sorting issues, and it runs before all other post-processing. Rendering during the transparent pass also means that only opaque objects appear in the screen texture, essentially erasing transparent objects from the render if you want to draw to color directly or use them in your effects. (The fact that the screen texture is only captured once, after the opaque pass, is its own issue, but it is related here.)

An additional feature that would greatly benefit a new post-processing system is the ability to write to the buffers, just as the built-in processes are able to (I imagine, though I might be wrong).

These features would massively extend the capabilities of the renderer without needing to modify engine code, and without over-complicating things for those who just want to use the built-in effects.

OK, thanks for reading my rant. I'm really enjoying Godot so far, just finding a few features lacking, particularly in the technical art department.

@MegadronA03 commented Sep 15, 2023

For post-processing, I think it's better to add one node:

  • one that modifies the way a material is drawn to the screen: post-processing effects (which could also be used for volumetric editing), pre-depth processing, and fragment discarding, for cutting off parts of the rendered image using masks or the like (e.g. cutting a mesh with a plane + discard) without wasting performance on useless fragment calls. It should also have a setting to ignore the mesh rendering mask (a fullscreen-effect flag), so we could have bloom (aka glow) without cutouts from the mesh.

and one post-processing shader resource (if there are no plans to modify the rendering pipeline):

  • add a new screen texture hint that captures the screen once all nodes are rendered except those using this post-processing
  • if this shader type doesn't rely on data from the screen (screen-hint textures, etc.), it could be optimized for low-end devices, providing effects without actually drawing a screen-covering triangle in another fragment pass
  • note that varying variables (or whatever keyword ends up being used) should be available in the next shader pass (like passing a variable from pre-pass to material, and from material to post-process)

This node would "add" another next-pass shader to each child node (or to specific nodes together with their parents, which might be specified in the node itself; subject to change/review), so it should be much more flexible in terms of applying different effects to different things on the screen. That also includes the node that specializes in background rendering (currently the nodes that use Environment; I mention this because quite a lot of features should not be managed there).

That would probably also allow modifying node rendering properties, such as adding transparency or cutting a rendered mesh using masks or a discard shader. It also solves the transparency problem, because the post-processing literally modifies the end material of the object instead of adding one more layer of overdraw.

I'm also looking forward to changing the way skybox shaders work, so we could apply skybox textures to other meshes/things (even to post-processing) for fake skyboxes like in Source games.

Edit: after looking over my suggestions, I think it's better to approach shader typing the way Godot does its objects: a Main empty shader class with Sky, CanvasItem/2D, 3D, Prep, and Post children extending it, each with its own uniform hints, which could in turn be converted/extended into your own custom shader class, just like objects in Godot.

The base hierarchy that I thought of:

  • Main
    • Prep (pre depth processing)
    • Post (post-processing)
      • Environment (just has compilation of multiple shaders used for the same things as in Environment resource)
      • LightSettings (ambient and reflected light modifiers)
      • Tonemap
        • Linear
        • Reinhard
        • Filmic
        • ACES
      • Adjustments (contrast, brightness, saturation and color correction, though I'm thinking of splitting these into different classes)
      • Screen Read (this shader reads from provided textures; usually the screen texture, but not limited to it)
        • SSR
        • SSAO
        • SSIL
        • Bloom/Glow
          • FFT based
          • Blur based (not sure what it's called)
        • Depth Effects
          • DOF Blur
          • FogLinear (literally the cheapest fog, relying only on the depth value)
          • FogRadial (radial fog, without complex shapes)
        • Volumetric (used for smoke or advanced volumetric fog)
    • CanvasItem/2D (transforms)
    • Sky
      • Box
      • Physical
      • Procedural
      • Cylinder
      • Panorama
      • Layered (for using skyboxes with transparency, so part of the skybox could be changed)
    • Backbuffer (used for Screen Reading shaders)
    • Frame (used for Screen Reading shaders)
    • Color
      • Unshaded
        • StandardMaterial (the one with all the different material rendering options)
    • 3D (transform)
      • MultiMesh
        • GPUParticle

Each shader would have a "next pass" and a "prep pass" that allow constructing the desired shader from other shaders. This could be more flexible for adding obscure functionality, like drawing 2D elements directly in 3D or the other way around (3D on 3D) without the limitations of the SubViewport node.

This system would not only make the rendering process much more transparent to the end user, it would also make "next pass" much more useful than it currently is, and it solves the problems that post- and pre-process shaders cause with transparency.

WorldEnvironment should be renamed to WorldLight or LightSettings (because the suggested implementation literally deconstructs the Environment resource into multiple shaders), controlling the light settings within the engine.

P.S. I'm aware this literally changes the entire underlying rendering system, so it would probably land in something like Godot 5.

