Create a more powerful and customizable workflow for custom post-processing effects #2196
reduz suggested adding a FullScreenQuad node, which would consist of a single triangle covering the whole screen. This would make post-processing effects easier to add while keeping them easy to distribute on the asset library. (A single triangle covering the whole screen is minutely faster than two triangles, at least on desktop platforms.)
A FullScreenQuad node that automatically covers the screen would be excellent - with my custom post-process motion blur, I sometimes see the effect 'lagging' behind ...
Note: the FullScreenQuad idea would be sorted into the alpha pass of the regular render pass and would still use a spatial material internally. Users would be responsible for setting the material to transparent and unshaded. Importantly, this runs before tonemapping and the built-in post-processing, so it would be rather limited. It is essentially a built-in way of doing the post-processing method described in the docs. The FullScreenQuad method would, however, be very simple to add and wouldn't require any changes to the renderer (i.e. it can just be added as a node). We also discussed something like this proposal earlier, i.e. a custom post-processing pass. Overall, I am unhappy with either approach. I think in the long run, we need to redesign how we handle post-processing to better support making custom post-processing effects.
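For reference, the docs method mentioned above boils down to a quad stretched over the screen with an unshaded spatial material. A minimal sketch, assuming Godot 3.x where SCREEN_TEXTURE is built in (in 4.x, a screen-texture uniform has to be declared explicitly):

```glsl
shader_type spatial;
render_mode unshaded;

void vertex() {
    // Stretch the quad across the entire screen in clip space,
    // ignoring where the MeshInstance actually sits in the world.
    POSITION = vec4(VERTEX, 1.0);
}

void fragment() {
    // Sample the scene rendered so far and apply the effect (invert, here).
    vec3 screen = textureLod(SCREEN_TEXTURE, SCREEN_UV, 0.0).rgb;
    ALBEDO = vec3(1.0) - screen;
}
```

As the comment above notes, this runs before tonemapping and the built-in post-processing, which is exactly the limitation being discussed.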
mux and clayjohn had a short discussion about this as well. Both built-in effects and user-created ones would be available as nodes and could be chained together in any order.
Tangent: I was sure I had mentioned it, but currently post-process effects affect everything (e.g. gizmos) - we need a way to exclude some nodes/visual layers from them.

This is being tracked in #2138.
I hope that whatever system is decided on here doesn't make it unnecessarily complex or have unnecessary drawbacks. The background behind my opinion follows; skip to the bottom to see what I actually have to say.

One of my projects involves being able to load maps from Quake-like games, but also involves HDR lighting. Going in-game with the default tonemapping results in crushed, oversaturated colors, because the default tonemapping is just a clipping function (blue light added to help demonstrate the limitations of pure clipping): https://user-images.githubusercontent.com/585488/168461301-9d29c9d6-f931-47bc-83d2-c6b734cbace8.jpg

Using ACES Fitted avoids this, but changes the overall lighting balance of the scene, because ACES Fitted does a lot more than just desaturate very bright colors when they clip. This is undesirable, because this project uses maps from pre-existing games, and their lighting is no longer replicated even remotely faithfully: https://user-images.githubusercontent.com/585488/168461315-277497dc-2d2b-447a-b8bf-0c8de4e42679.jpg

I wrote a custom tonemapping shader that "just" desaturates high-energy colors, and it looks fine: https://user-images.githubusercontent.com/585488/168461347-2427e279-adcf-49e0-ab2f-45a250f06aa7.jpg

(Side note: custom tonemapping curves would not help me here; only custom tonemapping cubes or custom tonemapping shaders would.)

Custom tonemapping shaders are not (yet?) supported, and the workaround necessary to make this work - using a viewport plus a viewport container or viewport texture - has a lot of drawbacks (involving Controls, window scaling modes, etc.). It also means that the lighting cannot be previewed accurately in-editor, because the custom tonemapping shader isn't part of the 'environment'.

Salient point: if the system decided on here causes problems for Control nodes, or window viewport scaling, or has any of the other downsides that the viewport-based workaround has, it may end up being disused. Custom post-processing should be basically transparent to the rest of the development experience, including its interaction with other, unrelated features (like CanvasLayers, window viewport scaling, etc.). Approaches that are not basically transparent once they're set up should be scrutinized heavily to see whether their tradeoffs are actually necessary. As such, I'm skeptical of the FullScreenQuad approach.
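For illustration, a sketch of that kind of tonemap (hypothetical - not the author's actual shader; assumes the HDR scene is rendered into a Viewport whose texture feeds a canvas_item shader):

```glsl
shader_type canvas_item;

void fragment() {
    // HDR scene color sampled from a Viewport texture.
    vec3 hdr = texture(TEXTURE, UV).rgb;
    float lum = dot(hdr, vec3(0.2126, 0.7152, 0.0722));
    if (lum > 1.0) {
        // Instead of clipping each channel (which shifts hue and
        // oversaturates), scale down to luminance 1.0 and blend
        // toward white as the pixel gets brighter.
        float t = clamp(1.0 - 1.0 / lum, 0.0, 1.0);
        hdr = mix(hdr / lum, vec3(1.0), t);
    }
    COLOR = vec4(clamp(hdr, vec3(0.0), vec3(1.0)), 1.0);
}
```

Colors below luminance 1.0 pass through unchanged, so the scene's original lighting balance is preserved, unlike with ACES Fitted.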
As an aside, remember that id Tech 3 lightmaps are designed to be displayed with some kind of overbright management. This was done to compensate for the lack of HDR lightmaps, since lightmaps were stored in an LDR format for performance reasons. This is controlled by the r_overbrightbits cvar; there's also the r_mapoverbrightbits cvar, which applies to the map's lightmaps specifically. I've found that most id Tech 3 maps look subjectively better if you reduce the map overbright factor.

That said, when using ACES tonemapping, this kind of tweak is probably counterproductive. It's worth keeping in mind if you still intend on using linear tonemapping (e.g. because of technical limitations or to maximize performance).

On top of that, you may also want to add some constant ambient lighting to the whole scene. id Tech 3 doesn't have a built-in cvar for this, but I've found that it can help with areas of maps that are too dark (which occurs more often with the aforementioned tweaks). DarkPlaces has an r_ambient cvar for this.

You can probably simulate the above tricks in Godot by manipulating the lightmap data with the Image class before loading it. If performance is an issue, you can cache the lightmap data to disk once it's been manipulated.
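A minimal sketch of that idea in GDScript (Godot 3.x Image API; the multiplier and ambient values are illustrative, not taken from any engine):

```gdscript
# Apply an overbright-style multiplier plus a constant ambient floor to an
# LDR lightmap before using it. The default constants are illustrative only.
func preprocess_lightmap(img: Image, multiplier: float = 2.0, ambient: float = 0.05) -> void:
    img.lock()
    for y in img.get_height():
        for x in img.get_width():
            var c: Color = img.get_pixel(x, y)
            img.set_pixel(x, y, Color(
                min(c.r * multiplier + ambient, 1.0),
                min(c.g * multiplier + ambient, 1.0),
                min(c.b * multiplier + ambient, 1.0),
                c.a))
    img.unlock()
```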
Definitely well aware of the overbrightbits stuff. The main issue with ACES Fitted for me here, pushing me to post-processing tonemapping, is how it crushes dark areas in ways that the original engine does not (and, as such, that the original map designers did not account for). If there were a way to use ACES Fitted without the weird stuff happening at the dark end of the tonemapping curve(s), I might be able to use it and not have to rely on post-processing.

(Rest of this post is a tangent that doesn't really have anything to do with this proposal; feel free to ignore.)

(Importantly, in Quake 3, when using the mapoverbrightbits stuff on an old-school computer setup with the original OpenGL1 renderer, you were expected to have a brighter-than-normal monitor making up the lost brightness. It also worked differently in windowed mode than in fullscreen mode, i.e. not at all. So when using ioquake3's OpenGL2 renderer, or anything similar, where the design constraints are different, the exact settings you use are a bit touchy. See also: ioquake/ioq3#178. This is what I get with my current ioquake3 OpenGL2 settings, which doesn't lose out on overall brightness like your tweaked screenshot does: https://user-images.githubusercontent.com/585488/168485094-339e88d4-cd99-4ac2-a1cd-17b37be5c3b6.png)

(As a side note, my earlier in-Godot screenshots are using only the LDR part of the lightmaps, because the model conversion process I'm currently testing with clips off the HDR part of the lightmaps, with the sky-lit areas being lit up by a DirectionalLight instead. Room for improvement - I might have to build tools for dumping HDR lightmaps manually if I can't find them - but unrelated to the proposal this thread is about.)
I'm, uh, actually doing witchcraft: loading the lightmaps as gray AO and then attempting to reintroduce the color with a second material pass in multiply blend mode. Still working on making it accurate (and I might not be able to make it 100% accurate), but it means I don't (yet) have to interact with Godot's lightmap system (which seems largely based around in-engine baking, and I haven't figured out how to shove pre-existing lightmaps into it yet). (Colored AO when? I know colored AO is super nonstandard, but it would simplify importing models that have already had full, colored light simulation done to them.)
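A sketch of what that second pass might look like (hypothetical; assumes the lightmap's color lives in a texture mapped on UV2):

```glsl
shader_type spatial;
render_mode blend_mul, unshaded;

// Hypothetical texture slot holding the lightmap's color information.
uniform sampler2D lightmap_color : hint_albedo;

void fragment() {
    // Multiply the already-shaded surface by the lightmap's color,
    // reintroducing the tint that the gray AO pass discarded.
    ALBEDO = texture(lightmap_color, UV2).rgb;
}
```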
The below is a sketch of some rough ideas that will have to be developed further. Reduz and I discussed post-processing again today at the Godot sprint, and we agreed on the following:
Not related to this proposal, but this makes me wonder if we could add shadows/midtones/highlights adjustments to Environment (as part of the adjustments checkbox). This would allow for more gradual adjustments of brightness compared to just adjusting the entire scene's brightness. For instance, to counteract ACES' overall darkening of the scene, you could set Shadows to 1.4, Midtones to 1.2 and Highlights to 1.0 (all values default to 1.0). I've seen other engines that support this out of the box, but I don't know how expensive this kind of filter is.
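For what it's worth, this kind of filter can be very cheap - a handful of ALU operations per pixel. A sketch of one common approach (illustrative only, not how any particular engine implements it): three luminance-based weights applied as gains.

```glsl
// Hypothetical shadows/midtones/highlights gains, weighted by luminance.
vec3 smh_adjust(vec3 color, float shadows, float midtones, float highlights) {
    float lum = dot(color, vec3(0.2126, 0.7152, 0.0722));
    // Smooth weights that sum to 1: dark pixels follow `shadows`,
    // mid-gray pixels follow `midtones`, bright pixels follow `highlights`.
    float s = 1.0 - smoothstep(0.0, 0.5, lum);
    float h = smoothstep(0.5, 1.0, lum);
    float m = 1.0 - s - h;
    return color * (s * shadows + m * midtones + h * highlights);
}
```

With the example values above, `smh_adjust(color, 1.4, 1.2, 1.0)` brightens shadows the most while leaving highlights untouched.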
Should this 'post-process shader' be part of the Environment resource or the CameraEffects resource?

Right now, the CameraEffects resource is more reserved for "true" camera effects, but it doesn't necessarily have to remain that way. In my opinion, the custom post-processing should be implemented in Environment first; then an override can be added to CameraEffects if there is justification/demand.

In terms of usability, that could also make for a better split between 2D and 3D. Related: #4564
The workflow isn't the only thing that needs to be changed. The current system has overhead: it renders everything into a separate texture and then stacks it on the main one instead of rendering directly into an existing one. There has to be a way to tell a viewport to reuse an existing viewport buffer. I suggest the changes below (see the hypothetical sketch after this list):

1. A new object type.
2. A new material/shader type, a "viewport shader".
3. A new material type.
4. In the project settings, one should be allowed to select a viewport material for the default viewport.

The benefits of this setup:

This is the best I could come up with; I don't know if it has any flaws. Just wanted to add that I don't think there can be a great workflow solution for post-processing in Godot 4, but rather something that's 'OK'. The backbone for the workflow upgrade should be put in place before there is any workflow upgrade.
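To make the buffer-reuse intent concrete, a purely hypothetical GDScript sketch (none of these properties or settings exist in Godot; they only illustrate the idea):

```gdscript
# Entirely hypothetical API - nothing here exists in Godot today.
# The idea: a viewport renders straight into another viewport's buffer
# instead of allocating its own texture and compositing afterwards.
func _ready() -> void:
    var effect_viewport := Viewport.new()
    effect_viewport.reuse_buffer_of = get_viewport()  # hypothetical property
    add_child(effect_viewport)
    # Hypothetical project setting corresponding to point 4:
    # rendering/viewport/default_viewport_material = "res://post/grade.material"
```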
I have been trying to implement various post-processing effects in Godot 4 and have had a rather frustrating experience, particularly with more complex effects, but even with simpler ones. I definitely think Godot needs a dedicated way to place post-processing effects into the existing post-processing stack. Both of the ways the Godot documentation suggests for doing post-processing come with crippling limitations:
An additional feature that would greatly benefit a new post-processing system would be the ability to write to the buffers, just like the built-in processes are able to (I imagine, though I might be wrong). These features would massively extend the capabilities of the renderer without needing to modify engine code or over-complicate things for those who just want to use the built-in effects. OK, thanks for reading my rant. I'm really enjoying Godot so far, just finding a few features lacking, particularly in the technical art department.
For post-processing, I think it's better to add one node, plus one post-processing shader resource (if there are no plans to modify the rendering pipeline).

This node would "add" another next-pass shader to each child node (or to specific nodes and their parents - this might be specified in the node itself; subject to change/review), so it should be much more flexible for applying different effects to different things on the screen. That also includes nodes that specialize in background rendering (currently, those are the nodes that use Environment; I mention this because there are quite a lot of features that should not be managed there). It might also allow modifying a node's rendering properties - adding transparency, or cutting a rendered mesh using masks or a discard shader. That also solves the problem with transparency, because the post-processing literally modifies the end material of the object instead of adding one more layer of overdraw. I'm also looking forward to changing the way skybox shaders work, so we could apply skybox textures to other meshes/things (even to post-processing) for fake skyboxes like in Source games.

Edit: after looking into my suggestions, I think it's better to approach shader typing the way Godot does it with its objects: a main empty Shader class with Sky, CanvasItem/2D, 3D, Prep, and Post children extending from it, each with their own uniform hints, which could also be converted/extended into your own custom shader class for editing, just like objects in Godot. The base hierarchy I thought of:

- Shader (main, empty)
  - Sky
  - CanvasItem (2D)
  - 3D (Spatial)
  - Prep
  - Post

Each shader would have a "next pass" and a "prep pass" that allow constructing the desired shader out of other shaders. This could be more flexible for adding obscure functionality, like drawing 2D elements directly in 3D, or the other way around (or 3D on 3D), without the limitations of the SubViewport node. This system not only makes the rendering process much more transparent to the end user, but also makes "next pass" much more useful than it currently is, and it solves the problem with post- and pre-process shaders and the issues they cause with transparency. WorldEnvironment should be changed to WorldLight or LightSettings (because the suggested implementation literally deconstructs the Environment resource into multiple shaders) and would control the light settings within the engine.

P.S. I'm aware this literally changes the entire underlying rendering system, so it's probably going to land in something like Godot 5.
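The node half of this could be prototyped today with the existing `next_pass` property on Material. A sketch (the "PostProcess" node itself is hypothetical, and this only handles meshes with a material override):

```gdscript
# Sketch of a hypothetical "PostProcess"-style node: chain an extra
# material onto every MeshInstance below it via the existing next_pass.
extends Spatial

export(Material) var post_material

func _ready() -> void:
    _apply(self)

func _apply(node: Node) -> void:
    if node is MeshInstance and node.material_override:
        node.material_override.next_pass = post_material
    for child in node.get_children():
        _apply(child)
```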
Describe the project you are working on
Various 3D projects
Describe the problem or limitation you are having in your project
The current workflow for adding custom post-processing is cumbersome, cannot be previewed in the editor camera, and is limited to being applied after all built-in post-processing. It also makes it complicated to set up game settings that let the user enable/disable custom post-processing effects. I assume it also ends up using new buffers for each effect added, making it more expensive than if it could reuse buffers in the built-in post-processing stack.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
I'd like the ability to add custom post-processing in the WorldEnvironment node and to choose where in the stack it gets applied.
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
I could imagine an interface like this:
Where you could add a list of post-processing shaders, manage their shader parameters, and decide where in the stack they are applied.
Possibly another shader_type would be added that had specific hooks for using the same buffers as the rest of the stack, if that is not possible with canvas_item.
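As a sketch of how the proposal might surface in script (every property name here is invented for illustration; nothing like it exists in Environment today):

```gdscript
func _ready() -> void:
    # Hypothetical API sketch: `post_process_effects` and
    # `post_process_insert_after` are invented names that only
    # illustrate the proposed workflow.
    var effect := ShaderMaterial.new()
    effect.shader = preload("res://shaders/custom_tonemap.shader")

    var env: Environment = $WorldEnvironment.environment
    env.post_process_effects = [effect]        # hypothetical property
    env.post_process_insert_after = "tonemap"  # hypothetical stack position
```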
If this enhancement will not be used often, can it be worked around with a few lines of script?
This could not easily be added with a few lines of code and would be used often.
Is there a reason why this should be core and not an add-on in the asset library?
This would have to be done in core.