Denoised lightmap does not blend with tiled meshes #82607
This looks like an issue with the source mesh UV2s. It is clear that both OIDN and JNLM aren't able to blend across the seams.
I think this is more of a denoiser problem, since OIDN only has seams along the edges, while JNLM has problems with blending. I can find some modular assets to test with, if need be.
I've done some testing, and I'm confident the issue lies with the denoisers sampling pixels outside of the relevant UV islands, which causes bleed from pixels belonging to other islands and from unrendered black pixels.

I recreated this scene on v4.3.dev with an emissive cube and separated floor planes, and this is the result using JNLM: I'm unsure why my test scene has so much light bleeding while the first image in this issue post doesn't. Without denoising this scene is seamless. JNLM has an effect similar to a large blur. I grabbed the non-denoised .exr (top) and denoised .exr (bottom) to compare, and you can see light bleeding over between islands.

This hypothesis is also supported by the recognizable dark borders that show up at the edge of the UV2 islands when using OIDN: I assume it attempts to preserve more detail and takes into account the high contrast between the unrendered pixels and the rendered pixels, causing some darkened pixels to bleed over into the lightmap, creating a dark line at the edge.

I'm unfamiliar with how lightmaps are typically denoised, and I'm sure there are open source points of reference to look at, but I can see two possible solutions:

1) The denoiser only samples pixels within the same UV2 island: this seems like the most straightforward route, though it could still result in seams through value differences between modular assets, which I assume would look very similar to the first image in this issue post.

2) The denoiser samples pixels based on the 3D world coordinates of the pixels: this approach may be very naive and have many issues I'm not foreseeing, but if doable it would not have issues with seams between different meshes. It would find appropriate pixels to sample from based on world-coordinate proximity and surface normals instead of pixel distance in the lightmap. The issue with solution #1 would not apply here, as it would be able to sample pixels that are relevant in 3D space but belong to a different UV2 island.
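As a rough illustration of solution 1, here is a minimal sketch of an island-aware smoothing filter, assuming the baker can produce a per-pixel island ID map. The `Grid` type, the `island_ids` input, and the simplified single-channel weight formula are all illustrative; this is not Godot's actual denoiser code.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Single-channel image for brevity; a real lightmap is RGB(A).
struct Grid {
    int width = 0, height = 0;
    std::vector<float> data;
    float at(int x, int y) const { return data[y * width + x]; }
};

// Solution 1 sketch: a non-local-means-style weight, but neighbours are
// rejected unless they belong to the same UV2 island as the centre pixel,
// so no energy can bleed across islands or in from unrendered texels.
// island_ids holds a per-pixel island index (e.g. -1 for unrendered pixels).
float denoise_pixel(const Grid &lightmap, const std::vector<int32_t> &island_ids,
        int x, int y, int half_window, float bandwidth) {
    const int32_t center_island = island_ids[y * lightmap.width + x];
    const float center = lightmap.at(x, y);
    float sum = 0.0f, weight_sum = 0.0f;
    for (int dy = -half_window; dy <= half_window; dy++) {
        for (int dx = -half_window; dx <= half_window; dx++) {
            const int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= lightmap.width || ny >= lightmap.height) {
                continue;
            }
            // The key change: only sample within the centre pixel's island.
            if (island_ids[ny * lightmap.width + nx] != center_island) {
                continue;
            }
            const float diff = lightmap.at(nx, ny) - center;
            const float w = std::exp(-(diff * diff) / (bandwidth * bandwidth));
            sum += w * lightmap.at(nx, ny);
            weight_sum += w;
        }
    }
    return weight_sum > 0.0f ? sum / weight_sum : center;
}
```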
I'm not sure I can add much here; this is a very typical issue that crops up with seams if you do not map the UV2 to be a contiguous island, in most engines. It's a pretty common limitation in lightmap workflows just about anywhere that implements them. I don't see how you could realistically solve this. You can't do a "world pixel position" search from the 2D lightmap to find samples to denoise with; you can only look at the pixels close to it in 2D. Any seams in the UV2 will translate to lightmap seams, and denoising can't work across them. It's worth mentioning that the higher res you go, the more you can usually brute-force the problem away... but it's easier to just model a contiguous plane to solve the problem.
Thanks @DarioSamo for having a look, this makes a lot of sense. It motivated me to spend some time today continuing to research why I've been able to get clean lightbakes with modular environments in other engines in the past; it does seem like brute forcing is the solution. Firstly, I'd like to address this statement though:
Unfortunately this doesn't solve the underlying issue. Modular workflows are extremely prevalent (even for indies), and for good reason. In the case of this demo scene you're right, it is easier to model this particular floor as one continuous plane, but this scene is not representative of the strengths of modular assets or the reasons you may want to use them. There is an abundance of reasons why a project might want to use a modular workflow, or might even only be feasible with one.

I also disagree with this being a fix that should take place solely in documentation; I think there are still viable steps to look into to improve support for this. That said, I do think it is a good idea to add documentation on the steps a user can take to ensure their modular environment gets quality bake results.

One thing that stands out to me, looking at baked GI on modular environments in other engines, is that seams between meshes there are caused by overall value differences between meshes, not by light leaks from other lightmap islands. The islands generally have no inconsistencies within them, i.e. pixels towards the edges of an island are lit consistently with pixels towards its centre. Here's an example of that in Unreal Engine. This is consistent with my own experience in various other engines.

This, to me at least, confirms that there are steps we could take to reduce the light leaks JNLM causes on modular assets using low-resolution lightmaps. I'm not very familiar with the LightmapGI code, but it looks like the range of pixels that the denoiser takes into account is static. If I'm correct that it is controlled by denoise_params.spatial_bandwidth, then it always samples in a range of 5 pixels, regardless of padding and regardless of resolution. This would explain the following two issues:

1) At low lightmap resolutions, that 5-pixel range extends past the island padding, so the denoiser samples unrendered pixels and pixels belonging to other islands, causing leaks.
2) At high lightmap resolutions, the same 5-pixel range covers a proportionally small area, limiting how much the denoiser can actually smooth.
To me it seems like increasing the padding between islands could solve both these issues: it would allow the denoiser in high-resolution lightmaps to sample from a further range, and would prevent the denoiser in low-resolution lightmaps from sampling outside its own island. There could be an argument made here for exposing lightmap padding as a user-controlled property. It would be great if the user could make that decision to fit their use case: lower padding for less memory usage and more optimal lightmap usage at the cost of worse smoothing, or more padding for better smoothing at the cost of more lightmap usage.

I made a second test scene using a directional light instead of an emissive material, baking the shadow to get a visual representation of lightmap resolution in my 3D scene, since there is no lightmap texel density checker. Here you can see that with a lightmap resolution of 64x64 pixels per face, the scene shows light leaking and lighting inconsistencies between meshes: I found that I had to increase the lightmap resolution by at least 4x in order to resolve this: This confirms that brute forcing works, but Godot's lightmapper workflow doesn't accommodate it. This is a quick mockup of the results that would give. I've taken the 4x lightmap, dilated it further to provide more padding for downscaling, and downscaled it in GIMP. This gives a clean and much more optimized result that seems to make modular environments with baked lighting viable.

As for those OIDN leaks, it seems they never extend beyond ~2 pixels. They could potentially be fixed by eroding the lightmap by 2 pixels before it gets dilated. If that's possible, it would probably be sufficient to entirely remove those unwanted dark outlines.
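If the erode-before-dilate idea turns out to be workable, a sketch of the erosion pass could look like this. It operates on the coverage mask of rendered texels; the 2-pixel radius is the estimate from the comment above, and the helper name is made up for illustration.

```cpp
#include <vector>

// Sketch: shrink the set of valid lightmap texels by `radius` pixels so that
// OIDN's darkened border texels are discarded before the usual dilation pass
// re-extends the islands. mask[y * w + x] is true for rendered texels.
std::vector<bool> erode_mask(const std::vector<bool> &mask, int w, int h, int radius) {
    std::vector<bool> out(mask.size(), false);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (!mask[y * w + x]) {
                continue;
            }
            bool keep = true;
            // Drop the texel if anything within `radius` is unrendered,
            // i.e. if it sits within `radius` pixels of an island edge.
            for (int dy = -radius; dy <= radius && keep; dy++) {
                for (int dx = -radius; dx <= radius && keep; dx++) {
                    const int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h || !mask[ny * w + nx]) {
                        keep = false;
                    }
                }
            }
            out[y * w + x] = keep;
        }
    }
    return out;
}
```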
I think I have found some pretty concrete steps we can take.

Padding

I increased the padding distance to cover the denoiser's HALF_SEARCH_WINDOW (10 pixels), so it no longer samples from pixels belonging to another island. This solved the light-leaking issues that occurred (most prominently) at lower lightmap resolutions, and gives a visual that's much more consistent with the behavior I've seen from other lightmappers in the past. I tested it at an extremely low lightmap resolution as well (floor pieces are 8x8 pixels), since that showed much more prominent light leakage after denoising, and it also shows a clear quality improvement (a sketch of this coupling follows below).

Denoiser settings and bake quality ray counts

The denoiser's settings being entirely static hurts quality. At higher ray counts it is possible to decrease the denoiser's half search window, since the pixels have converged much closer to ground truth; the denoiser then samples only from closer pixels, preserving much more detail. A lower search window would in turn allow reduced padding, utilising UV space better. This is that same low-resolution lightmap without denoising: especially at these low resolutions it should be possible to raise the lightmap quality enough that the pixels converge much closer to ground truth. You can see here that the noise is relatively limited and it's not far from reaching that point. However, it seems the bake quality ray counts are capped: the setting offers no increase in quality beyond 4096, and the bake time does not change between a ray count of 4096 and one of 32768. This is strange, because in the register_types.cpp of the lightmapper module the property hint range for these properties is set to:

Regardless, rendering at a ray count of 32768 offers the exact same result as the previous bake. Increasing the texel density so a floor is 32x32 pixels, you can see that without denoising and with a high ray count the result is almost clean, without the added cost of rendering a ton of extra pixels. Here are the same settings as the previous image but with denoising enabled, scrolling through some half-search-window ranges (with 10 being the current hardcoded value in master); the scene has a high enough ray count that a lower search window is justified. It is worth noting that this cannot be achieved by simply adjusting the denoiser strength slider: keeping the 10-pixel search window and decreasing the strength introduces a notable banding-like effect around the edges of lightmap islands.

However, combining everything (increased padding to avoid leaks, a higher ray count for better convergence, a decreased search window of 3 pixels for this one, and a reduced denoiser strength of 0.025 like the previous image), we are able to achieve this result, with no rendering at higher resolution and downscaling.

A takeaway I'd like to share before concluding is that I think the default ray counts are too low. There is too much reliance on the denoiser to get a smooth result, while the rendered result itself is still too noisy. There is practically no use in using anything other than the ultra setting, and even with "ultra" I've consistently needed to raise the ray counts to get clean results in simple scenes.
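The padding experiment boils down to coupling the packer's island margin to the denoiser's reach. A sketch of that coupling, where only HALF_SEARCH_WINDOW corresponds to the constant mentioned above and the function itself is hypothetical:

```cpp
// Hypothetical coupling of UV2 island padding to the denoiser's reach.
// HALF_SEARCH_WINDOW matches the constant referenced above; the rest is a sketch.
const int HALF_SEARCH_WINDOW = 10;

int compute_island_padding(int user_padding, bool use_denoiser) {
    // Islands must be separated by at least the search radius, otherwise the
    // JNLM window at an island edge can reach texels of a neighbouring island.
    const int minimum = use_denoiser ? HALF_SEARCH_WINDOW : 1;
    return user_padding > minimum ? user_padding : minimum;
}
```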
Final comparison

This is not an exact science, but after pretty rigorous testing I've found these to be the minimum settings required to achieve clean bakes, both in master and after adjusting the padding to take the half_search_window into account.

Master:
Custom:
@atirut-w's test scene

Using a combination of everything I've mentioned, I've been able to reduce the seams between the meshes to a very reasonable amount. For a render time of 10 seconds, I'd say this is more than sufficient.

Conclusion

The answer to creating seamless bakes for modular environments:

1) Increase the padding between UV2 islands to at least the denoiser's half search window, so it cannot sample across islands.
2) Raise the ray counts (and uncap the quality settings) so pixels converge closer to ground truth.
3) Reduce the denoiser's search window and strength accordingly to preserve detail.
4) Optionally, bake at a higher resolution and downscale for extra smoothing.
These improvements also mean quality improvements for non-modular scenes and the lightmapper's quality in general. The nice thing about these changes is that they don't require extensive new feature implementations (except the downscaling, I guess).
Very nice. Do you have a PR for this?
Not yet; these tests were done by adjusting the hardcoded values and recompiling.
I have a proof of concept implementation here: https://github.com/Calinou/godot/tree/lightmapgi-add-downsampling @lander-vr Could you give it a try to see if it works for you? Note that it's built on top of
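For anyone curious, the general idea behind downsampling (not necessarily how the linked branch implements it) is to bake at an integer multiple of the target resolution and then average each block of texels down. A minimal single-channel sketch for a 2x factor:

```cpp
#include <vector>

// Sketch of 2x supersampled baking: the lightmap is baked at twice the target
// resolution, then each 2x2 block is averaged into one output texel.
// Single-channel floats for brevity; a real lightmap is RGB(A) and should
// skip or renormalise unrendered texels instead of averaging them in.
std::vector<float> downsample_2x(const std::vector<float> &src, int src_w, int src_h) {
    const int dst_w = src_w / 2, dst_h = src_h / 2;
    std::vector<float> dst(dst_w * dst_h);
    for (int y = 0; y < dst_h; y++) {
        for (int x = 0; x < dst_w; x++) {
            dst[y * dst_w + x] = 0.25f * (src[(2 * y) * src_w + (2 * x)] +
                                          src[(2 * y) * src_w + (2 * x + 1)] +
                                          src[(2 * y + 1) * src_w + (2 * x)] +
                                          src[(2 * y + 1) * src_w + (2 * x + 1)]);
        }
    }
    return dst;
}
```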
@Calinou much better result with downsampling. How much slower is it?
On this simple scene, this increases bake times from 4 seconds to 7 seconds on an RTX 4090, since the ray count isn't adjusted to compensate for downsampling (which means 4x more rays are thrown per pixel). On a larger scene, you can usually expect bake times to increase by a factor between 3x and 4x. Note that #54679 will be hit sooner with downsampling too until it's fixed, since the image is 2x larger on each axis while baking.
Super glad to see you working on this @Calinou, great stuff! I just did some tests, and it works perfectly, as I'd expect it would. I'm wondering if it would be possible to have this as a slider? I personally think a downsample slider should even replace the texel-scale slider in the LightmapGI node, with the texel-scale slider being better suited to the project settings:
A common artist workflow for baked lighting is to first bake with less scaling for faster bake times, so they can quickly preview and fine-tune results, and then, when committing to a final bake, add a bunch of downsampling to get more accuracy. The quality (noise-wise) of those bakes should stay consistent in my opinion; otherwise it impairs this workflow. The increase in bake times is fine, I think; high-quality bakes are known to take time. Automating settings for baked GI (or path-traced rendering in general) is super tricky because scenes and environments vary so much. I'm afraid that changing parameters under the hood that are, on the surface, controlled by users can become a cause of frustration and confusion, i.e. results change when users haven't adjusted any baking settings.
It can be done as a float property, but values above 2.0 will be really slow (and run into #54679 even earlier), and values below 2.0 can be a bit blurry due to the nature of downscaling by a non-integer factor. Remember that the increase in bake times is quadratic: a scale of 4.0 bakes 16 times as many pixels, while a scale of 2.0 is "only" 4 times as many pixels as scale 1.0.
In that case, could it make sense to temporarily cap the property at 2.0, and once that issue is resolved increase the limit?
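In Godot's property registration terms, a temporary cap could be as simple as the hint range passed in _bind_methods(); the property and accessor names below are hypothetical, not taken from the linked branch:

```cpp
// Hypothetical registration capping the factor at 2.0 until #54679 is fixed;
// the upper bound in the hint string can later be raised once that's resolved.
ADD_PROPERTY(PropertyInfo(Variant::FLOAT, "supersampling_factor",
        PROPERTY_HINT_RANGE, "1.0,2.0,0.1"),
        "set_supersampling_factor", "get_supersampling_factor");
```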
I think that's fine, since lightmaps don't really carry much high-frequency detail, and if I'm not wrong this blurriness wouldn't extend further than a pixel. Lightmaps are kinda blurry by default, so it would surprise me if it were even noticeable in the end result; I could run some tests on this.
I'm very aware of this, because it is not unusual at all for baked GI (at least based on the many hours I've spent baking in Unreal and Unity). Ramping up the quality, you'll see bake times skyrocket for less and less return (the quality increase between 1x and 2x will be bigger than going from 2x to 4x). In the end, an archviz scene has different requirements than a mobile game, and a large environment has different possibilities compared to a small one. To be able to push the baker's quality, allowing higher values needs to be possible, even if that means much, much longer baking times.
Godot version
v4.2.dev.custom_build [0ca8542]
System information
Godot v4.2.dev (0ca8542) - Windows 10.0.22621 - Vulkan (Forward+) - dedicated AMD Radeon RX 6700 XT (Advanced Micro Devices, Inc.; 27.20.21034.37) - AMD Ryzen 7 5700X 8-Core Processor (16 Threads)
Issue description
I'm not quite sure how to describe the condition under which this issue happens, so I'll explain my MRP instead.
The scene is composed of a 3x3 set of plane meshes and a single cube. When baking the scene, the lightmap does not appear to blend between the tiles correctly when the denoiser is enabled:
This is most likely caused by the new JNLM denoiser, as the lightmap looks (mostly) correct when the denoiser is disabled:
As a side note, the parts of the lightmap under the cube seem to "slide out".
This might be easy to fix: I tried negating the half-pixel offset made in #81872, but it slid the other way instead.
Steps to reproduce
See above/MRP.
Minimal reproduction project
lightmap_inconsistencies.zip