WIP: renderer: implement GLSL sRGB support #1034
Conversation
Actually the code in
There is also a less good and cheaper implementation using a gamma conversion instead of the correct sRGB calculation, which is less precise but would be good enough to use in low presets as it would be faster at the expense of correctness. We may add a cvar to select this cheap implementation in low presets.
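For illustration, here is a minimal GLSL sketch of the two conversions being discussed; the function names are hypothetical and this is not the exact code from this branch:

// Exact sRGB decoding (piecewise curve), per component.
vec3 sRGBToLinearExact( vec3 c )
{
	vec3 low = c / 12.92;
	vec3 high = pow( ( c + 0.055 ) / 1.055, vec3( 2.4 ) );
	return mix( low, high, step( vec3( 0.04045 ), c ) );
}

// Cheap gamma approximation: a single pow(), less precise but faster.
vec3 sRGBToLinearCheap( vec3 c )
{
	return pow( c, vec3( 2.2 ) );
}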
I forgot to mention that with this implementation, when testing Xonotic maps, the lights looked correct, but some colors were not as warm as I expected (but that's less bad than what tweaking
How to test:
Because of a bug, the legacy game levels are not loadable from the command line, nor from the console, without tricking the engine into discovering the game levels. To load the
I use Xonotic maps as test maps because all Xonotic stock maps use all those sRGB features, so they are perfect test beds for us. Some unrelated DarkPlaces/Xonotic features may be missing or buggy in the Dæmon engine, but loading those Xonotic game levels is good enough to test the sRGB engine features.
For debugging purposes, I added two cvars:
Also I don't know why, but it definitely looks like I'm still missing something else to properly render those Xonotic maps: the maps are darker than I expect, and I still see some things that are clearly wrong.
I have no explanation for this, but when I load Xonotic maps, if after linearizing the lightmap I multiply it by
Actually, if I'm right multiplying light by
I believe we already store binaries somewhere; I never checked where, though. Or you can do glGetProgramBinary. Note though that different GPUs and different drivers will produce different binaries.
Yeah I saw that it's doing a bitshift in code, so it's multiplying by pow( 2, mapOverBrightBits ) essentially (with a check for overflow).
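(For reference, a minimal GLSL-style sketch of that idea, with a hypothetical name and not the actual tr_bsp.cpp code: the color is scaled by 2^mapOverBrightBits and, on overflow, rescaled so the largest channel saturates while the channel ratios are preserved.)

vec3 applyMapOverBright( vec3 color, float mapOverBrightBits )
{
	// Multiply by pow( 2, mapOverBrightBits ).
	color *= exp2( mapOverBrightBits );

	// Overflow check: keep the hue by dividing by the largest channel.
	float maxChannel = max( color.r, max( color.g, color.b ) );
	return ( maxChannel > 1.0 ) ? ( color / maxChannel ) : color;
}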
Do you mean you have seen this in DarkPlaces? Would you be able to find me that DarkPlaces code?
No, this one: Daemon/src/engine/renderer/tr_bsp.cpp, lines 54 to 95 in bf333c6
I know this one. 🙂️ But I would like to know what DarkPlaces does with game levels using sRGB lightmaps…
I can't find anything in their repository for overbrightbits except a commit to remove them that mentions tweaking some other values manually instead, but I can't find any mentions of those either. So maybe they just compute everything in the same colorspace?
I have to rework some commits as some unwanted spacing changes went into the wrong commit, but looking at the PR as a whole, it is starting to become ready. One may try this branch with those maps:
And:
The ATCSHD map is a perfect example of what we can expect in terms of lighting. Other maps may require some adjustment, like dimming the backlight of light-emitting shaders that were tailored for an engine doing non-physically correct rendering.
The default overBrightBits is
This PR also brings a special cvar named
The non-broken overBrightBits implementation is not used when rendering linearly-computed maps, as those are expected to always be tested with a non-broken engine (either DarkPlaces or Dæmon with this code).
Parpax map from Unvanquished 0.54.1, non-physical lighting, broken overBrightBits:
Vega map from Unvanquished 0.54.1, non-physical lighting, non-broken overBrightBits:
Vega map with this branch, physical lighting, non-broken overBrightBits:
All those parpax screenshots are done with
Even when we cannot recompile a map, we may at least fix the overBrightBits when this doesn't break something else.
ATCS Zone Alpha, non-physical lights, with broken overBrightBits (like in Unvanquished 0.54.1):
ATCS Zone Alpha (exact same build), non-physical lights, non-broken overBrightBits:
Edit: Actually this looks close to the level shot and thus, I guess, to the mapper's intention:
I still have to port the code to the Material system, and to make sure colorspace translation is done with models too. The visual result of the maps is as expected.
I've only looked at the code briefly, but as far as I understand
I get something strange: if I linearize the light grid at loading time, I get something slightly different from what I get when doing it in GLSL. I assume what I get with the GLSL implementation is right. Here with an sRGB build of ATCSHD (one should ignore the creep on those screenshots for now).
C++:
GLSL:
C++:
GLSL:
The difference between the two implementations is:

diff --git a/src/engine/renderer/glsl_source/lightMapping_fp.glsl b/src/engine/renderer/glsl_source/lightMapping_fp.glsl
index 7c549a187..f30d67c4e 100644
--- a/src/engine/renderer/glsl_source/lightMapping_fp.glsl
+++ b/src/engine/renderer/glsl_source/lightMapping_fp.glsl
@@ -176,6 +176,8 @@ void main()
 	ReadLightGrid(texture3D(u_LightGrid1, lightGridPos), lightFactor, ambientColor, lightColor);
 	// The light grid conversion from sRGB is done in C++ code when loading it.
+	convertFromSRGB(lightColor, linearizeLightMap);
+	convertFromSRGB(ambientColor, linearizeLightMap);
 	color.rgb = ambientColor * r_AmbientScale * diffuse.rgb;
 #endif
diff --git a/src/engine/renderer/tr_bsp.cpp b/src/engine/renderer/tr_bsp.cpp
index 8b8f2f5f3..51f589643 100644
--- a/src/engine/renderer/tr_bsp.cpp
+++ b/src/engine/renderer/tr_bsp.cpp
@@ -4157,8 +4157,8 @@ void R_LoadLightGrid( lump_t *l )
 	if ( tr.worldLinearizeLightMap )
 	{
-		ambientColor[ j ] = convertFromSRGB( ambientColor[ j ] );
-		directedColor[ j ] = convertFromSRGB( directedColor[ j ] );
+//		ambientColor[ j ] = convertFromSRGB( ambientColor[ j ] );
+//		directedColor[ j ] = convertFromSRGB( directedColor[ j ] );
 	}
 }
I'm closing this and re-opening it to have a clear comment history. Most of this thread was about full overbright, which was an effort required by this sRGB effort.
See continuation there:
This is a GLSL implementation, and it produces awful color banding; I assume this is caused by precision loss when computed data is passed from framebuffer to framebuffer.
Why a GLSL implementation?
We have to keep the computed colors of a stage in linear space in order to have correct stage blending, but then we lose precision when converting to sRGB for display at the very end.
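As a rough illustration of why blending in the wrong space matters (my own numbers, assuming an additive blend and the standard sRGB curve, not something taken from this code): two stages that each contribute an sRGB value of 0.5 add up to 1.0 (pure white) if added directly in sRGB space, while doing it properly in linear space gives srgb⁻¹(0.5) + srgb⁻¹(0.5) ≈ 0.214 + 0.214 = 0.428, which encodes back to roughly sRGB 0.69, a noticeably dimmer and physically correct result.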
I know OpenGL provides native features for sRGB but I want a GLSL fallback to exist if:
About the second point, one problem we have is that when loading light styles, we may upload an image to the GPU before knowing it's a lightmap, meaning that when we know it's a lightmap, it's already too late to use OpenGL builtin features to declare the texture as sRGB. This may be fixed with a complex rewrite of our shader parsing code, by not loading images on the fly but deferring all image loading until the stage is fully parsed.
I'm not against a native OpenGL sRGB implementation, but I believe that should come later, once we have the most generic implementation working.
About precision loss
I'm also not sure that using the builtin OpenGL sRGB features would fix the color banding produced here, as I expect the sRGB framebuffer to have the same precision.
My understanding of the builtin OpenGL sRGB feature is that if you have an sRGB framebuffer, and linear and sRGB images properly declared as such, things are automatically converted for you on input and output.
But we blend multiple stages.
An engine like DarkPlaces that implements the same features doesn't support stage blending, so it's easier for them to implement those sRGB features, and maybe they just avoid the problems we are facing because they don't have to handle the use cases that produce such errors.
For example, if I implement the exact same GLSL code but contained in a single stage, in the lightMap_fp.glsl code, and use a game level that only displays single-stage materials, I get no color banding at all.
What those features are about?
About operating in linear space
To have a physically correct light computation, the linear light map should be blended over a linear diffuse map, and the complete light mapping code (deluxe/normal/specular…) should compute in linear space with linear data.
Historically, all diffuse maps are in sRGB space because that is what you get when you edit a file in a painting tool. Any workflow based on tools like GIMP, Krita, MyPaint, Photoshop, Paint, etc. produces an sRGB image, because what is painted is observed by the artist using a human eye that has a non-linear response to light.
This article by John Novak is very useful to read on this topic; if you feel you are missing something when reading my text, please re-read it: https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/
For the same reason, the light values written in game level light entities are in sRGB space, as the NetRadiant color picker is observed by a human eye. By convention, all other light values are expected to be in sRGB space as well, like the values in light-emitting materials, so that they are the same as the ones you would select with the NetRadiant color picker for a light entity.
The sRGB curve is an approximation of that non-linear human eye response to light, so we can convert back to linear (for computation) and forth (for display).
So, to physically compute blended lights for a game, the lightmap computation at game level compilation time must convert diffuse maps and light values to linear space. Historically, q3map2 was wrongly computing lights with diffuse maps and light values in sRGB space. This is still the default because of backward compatibility needs. However, q3map2 now provides the -sRGBtex and -sRGBcolor options to tell the light computation to convert the diffuse map and light color values to linear space before the actual computation. This fixes half of the problem. The produced light map is always assumed to be in linear space.
Then, to physically blend lights in a game, the light map blending at game render time should blend linear diffuse maps with linear light maps. While light maps are always considered to be in linear space, the diffuse maps are not.
Historically, the Quake3 game engine mistakenly blended linear lightmaps with sRGB diffuse maps. This produced non-physical light rendering. Because that in-engine computation is wrong, games like Wolf:ET used an alternate, also broken, computation in q3map2 that makes the problem less noticeable when fed to the broken engine computation: in this case, broken one way plus broken the other way is less broken. The learnopengl.com website explains that trick in the “Attenuation” section. But all of this is wrong; the Wolf:ET trick was just attempting to hide wrongness with wrongness. I'm not sure the article is totally right about the topic, but that part explains well what the Wolf:ET hack did: https://learnopengl.com/Advanced-Lighting/Gamma-Correction
What this implementation does?
If the -sRGBtex and -sRGBcolor q3map2 options were used at game level compilation time, the engine assumes the game level author knows what he is doing and actually wants a physically correct light computation, not only at game level compilation time but also at game level render time. The engine then converts all diffuse maps to linear space before blending lights on them, but does not convert the computed result back to sRGB after that, because the engine may blend other stages, and we also want to do stage blending in linear space to have physically correct blends (like translucent textures acting like they would in the real world). Then, at the very end, when everything is computed and blended and ready to be displayed on the screen, the whole screen is converted to sRGB so the human eye can see it.
The banding observed when running this code is assumed to come from the fact that the last conversion step obviously can't reconstruct the missing colors when converting back to sRGB.
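A minimal GLSL sketch of that final step, assuming a fullscreen pass that reads the linear render (the uniform and varying names are hypothetical, not the actual code of this branch):

uniform sampler2D u_LinearRender; // hypothetical: the blended linear scene
varying vec2 v_TexCoord;          // hypothetical: fullscreen quad texcoord

// Encode a linear color to sRGB for display (piecewise curve).
vec3 linearToSRGB( vec3 c )
{
	vec3 low = c * 12.92;
	vec3 high = 1.055 * pow( c, vec3( 1.0 / 2.4 ) ) - 0.055;
	return mix( low, high, step( vec3( 0.0031308 ), c ) );
}

void main()
{
	vec3 linearColor = texture2D( u_LinearRender, v_TexCoord ).rgb;
	gl_FragColor = vec4( linearToSRGB( linearColor ), 1.0 );
}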
About sRGB lightmaps
You may ask: why would people want to do that? I just said both diffuse maps and light maps should be in linear space when blending them to have a physically correct computation!
Yes, but the sRGB colorspace introduces a bias in the storage that gives more precision to the color values the human eye is sensitive to. This means that by storing a lightmap in an 8-bit image in sRGB color space, we can provide more precision in the dark areas, where the human eye is good at spotting differences, at the expense of bright areas, where differences are less noticeable by the human eye. If you discover this by reading these lines, please re-read the article by John Novak, especially the “Light emission vs perceptual brightness” and “Physical vs perceptual linearity” sections: https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/#light-emission-vs-perceptual-brightness
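As a back-of-the-envelope illustration of that bias (my own numbers, not taken from the article): the smallest non-zero 8-bit sRGB code, 1/255 ≈ 0.0039, decodes to a linear value of about 0.0039 / 12.92 ≈ 0.0003, while the smallest non-zero code of an 8-bit linear texture is 0.0039 itself. So near black, an 8-bit sRGB lightmap can represent steps roughly 13 times finer than an 8-bit linear lightmap, at the cost of coarser steps near white.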
Xonotic uses this trick: they ship light maps in sRGB space and convert them back to linear space at game render time before blending them with diffuse maps, getting more precision in the low values while completely avoiding the use of high-precision lightmaps, still using 8-bit per channel lightmap pixels and not 16-bit or other formats (they even ship light maps in JPEG format!). This also means such light maps with more precision in dark areas can still be stored in the light map lump of the BSP itself, since the 8-bit per channel precision is kept. For Unvanquished we always recommend using external lightmaps, but one can truly recognize the cleverness of the trick: biasing the storage to get more precision in some range while keeping compatibility with the storage format.
The ability to get better precision in darker areas while keeping the ability to store our lightmaps in 8-bit per channel formats would be good for us, as we usually use the lossless 8-bit WebP format to get good light map compression.
The q3map2 tool provides the -sRGBlight option to convert light maps from linear space to sRGB space when writing them into the BSP lump or into external TGA images (we can then convert them to 8-bit PNG, WebP, etc.).
What this implementation does?
If the -sRGBlight q3map2 option was used at game level compilation time, the engine converts the lightmap back to linear space before blending it on the diffuse map and doing other operations like normal and specular mapping.
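A minimal GLSL sketch of that behaviour (my own sketch with hypothetical names, not the actual code of this branch): the lightmap texel is linearized only when the map was compiled with -sRGBlight, then modulates the already-linear diffuse color.

uniform sampler2D u_LightMap;     // hypothetical lightmap sampler
uniform bool u_LinearizeLightMap; // hypothetical: true when -sRGBlight was used

// Decode an sRGB color to linear (piecewise curve).
vec3 sRGBToLinear( vec3 c )
{
	return mix( c / 12.92, pow( ( c + 0.055 ) / 1.055, vec3( 2.4 ) ), step( vec3( 0.04045 ), c ) );
}

vec3 blendLightMap( vec3 linearDiffuse, vec2 lightMapCoord )
{
	vec3 light = texture2D( u_LightMap, lightMapCoord ).rgb;

	if ( u_LinearizeLightMap )
	{
		light = sRGBToLinear( light );
	}

	// Both terms are now linear, so the modulation is physically meaningful.
	return linearDiffuse * light;
}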
About the -sRGB q3map2 option and others
The q3map2 map compiler provides a -sRGB option that sets -sRGBtex, -sRGBcolor and -sRGBlight at once. It is the all-in-one option used by Xonotic to compile their maps with all those options set.
It is still good to have the ability to compute light maps in linear space but also store them in linear space. For example, if one day our engine implements HDR lightmaps, we would not need to use the sRGB lightmap trick: we would only want to tell q3map2 to do physically correct computations with -sRGBtex and -sRGBcolor, but store linear HDR lightmaps, thus without using the -sRGBlight option.
That's why I implemented a separate detection of “what computation the game level author used in q3map2 and then expects from the renderer” and “what lightmap storage the game level author used”.
It may also be possible to do a legacy Quake3-like non-physical light computation in q3map2 but still store the lightmap in sRGB space, to bias the storage and get more precision in the darker areas.
How to fix that ugly color banding?
Because we heavily rely on material stage blending (terrain blending, light styles, special effects, etc.), we cannot fix such color banding by converting computed stages to sRGB at the end of the stage rendering without having non-physical stage blends. Maybe we can fix that color banding by using a higher-precision framebuffer, if that's possible.
One may investigate a native sRGB OpenGL implementation, but I doubt this will fix our issue, because it looks like our problem comes from the fact that we lose precision between computing stages.
It's OK to have less good looking game levels in low presets (especially the lowest one), so we may have:
If one day we implement this using native OpenGL features, this would also be the fallback if the required features are missing.
Also some people may accept a less physically correct rendering if that's faster, so I guess the engine may also do that:
If we do that, we would add a cvar to enable or disable the higher-precision framebuffers.
About the legacy mapOverBrightBits trick
Tweaking the lights with mapOverBrightBits makes the lights non-linear, so when the engine detects that the game level was compiled with a physically correct light computation, this implementation also assumes the game level author doesn't want to use any mapOverBrightBits. This makes everything easier, as we would not require the game level author to add a specific worldspawn entity key to disable mapOverBrightBits when using a physically correct game level: he would just use the physically correct map compilation option in the build menu, and the engine would guess that he also wants to disable mapOverBrightBits. This allows our engine to still enable mapOverBrightBits on legacy game levels and disable it on newer game levels without requiring any mapOverBrightBits worldspawn entity key, neither on legacy maps (which is better for out-of-the-box legacy support) nor on newer game levels (which is better to make the game level editor's task simpler, and prevents mistakes).