
Standardize tone mapping applied when rendering HDR content to an SDR canvas #113

@ccameron-chromium opened this issue Nov 1, 2023 · 9 comments

Suppose there exists HDR content, either as an ITU-R BT.2100 video or an ISO 22028-5 image. It has:

  • Rec2020 primaries
  • HLG or PQ transfer function
  • Optional metadata specifying
    • Content color volume
    • Content light level info
    • Mastering display color volume
    • Nominal diffuse white level

Suppose that this image is required to be converted to an SDR target, e.g., because it is being drawn (using drawImage) to a 2D SDR canvas.

In this process, a tone mapping from HDR to SDR must be performed. This issue is to standardize that tone mapping.

Note: This is a subset of the HTML issue Standardize rendering of PQ and HLG HDR image content. That issue attempts to render to a space defined by an HDR headroom. This issue is just to an SDR space.

Scope

Before considering the tone mapping to be performed, we should first establish a more limited scope for the problem.

  • The target of the tone mapping must be an SDR space with Rec2020 primaries and a linear transfer function, where 0,0,0 is 'black' and 1,1,1 is 'white' (see the sketch after this list)
    • This converts to pixels that will be handled by standard (usually ICC-based) SDR color management
    • This does not convert to nits to be displayed on an SDR display
  • The source of the tone mapping is luminance on the ITU-R BT.2100 reference display
    • Both HLG and PQ are converted to this display and then the same tone mapping is applied to both
  • Gamut mapping to a narrower gamut is explicitly not to be addressed
    • The browser is to handle gamut mapping the resulting Rec2020 image as it would handle any other Rec2020 SDR image
    • Some algorithms might have a gamut mapping-like step to get a result in Rec2020 RGB space
  • The tone mapping algorithm is to be a global tone map (not a local tone map)
  • The tone mapping algorithm is to be parameterized only by the HDR metadata
    • It is independent of the monitor it is to be viewed on and of the viewing environment
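
To make the scoped pipeline concrete, below is a minimal TypeScript sketch. The PQ EOTF constants and the 203 cd/m² nominal reference white are from ITU-R BT.2100 and BT.2408; the tone curve itself (a simple extended-Reinhard soft-clip) and the per-component application are only illustrative placeholders, since choosing the actual curve is exactly what this issue is meant to standardize.

```ts
// Sketch only: PQ signal -> luminance on the BT.2100 reference display
// -> global tone map -> linear Rec2020 SDR, where 1.0 is SDR reference white.
// The tone curve is a placeholder soft-clip, NOT a proposed standard, and it
// is applied per component here for brevity; real candidates operate on
// luminance or max(R, G, B) rather than each channel independently.

const PQ_M1 = 2610 / 16384;
const PQ_M2 = (2523 / 4096) * 128;
const PQ_C1 = 3424 / 4096;
const PQ_C2 = (2413 / 4096) * 32;
const PQ_C3 = (2392 / 4096) * 32;

// PQ EOTF (BT.2100): non-linear signal in [0,1] -> absolute luminance in cd/m².
function pqEotf(signal: number): number {
  const p = Math.pow(signal, 1 / PQ_M2);
  const num = Math.max(p - PQ_C1, 0);
  const den = PQ_C2 - PQ_C3 * p;
  return 10000 * Math.pow(num / den, 1 / PQ_M1);
}

// Placeholder global tone curve (extended Reinhard): maps [0, maxRel] to
// [0, 1] and stays close to the identity near zero. Any real candidate
// (BT.2408 Annex 5, BT.2390 EETF, ...) would replace this function.
function toneCurve(rel: number, maxRel: number): number {
  return (rel * (1 + rel / (maxRel * maxRel))) / (1 + rel);
}

// One Rec2020 component of a PQ-coded pixel, tone mapped to linear SDR.
// 203 cd/m² is the nominal reference white of BT.2408; maxNits would come
// from metadata (e.g. MaxCLL) when present, and is an assumption otherwise.
function pqToLinearSdr(signal: number, maxNits = 1000): number {
  const nits = pqEotf(signal);                // display light, cd/m²
  const rel = nits / 203;                     // 1.0 == SDR reference white
  const maxRel = maxNits / 203;
  return Math.min(toneCurve(rel, maxRel), 1); // clamp if content exceeds maxNits
}
```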

Test images

Some test images are available here:

Candidate Algorithms

Several of these candidates have been implemented here.

@simontWork commented Nov 2, 2023

As well as natural images, can we please include some generated images with a range of colours and sweeps, to ensure that candidate solutions can be evaluated in terms of colour accuracy, discontinuities, etc.

Examples could include the TruVision LUT stress test image, Macbeth colours, colour bars, etc. It's also important that any solution correctly displays memory colours such as the entire range of skin tones, sky, grass, etc.

@Myndex commented Nov 2, 2023

OpenEXR has a collection of HDR test images that may be useful for this context. There is an assortment of patterns as well as natural images, and images intended to stress test or break things.

https://openexr.com/en/latest/_test_images/index.html

@jonsneyers

Why are only global tone mapping algorithms considered? Local tone mapping methods can often provide better results, and don't necessarily have to be infeasible in terms of computational resources. At least for still images. For video, I assume local tone mapping is problematic not just due to computational cost but also due to the issue of consistency between frames.

Just like for color gamut mapping, I don't think there's a way to standardize a single tone mapping method that will give satisfactory results in all cases. For gamut mapping, there's the concept of rendering intent (which can be signaled e.g. in an ICC profile), which offers some amount of artistic control over the way the gamut mapping is done. Of the different options for rendering intent, some are defined normatively, while others only have a general description and no fully normative definition.

For tone mapping, I think it could make sense to follow a similar approach. That is, a "tone mapping intent" could be signaled (as part of the image/video color metadata), which could be a field offering some options that are normatively defined and some other options that are vendor-specific / only informatively defined. Some of these options could also have one or two parameters, e.g. to adjust the shape of a global curve or to adjust a trade-off between preserving saturation vs preserving luminance.

Such a "tone mapping intent" field would be useful not just for rendering HDR content on an SDR canvas, but also for the rendering of HDR content on HDR displays with less headroom than needed.

For example, this field could have three possible values:

  1. Global tone mapping according to ITU-R BT.2408 Annex 5 (YRGB), followed by a gamut mapping step, with a parameter to adjust the balance between preserving saturation and preserving luminance (e.g. as implemented in the tone_map tool at https://github.com/libjxl/libjxl/tree/main/tools/hdr).
  2. Local tone mapping according to some fully specified algorithm, possibly with some parameters to adjust some parameters of the algorithm. This would have to be an algorithm that can be implemented with reasonably low computational cost.
  3. Vendor-specific "perceptual" tone mapping that is not normatively defined, can be implemented as one of the above or as anything else that aims at "pleasing and aesthetically similar" color reproduction (to borrow the phrasing from the perceptual rendering intent in ICC profiles).

This field could be signaled somehow (a new tag in ICC profiles, for example). It would be left to the image author to decide which tone mapping intent to use. Options 1 and 2 would have the benefit of being fully specified, so the result will be consistent across implementations and can be simulated during authoring. The parameters of the global/local tone mapping would offer some artistic control — not enough to do fully custom tone mapping, but likely enough to get satisfactory results in nearly all cases. Option 3 would not be fully specified, but could be a good choice for "don't care" cases where the author only really cares about the HDR content and not so much about how it gets rendered in SDR. While the results would not be consistent across implementations and not predictable, it would allow implementers to use potentially 'better' tone mapping methods (e.g. dynamic tone mapping that also takes things like changes in background illumination into account, or fancy proprietary local tone mapping methods).

There should also be a default value to be used in case this field is not signaled. The default could e.g. be option 1 (global tone mapping) with the balance between saturation-preservation and luminance-preservation set to some halfway point.
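
As a strawman of what such signaling could look like, here is a minimal TypeScript shape for a hypothetical tone-mapping-intent field. The names, values, and parameters are illustrative only; nothing like this is currently defined in ICC profiles or elsewhere.

```ts
// Hypothetical "tone mapping intent" metadata mirroring the three values
// described above. None of these names exist in any current specification.
type ToneMappingIntent =
  | {
      kind: "global";                      // fully specified global curve
      // 0 = preserve luminance, 1 = preserve saturation (hypothetical knob)
      saturationLuminanceBalance: number;
    }
  | {
      kind: "local";                       // fully specified local algorithm
      strength?: number;                   // optional tuning parameter
    }
  | { kind: "perceptual" };                // vendor-specific, informative only

// Default when nothing is signaled, per the suggestion above: global tone
// mapping with the saturation/luminance trade-off at a halfway point.
const defaultIntent: ToneMappingIntent = {
  kind: "global",
  saturationLuminanceBalance: 0.5,
};
```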

@simontWork commented Jul 23, 2024

> Why are only global tone mapping algorithms considered? Local tone mapping methods can often provide better results, and don't necessarily have to be infeasible in terms of computational resources. At least for still images. For video, I assume local tone mapping is problematic not just due to computational cost but also due to the issue of consistency between frames.

To get the best tone mapping for video, you may want a static mapping in some parts of the video and dynamic in others. For example, take a sports video with live video in the background and a score overlay in the foreground. You don't want the changing video background to noticeably change the level of the graphic, nor do you want the level to jump at a shot change. For more, see: https://tech.ebu.ch/files/live/sites/tech/files/shared/techreports/tr078.pdf

As an example of tone mapping, I've created a demo using the HLG OETF as the tone mapper. I've also shown how different HDR formats can be tone-mapped via HLG: https://bbc.github.io/w3c-tone-mapping-demo/ (Please note that I haven't added the metadata to either the image or the HTML to correctly identify the format of each image)
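
For reference, the HLG OETF used as the tone curve in that demo is a simple closed-form function of normalized scene light (ITU-R BT.2100); a minimal TypeScript transcription is below. The surrounding conversions (e.g. from PQ display light back to normalized scene light per BT.2408) are omitted here.

```ts
// HLG OETF (BT.2100): normalized scene light E in [0,1] -> signal in [0,1].
// Used in the demo above as a global compression curve.
const HLG_A = 0.17883277;
const HLG_B = 1 - 4 * HLG_A;
const HLG_C = 0.5 - HLG_A * Math.log(4 * HLG_A);

function hlgOetf(e: number): number {
  return e <= 1 / 12
    ? Math.sqrt(3 * e)
    : HLG_A * Math.log(12 * e - HLG_B) + HLG_C;
}
```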

@Myndex commented Jul 23, 2024

“Ideally” some form(s) of metadata would be implemented to facilitate tone mapping/gamut mapping from one space to another.

Example: as part of the deliverables to the DI for color grading, we often provide not only the shot, but also one or more alpha channels to facilitate grading independent areas of the image. In the example @simontWork mentions, we’d have an alpha for the graphic, used to select or deselect it.

And when we consider the problems inherent in keeping text readable (and non-fatiguing), a means to provide an alpha, used only for tone mapping purposes and applied to graphics & text, would be ideal.

Extending this idea further, an alpha to indicate bright pixels to be clipped vs bright pixels to be soft-clipped and rolled into the midrange.

For the purposes here, this could be done in a single alpha channel, using specific levels to designate the purpose/intent of the underlying pixels.

In other words, the alpha would provide hints to a mapping algorithm as to how a given pixel or area should be mapped.

As such, we shouldn’t call it the alpha channel but instead perhaps the “hinting channel” — formats such as EXR already have provisions for arbitrary channels. In this case, we’d need to be able to label/repurpose the alpha in the delivery file to facilitate this.

If it seems complicated, it might be useful to mention that a lot of care goes into color grading here in Hollywood, with separate passes being done for various release formats. That’s the best way to gamut map/tone map—under the control of a colorist.
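
Purely as an illustration of the idea (none of these hint values or names exist in any format today), such a hinting channel could reserve specific levels for specific mapping intents, e.g.:

```ts
// Hypothetical encoding of the "hinting channel" idea above: specific ranges
// of a per-pixel auxiliary channel designate how the underlying pixel should
// be treated by the tone mapper. Thresholds and names are made up.
type PixelHint = "graphics-or-text" | "clip" | "soft-clip" | "default";

function decodeHint(hintValue: number): PixelHint {
  if (hintValue >= 0.75) return "graphics-or-text"; // keep levels stable and readable
  if (hintValue >= 0.5) return "clip";              // hard-clip bright pixels
  if (hintValue >= 0.25) return "soft-clip";        // roll highlights into the midrange
  return "default";                                 // normal tone mapping
}
```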

@simontWork

Metadata production in live production is very difficult: the video stream may pass through dozens of hardware devices, e.g. video routers, vision mixers, audio embedders, contribution and distribution links, re-clocking amplifiers - any of which could strip out or alter metadata. Quite often, to combat this, generic metadata is added at the last point in the chain, or systems are designed not to need metadata.

I'm also slightly concerned that the current target seems to be well-specced devices such as computers and phones; the tone mapping will also be used in much lower-powered renderers, e.g. television graphics planes.

@Myndex commented Jul 25, 2024

Hi @simontWork

Yes, carrying metadata through the post-process is challenging, or even untenable, but that was not what I was suggesting per se.

I was thinking aloud regarding a potential delivery format that would allow content to be better adjusted on the fly for a given space, i.e. a single stream, intended as a delivery stream, with a meta channel that would provide hints for mapping to various display spaces.

@jonsneyers

A "hinting channel" would be quite similar to an inverse gain map (as what is being done in ISO/CD 21496-1), i.e. you encode an HDR image (at the max nits / headroom) and then you also embed explicit local tone mapping data which can be used to produce a custom SDR image but also (via logspace interpolation) intermediate renderings can be produced for less-capable HDR displays that don't have enough headroom.

This is useful, but in practice, I think in the bulk of the use cases there will be no detailed manually created local tone mapping data available, and tone mapping will be done algorithmically (using something as simple as a global soft-clipping curve or as advanced as a fancy local tone mapping algorithm).

For interoperability in general, and in particular for the web platform, I think the first priority is to avoid inconsistent behavior between various implementations (browsers / operating systems / ...). An image should look the same in all browsers and on all operating systems.

There can be room for various degrees of artistic control over what happens, but the most important thing is that whatever happens, all user agents do it the same way.

I would propose four degrees of artistic control:

  1. No tone mapping info whatsoever is signaled, e.g. there's just an image tagged to be in Rec.2100 PQ space but no other info is signaled. In this case, there should be some default tone mapping applied, preferably something that usually gives satisfactory results, but most importantly it should be something that is consistent across implementations.
  2. Some HDR metadata is available regarding content color volume / content light levels. This provides some additional parameters to guide whatever the default tone mapping of case 1 is doing.
  3. Tone mapping intent metadata is signaled (with some yet-to-be-defined new tag that can go in ICC profiles? or by repurposing the existing rendering intent field to have implications for not just gamut mapping but also tone mapping?), specifying some curve/algorithm/parameters that describe exactly what to do. It may also be an option to have some underspecified options here that allow implementation-defined behavior, but there should be at least a few fully specified options, in particular one of the options should be what happens in cases 1 and 2.
  4. An explicit local tone mapping is provided, by means of a gain map (in either direction). This is the option with the biggest signaling cost and also the one that offers the most control (basically as much as serving different images for SDR and HDR based on media queries).

I think the thing that would help the most in the short term is just having 1. That already would allow a web dev to preview how things will look on displays with various capabilities, so they can decide if it's good enough or not (and if not, there's always the option of using media queries to send different variants of the image).

@mdrejhon commented Sep 4, 2024

> That already would allow a web dev to preview how things will look on displays with various capabilities, so they can decide if it's good enough or not

Hello! I'm a web dev using the early HDR CANVAS, the founder of Blur Busters, and the creator of TestUFO 2.0 HDR, now public at https://beta.testufo.com (fully public as of September 1st, 2024; it will be moved to the production site this fall, so it will be viewed by millions soon).

I want to volunteer a little dev time to create more custom TestUFO tests for standards testing.


Eventually, I would like to have better discovery of what tonemapping is going on -- because I need to know what pixels are being sent by the GPU to the actual display itself.

As I work on the HDR version of TestUFO (brand new HDR test: https://beta.testufo.com/hdr) I have been wanting more direct access to whatever tonemapping is applied. Some tests are exact-color sensitive.

I am an expert on display motion behaviours, but newer to HDR/WCG, and would be happy to create a few custom HDR-related tests for standards groups (contact me privately) that run through my existing HDR TestUFO engine -- that could help standardization needs. With TestUFO being the world's most popular motion testing and display testing website, it has a role in popularizing HDR. I'd like to help.

I want to mention that I like the idea of knowing the linear rec2100 luminance of pixels for testing, since I can use a photodiode + oscilloscope to test HDR luminances more properly if I know what pixels are being sent by the browser to the GPU directly (bypassing tone mapping, or discovering the tone mapping data needed to compute that). This would let me measure display performance, e.g. how a display clips/clamps nits relative to the specified luminances of the pixels (at reference settings). So that is hopefully a nice improvement that I hope to eventually have access to.


It will display the canvas type (SDR, WCG, HDR) and colorspace (display-p3, rec2100-pq, rec2100-hlg) in the bottom-right corner:

[screenshot: TestUFO canvas type and colorspace indicator]

You can play with motion quality of various HDR colors in various motion-blur tests like https://beta.testufo.com/blurtrail and https://beta.testufo.com/chase ... Or do some more interesting consumer-teaching tests or hobbyist tests like https://beta.testufo.com/ and local dimming test (with HDR to amplify local dimming artifacts) https://beta.testufo.com/localdimming .... There are over 40 tests selectable in the titlebar menu.

Some tests even let you pick specific HDR colors for testing. I can't use the HTML5 colorpicker, which does not support WCG/HDR yet... so I've created a rudimentary HDR colorpicker that uses RGBI sliders. (While CMYK is better for colorpicker art stuff, I use RGB because I want to control the display RGB pixels directly, as a matter of testing display behaviors.)

How can I help y'all test HDR better? I'm willing to create some additional simple HDR tests for web standards testing, before introducing them to millions of people. Remember, I'm a motion-first expert and still relatively new to WCG/HDR, but a major "influencer of the influencers" in display-motion testing (500 content creators use my tests, including LTT, RTINGS, TomsHardware, etc.); over 100M people see content or data that was assisted by one of my testing inventions...

Also, I need to submit bugs to the standards team, so I'm still trying to decide which GitHub issues to "invade" (apologies) -- for example, I get corruption when I try to switch between SDR and HDR canvas by repeating getContext in different color spaces. As a workaround, I resort to deleting and recreating the CANVAS element every time I switch colorspaces in the canvas via the new TestUFO settings gear icon. The standards documents do not specify what should happen when switching CANVAS colorspaces, which is a gap in the standards that creates interesting glitching issues I've had to work around.
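
For what it's worth, the delete-and-recreate workaround described above looks roughly like the following sketch. The "rec2100-hlg" / "rec2100-pq" values come from the experimental HDR canvas work rather than the shipped HTML spec, and the helper name here is made up.

```ts
// Workaround sketch: rather than calling getContext() again with a different
// colorSpace on the same element, replace the element entirely.
function recreateCanvas(
  oldCanvas: HTMLCanvasElement,
  colorSpace: string // "srgb", "display-p3", or experimental "rec2100-hlg"/"rec2100-pq"
): CanvasRenderingContext2D | null {
  const fresh = document.createElement("canvas");
  fresh.width = oldCanvas.width;
  fresh.height = oldCanvas.height;
  oldCanvas.replaceWith(fresh); // assumes the old canvas is still in the DOM
  // The rec2100-* values require the experimental HDR canvas work; the cast is
  // needed because the shipped DOM types only know "srgb" and "display-p3".
  return fresh.getContext("2d", { colorSpace: colorSpace as PredefinedColorSpace });
}
```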

Various animations in the TestUFO engine are used for demos/testing/teaching/science/comparisons/entertainment -- so the scope includes tests that are of interest to standards groups.

If you want my volunteer coding help for HDR tests within the TestUFO engine, contact me at standards[at]blurbusters[dot].com
