
Add NV12 image format support for flatbuffers! #920

Merged
merged 6 commits into master from flatbuffers_nv12_support on Aug 13, 2024

Conversation

awawa-dev
Owner

@awawa-dev awawa-dev commented Aug 6, 2024

Until recently, the HyperHDR flatbuffers server only supported the raw RGB format, which had many limitations.
But thanks to @s1mptom for the inspiration, this PR adds the NV12 image format to the HyperHDR flatbuffers server (more discussion here: #916). For now you can use it with a rather exotic webOS capturing application that is not supported here, but it is intended to work with any client, and in the future it will be easy to extend to other formats such as MJPEG.

Advantages:

  • NV12 YUV conversion has been moved entirely into HyperHDR, which uses a LUT for this purpose (zero calculations when converting YUV to RGB). Using a selected LUT, it can also perform the YUV to RGB conversion and tone mapping in a single step. This reduces the client load: besides lowering processor usage, the client does not have to allocate additional resources such as memory, and you don't need external YUV libraries either.
  • Less load on the connection bandwidth: an NV12 frame is about 50% of the size of the RGB frame.
  • Allows full use of automatic LUT calibration with YUV coefficient detection.
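The LUT approach in the first point reduces each per-pixel YUV to RGB conversion (with tone mapping optionally baked into the table entries) to a single lookup. A minimal sketch of the idea, assuming a flat 256×256×256 table with 3-byte entries; the names and layout here are illustrative, not HyperHDR's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: each (Y, U, V) triplet indexes a precomputed
// 256^3 x 3-byte table, so conversion (and, with a calibrated LUT,
// tone mapping too) costs one memory read per pixel -- no arithmetic.
struct Rgb { uint8_t r, g, b; };

inline Rgb lookupYuv(const std::vector<uint8_t>& lut,
                     uint8_t y, uint8_t u, uint8_t v)
{
    // Flatten (y, u, v) into an index of the 256*256*256*3 byte table.
    const std::size_t idx =
        ((std::size_t(y) << 16) | (std::size_t(u) << 8) | v) * 3;
    return { lut[idx], lut[idx + 1], lut[idx + 2] };
}
```

Because the table already encodes the full transform, swapping YUV coefficients or tone-mapping curves means swapping table contents, not client-side code.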

Important:

  • When the NV12 transport mode is used, HyperHDR needs the standard full LUT table (150 MB). The sometimes-used small one (50 MB) doesn't contain the necessary YUV conversion tables and will work only for RGB tone mapping.

My thoughts after receiving the sample frame (thanks again @s1mptom): webOS uses NV12 internally, and as a consequence of that format there is only one color for every 4 brightness values. So even if the webOS capturer converts the frame to 320x240 RGB before sending it to HyperHDR, as it does now, in reality we have only 160x120 colors and 320x240 pure brightness values blended together to produce 320x240 RGB. This can matter for dense LED strips, where you will need a higher resolution for a better effect.
But that's how the internal webOS capturing works, so it's not a fault of the format: they chose it and we can't change it. However, we can now abandon the unnecessary conversion from NV12 to RGB (which does not improve quality at all and brings other problems) and send the original NV12 stream via flatbuffers.
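The size and subsampling arguments above can be made concrete. A small sketch (function names are mine, not HyperHDR's) of NV12 versus RGB frame sizes, assuming even dimensions:

```cpp
#include <cstddef>

// NV12 layout: a full-resolution Y (luma) plane followed by an interleaved
// UV (chroma) plane at half resolution in both axes. One U/V pair is shared
// by each 2x2 pixel block, which is why a 320x240 NV12 frame really carries
// only 160x120 distinct colors.
constexpr std::size_t nv12FrameSize(std::size_t w, std::size_t h)
{
    return w * h                      // Y plane: 1 byte per pixel
         + (w / 2) * (h / 2) * 2;     // UV plane: 2 bytes per 2x2 block
}

constexpr std::size_t rgbFrameSize(std::size_t w, std::size_t h)
{
    return w * h * 3;                 // 3 bytes per pixel
}
```

For 320x240 this gives 115,200 bytes of NV12 versus 230,400 bytes of RGB — exactly the ~50% bandwidth saving mentioned in the advantages list, while the UV plane holds only 160x120 chroma samples.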

Some results (webOS, thanks s1mptom):

2024-08-13T17:33:54.531Z [CALIBRATOR] (LutCalibrator.cpp:936) Mean error for FCC is: 9520.889295
2024-08-13T17:33:54.531Z [CALIBRATOR] (LutCalibrator.cpp:936) Mean error for REC.709 is: 7063.289823
2024-08-13T17:33:54.531Z [CALIBRATOR] (LutCalibrator.cpp:936) Mean error for REC.601 is: 9462.383205

YUV coefficients are switched during LUT calibration thanks to the native NV12 format. This was impossible with the RGB flatbuffers stream.


@awawa-dev awawa-dev merged commit 3168734 into master Aug 13, 2024
15 checks passed
@satgit62

satgit62 commented Aug 14, 2024

@awawa-dev
Hello,
first, I would like to thank you for the continuous development of HyperHDR and HyperSerial/HyperSerialPico.
Also, a thanks to the user @s1mptom for already managing to investigate this on webOS.

Since you have included the new changes (“Add NV12 image format support for flatbuffers!”) in the master branch, I am asking how to handle this now. Do I now have to provide the entire LUT table (150 MB) for the flatbuffer, or can I simply perform a LUT calibration? (Although this is not necessarily as easy under webOS as on a Windows PC or Raspberry Pi.)
We have two LUTs on webOS: “flat_lut_lin_tables.3d”, about 50 MB, and “lut_lin_tables.3d”, about 150 MB.
I've already compiled the master branch for webOS.
It is working fine on my LG TVs so far, but I'm wondering if I need to change anything in the menus or if NV12 will be applied automatically.
What should I look out for? I would like to test everything and then give you my feedback.
Thank you in advance.

@awawa-dev
Owner Author

awawa-dev commented Aug 14, 2024

Hi @satgit62

Do I now have to allocate the entire LUT table (150 MB) to the flatbuffer, or can you simply perform a LUT calibration?

To be able to receive the NV12 stream, the full 150 MB LUT file is needed. However, you don't have to worry about resources, because only one 50 MB part is loaded into memory at a time, just as with the smaller LUT you are using now: 150 MB = three tables of 50 MB each, with only one in use at a time. One is for RGB tone mapping (the classic RGB flatbuffers stream), and the other two are for YUV and for YUV with tone mapping (for NV12).
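The arithmetic behind that split works out neatly. A back-of-the-envelope check, assuming one 3-byte entry per possible 8-bit triplet (the exact on-disk layout is my assumption, not taken from the HyperHDR sources):

```cpp
#include <cstddef>

// One table: an entry for every 8-bit (R,G,B) or (Y,U,V) triplet,
// 3 bytes each -> 256^3 * 3 = 50,331,648 bytes, i.e. the "~50 MB" part.
constexpr std::size_t kTableBytes = std::size_t(256) * 256 * 256 * 3;

// The full file ships three such tables (RGB tone mapping, YUV->RGB,
// YUV->RGB with tone mapping), giving the "~150 MB" total. Only one
// table needs to be resident in memory at a time.
constexpr std::size_t kFullLutBytes = 3 * kTableBytes;  // 150,994,944 bytes

static_assert(kTableBytes == 50331648, "~50 MB per table");
static_assert(kFullLutBytes == 150994944, "~150 MB for the full file");
```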

Do I now have to allocate the entire LUT table (150 MB) to the flatbuffer, or can you simply perform a LUT calibration? (although this is not necessarily as easy under webOS as on the Windows PC or Raspberry Pi).

Yes, it's best to perform a calibration. We are working with @s1mptom to provide an alternative method for the calibration process: instead of a Windows PC and a browser, you simply play a calibration test movie on the source player, so it should be much simpler. There is still a lot of work needed, and it is theoretically possible to calibrate Dolby Vision using such a file. However, I have problems encoding such a DV file (from slideshow boards) using ffmpeg, which has only recently supported this format, so there is very little information on how to do it; we could use some help here. It's currently a work in progress: #896

This is working fine on my LG TVs so far, but I'm wondering if I need to change anything in the menus or if the NV12 will be applied automatically.

You need a special webOS grabber that provides a raw NV12 stream. @s1mptom managed to implement and test one in his fork.

@s1mptom

s1mptom commented Aug 14, 2024

@satgit62 At the moment, my implementation for hyperion-webos is quite simple and more of a “temporary” solution for debugging the overall mechanism. That’s why I haven’t shared it on Discord yet. I’ve implemented a “direct” image transfer in my fork, bypassing the converter for the libvtcapture backend for versions 5+ (since I personally have an LG G3). I’m not entirely satisfied with the auto-calibration results so far, but I’m still testing and checking things out =) @awawa-dev has been actively helping me with this.
You can check out my changes in this branch: https://github.com/s1mptom/hyperion-webos/tree/debug_videostream

@awawa-dev awawa-dev deleted the flatbuffers_nv12_support branch November 3, 2024 17:40