feat: Support YUV CVPixelBuffers (MakeImageFromPlatformBuffer(..)) #2357
Conversation
…ple PixelFormats (YUV). Currently it always returns RGB; in the future it might also support YUV.
@wcandillon I have done benchmarks to compare RGB buffers against YUV buffers.
As expected, YUV is more efficient than RGB in this case, since we effectively delay the YUV -> RGB conversion to the latest point possible: in the graphics API (Metal), when finally flushing and rendering. Before, the YUV -> RGB conversion was done in a way that also made the buffer accessible to the CPU (by AVFoundation/Camera), so it was a bit slower (RGB is a less efficient format than YUV, and even carries an extra alpha channel) and used more memory.
Btw., I am playing around a bit with Android right now, and I think we can also accelerate this whole pipeline by using YUV. This is how we'd create a YUV `EGLImageKHR` from a dma-buf:

```c
EGLint attrs[] = {
  EGL_WIDTH, width,
  EGL_HEIGHT, height,
  EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_YUV420, // needs to be an appropriate DRM format
  // Plane 0 (Y)
  EGL_DMA_BUF_PLANE0_FD_EXT, fd0,
  EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset0,
  EGL_DMA_BUF_PLANE0_PITCH_EXT, pitch0,
  // Plane 1 (U)
  EGL_DMA_BUF_PLANE1_FD_EXT, fd1,
  EGL_DMA_BUF_PLANE1_OFFSET_EXT, offset1,
  EGL_DMA_BUF_PLANE1_PITCH_EXT, pitch1,
  // Plane 2 (V)
  EGL_DMA_BUF_PLANE2_FD_EXT, fd2,
  EGL_DMA_BUF_PLANE2_OFFSET_EXT, offset2,
  EGL_DMA_BUF_PLANE2_PITCH_EXT, pitch2,
  EGL_NONE
};
// ...
// For EGL_LINUX_DMA_BUF_EXT the buffer is described entirely by the
// attribute list, so the EGLClientBuffer argument must be NULL:
EGLImageKHR image = eglCreateImageKHR(eglDisplay, EGL_NO_CONTEXT,
                                      EGL_LINUX_DMA_BUF_EXT, NULL, attrs);

// A multi-plane image like this is normally sampled through a single
// GL_TEXTURE_EXTERNAL_OES texture (the driver does the YUV -> RGB conversion).
// Alternatively, create one single-plane EGLImage per plane (e.g. with
// DRM_FORMAT_R8) and one GL texture per plane (Y + U + V):
GLuint textures[3]; // one for each plane
glGenTextures(3, textures);
for (int i = 0; i < 3; i++) {
  glBindTexture(GL_TEXTURE_2D, textures[i]);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  // planeImages[i]: the single-plane EGLImage for plane i (creation elided)
  glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, planeImages[i]);
}
```
But for now I think we don't need that. We can maybe add this in the future if the performance boost is needed :)
@wcandillon just updated this with main :)
Okay, just tested: no memory leaks 🎉 and textures are held strongly until they are actually released by Skia. So we're good here now! PR is ready @wcandillon
very cool 😎
🎉 This PR is included in version 1.2.2 🎉 The release is available on: Your semantic-release bot 📦🚀
Adds support for YUV-based CVPixelBuffers.

YUV CVPixelBuffers are often multi-plane (e.g. one Y plane and one UV plane; 8-bit 4:2:0 is the most common layout). Apple suggests avoiding RGB and using YUV for camera feeds where possible, since RGB requires significantly more conversion overhead (see TN3121: Selecting a pixel format for an AVCaptureVideoDataOutput).

This just extends the `CVPixelBufferUtils` class and adds one extra switch branch to the `makeTextureFromCVPixelBuffer` function. I have confirmed that this works in VisionCamera when streaming `kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange` (the most common camera frame format) pixel buffers 👍

You can test this by changing the pixel format from 32BGRA to 420YpCbCr in your video example and comparing the resource usage. I'm not sure whether the difference is as big for videos, but for cameras it definitely is, because YUV is the camera's native format:
Screen.Recording.2024-04-17.at.10.41.50.mov