
Support for AVSampleBufferDisplayLayer format (iOS) #8910

Open
skrew opened this issue Jun 11, 2021 · 16 comments

@skrew

skrew commented Jun 11, 2021

In #7857, @tmm1 said he was working on adding AVSampleBufferDisplayLayer support on iOS via the --wid option.

I would like to replace my current GLES implementation with AVSampleBufferDisplayLayer, since OpenGL has been deprecated by Apple for a long time now.

It would be nice to get the right buffer format from libmpv so that frames can be queued as CMSampleBuffers.

What do you think?

@alexiscn

You can create a CVPixelBufferRef from OpenGL and render it with AVSampleBufferDisplayLayer.

Creating a CVPixelBufferRef from OpenGL:

class RenderView: GLKView {
    var frameBuffer: SRFrameBuffer?
    override func draw(_ rect: CGRect) {
        // ... render into frameBuffer ...
        if let pixelBuffer = frameBuffer?.target {
            glFinish()
            renderDelegate?.renderView(self, dispatchCVPixelBuffer: pixelBuffer)
        }
    }
}

extension AVSampleBufferDisplayLayer {
    
    func process(_ pixelBuffer: CVPixelBuffer) {
        autoreleasepool {
            var timingInfo: CMSampleTimingInfo = .invalid
            var sampleBufferOut: CMSampleBuffer?
            
            var formatDescription: CMFormatDescription?
            CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                         imageBuffer: pixelBuffer,
                                                         formatDescriptionOut: &formatDescription)
            
            if let formatDescription = formatDescription {
                CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                           imageBuffer: pixelBuffer,
                                                           dataReady: true,
                                                           makeDataReadyCallback: nil,
                                                           refcon: nil,
                                                           formatDescription: formatDescription,
                                                           sampleTiming: &timingInfo,
                                                           sampleBufferOut: &sampleBufferOut)

                if let sampleBufferOut = sampleBufferOut {
                    let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBufferOut, createIfNecessary: true)
                    let attachment = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
                    CFDictionarySetValue(attachment,
                                         Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                                         Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
                    
                    self.enqueue(sampleBufferOut)
                    if self.status == .failed {
                        self.flush()
                        print("AVSampleBufferDisplayLayer enqueue status failed")
                        self.enqueue(sampleBufferOut)
                    }
                    if let error = self.error {
                        print("AVSampleBufferDisplayLayer enqueue error:\(error)")
                    }
                }
            }
        }
    }
}
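
For context, a minimal usage sketch, assuming a delegate that implements the renderView(_:dispatchCVPixelBuffer:) callback used above and an AVSampleBufferDisplayLayer instance named sampleBufferDisplayLayer (both assumptions, not from the snippet above):

func renderView(_ view: RenderView, dispatchCVPixelBuffer pixelBuffer: CVPixelBuffer) {
    // Hand each rendered CVPixelBuffer to the display layer's process(_:) helper.
    sampleBufferDisplayLayer.process(pixelBuffer)
}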

@skrew
Author

skrew commented Sep 27, 2022

Thanks for the tip, but I can't get it to work.

The pixelBuffer is always empty; I think I have a problem binding the framebuffer and rendering via mpv_opengl_fbo / mpv_render_context_render 😔

@alexiscn

The fbo should use the frameBuffer's fbo:

if let frameBuffer = frameBuffer {
    fboId = GLint(frameBuffer.frameBuffer)
    dimension = frameBuffer.size
    flip.pointee = 0
}

@skrew
Author

skrew commented Sep 28, 2022

Yep, I'm using the fbo, but I'm missing something else (is my test OK?)

Also, mpv_render_context_render needs an OpaquePointer.

override func draw(_ rect: CGRect) {
        EAGLContext.setCurrent(self.context)
        var flip: CInt = 1
        
        if let frameBuffer = frameBuffer {
            glGetIntegerv(GLenum(GL_DRAW_FRAMEBUFFER_BINDING), &frameBuffer.frameBuffer)

            withUnsafeMutablePointer(to: &flip) { flip in
                if HandlerManager.shared.activePlayer().mpvRenderContext != nil {
                    fbo = GLint(frameBuffer.frameBuffer)
                    
                    var data = mpv_opengl_fbo(fbo: Int32(fbo),
                                              w: Int32(self.drawableWidth),
                                              h: Int32(self.drawableHeight),
                                              internal_format: 0)
                    withUnsafeMutablePointer(to: &data) { data in
                        var params: [mpv_render_param] = [
                            mpv_render_param(type: MPV_RENDER_PARAM_OPENGL_FBO, data: .init(data)),
                            mpv_render_param(type: MPV_RENDER_PARAM_FLIP_Y, data: .init(flip)),
                            mpv_render_param()
                        ]
                        mpv_render_context_render(HandlerManager.shared.activePlayer().mpvRenderContext, &params)
                    }
                } else {
                    glClearColor(0, 0, 0, 1)
                }
            }
        }
        
        glFlush()
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), 0)
        
        // Test if we have something in target ?
        if let pixelBuffer = frameBuffer?.target {
            glFinish()
            //renderDelegate?.renderView(self, dispatchCVPixelBuffer: pixelBuffer)
            
            var cgImage: CGImage?
            VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
            let image1 = UIImage.init(cgImage: cgImage!)
            
            let image2 = UIImage(ciImage: CIImage(cvPixelBuffer: pixelBuffer))
            
            let image3 = pixelBuffer.cgImage()?.image()

            print("image empty ?", image1, image2, image3)
        }
    }

@tmm1
Contributor

tmm1 commented Oct 24, 2022

In #7857, @tmm1 said he was working on adding AVSampleBufferDisplayLayer support on iOS via the --wid option.

I dug this up and pushed it here: tmm1@d438818

@skrew
Author

skrew commented Oct 25, 2022

Hi @tmm1, thanks a lot!
I have built and tested it; unfortunately I just get a black screen.

Watching the logs, I see this:

[   0.541][e][autoconvert] can't find video conversion for yuv420p
[   0.541][f][vf] Cannot convert decoder/filter output to any format supported by the output.
[   0.541][v][vf] filter output EOF
[   0.541][t][cplayer] video_output_image: r=0/eof=1/st=syncing
[   0.541][f][cplayer] Could not initialize video chain.
[   0.541][d][vd] Uninit decoder.

Looks like the avfoundation VO is uninitialized right after opening the file.

Do you have any sample code or tips on how to use it?

@tmm1
Contributor

tmm1 commented Oct 25, 2022

It can only render CVPixelBuffers, so I think you need hwdec=videotoolbox (with hwdec-codecs=h264 if that's what you're decoding). And perhaps also gpu-hwdec-interop=videotoolbox.
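
A minimal sketch of setting those options through the libmpv client API, assuming an mpv: OpaquePointer handle created with mpv_create() and a libmpv bridging header (the function name is illustrative):

func enableVideotoolbox(_ mpv: OpaquePointer) {
    // Decode to CVPixelBuffers with VideoToolbox.
    mpv_set_option_string(mpv, "hwdec", "videotoolbox")
    // Restrict hardware decoding to h264, if that's what you're playing.
    mpv_set_option_string(mpv, "hwdec-codecs", "h264")
    // And, as suggested above, the matching hwdec interop.
    mpv_set_option_string(mpv, "gpu-hwdec-interop", "videotoolbox")
}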

@skrew
Author

skrew commented Oct 25, 2022

Tested with an h264 video, still failing:

[   0.783][e][autoconvert] can't find video conversion for yuv420p
[   0.783][f][vf] Cannot convert decoder/filter output to any format supported by the output.
[   0.783][v][vf] filter output EOF
[   0.783][t][cplayer] video_output_image: r=0/eof=1/st=syncing
[   0.783][f][cplayer] Could not initialize video chain.
[   0.783][d][vd] Uninit decoder.

@tmm1
Contributor

tmm1 commented Oct 25, 2022

Yeah, it won't work if the pixel format is yuv420p. You need the pixel format to be videotoolbox_vld. I can't tell why it's not decoding with videotoolbox; there should be more info in the preceding [vd] lines.

Alternatively, you can convert yuv420p into videotoolbox_vld using lavfi hwupload. I added support for that in https://www.mail-archive.com/ffmpeg-devel@ffmpeg.org/msg121506.html
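
A hedged sketch of that conversion route from the libmpv client API, assuming the same mpv: OpaquePointer handle as above (this matches the option later comments in this thread end up setting):

// Upload software yuv420p frames to videotoolbox_vld via FFmpeg's hwupload filter.
mpv_set_option_string(mpv, "vf", "lavfi=[hwupload]")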

@skrew
Author

skrew commented Oct 26, 2022

Yes, it works with the filter. Thanks!

@byMohamedali

byMohamedali commented Feb 1, 2023

Hi, I did everything you said @tmm1; it's working, but the video is laggy and the log doesn't show any errors.
I set this for the filtering: chkErr(mpv_set_option_string(mpv, "vf", "hwupload"))

Did you have this issue @skrew?

Is there something I'm missing? Thank you.

Screen.Recording.2023-02-01.at.7.55.15.PM.mov

@byMohamedali

Here are the logs:

mpv log: [cplayer] debug: Setting option 'hwdec' = "videotoolbox" (flags = 0) -> 0
mpv log: [cplayer] debug: Setting option 'gpu-hwdec-interop' = "videotoolbox" (flags = 0) -> 0
mpv log: [cplayer] debug: Setting option 'vf' = "lavfi=[hwupload]" (flags = 0) -> 0
mpv log: [cplayer] debug: Setting option 'hwdec-codecs' = "all" (flags = 0) -> 0
mpv log: [cplayer] debug: Setting option 'wid' = 105553137269504 (flags = 0) -> 0
mpv log: [cplayer] debug: Run command: loadfile, flags=64, args=[url="/private/var/folders/xd/3bggzm3n25l43mj4wdqx7bsh0000gn/X/70E90570-4B68-53EA-A50C-EFD7B4493739/d/Wrapper/TracyPlayerWithoutKS.app/big_buck_bunny.mp4", flags="replace", options=""]
mpv log: [global] debug: config path: 'watch_later' -> '-'
mpv log: [file] v: Opening /private/var/folders/xd/3bggzm3n25l43mj4wdqx7bsh0000gn/X/70E90570-4B68-53EA-A50C-EFD7B4493739/d/Wrapper/TracyPlayerWithoutKS.app/big_buck_bunny.mp4
mpv log: [file] debug: resize stream to 131072 bytes, drop 0 bytes
mpv log: [file] debug: Stream opened successfully.
mpv log: [demux] v: Trying demuxers for level=normal.
mpv log: [demux] debug: Trying demuxer: disc (force-level: normal)
mpv log: [demux] debug: Trying demuxer: edl (force-level: normal)
mpv log: [demux] debug: Trying demuxer: cue (force-level: normal)
mpv log: [demux] debug: Trying demuxer: rawaudio (force-level: normal)
mpv log: [demux] debug: Trying demuxer: rawvideo (force-level: normal)
mpv log: [demux] debug: Trying demuxer: mkv (force-level: normal)
mpv log: [demux] debug: Trying demuxer: lavf (force-level: normal)
mpv log: [lavf] v: Found 'mov,mp4,m4a,3gp,3g2,mj2' at score=100 size=2048.
mpv log: [demux] v: Detected file format: mov,mp4,m4a,3gp,3g2,mj2 (libavformat)
mpv log: [cplayer] v: Opening done: /private/var/folders/xd/3bggzm3n25l43mj4wdqx7bsh0000gn/X/70E90570-4B68-53EA-A50C-EFD7B4493739/d/Wrapper/TracyPlayerWithoutKS.app/big_buck_bunny.mp4
mpv log: [find_files] v: Loading external files in /private/var/folders/xd/3bggzm3n25l43mj4wdqx7bsh0000gn/X/70E90570-4B68-53EA-A50C-EFD7B4493739/d/Wrapper/TracyPlayerWithoutKS.app/
mpv log: [lavf] v: select track 0
mpv log: [lavf] v: select track 1
mpv log: [cplayer] info:  (+) Video --vid=1 (*) (h264 640x360 23.962fps)
mpv log: [cplayer] info:  (+) Audio --aid=1 --alang=eng (*) (aac 2ch 22050Hz)
mpv log: [vd] v: Container reported FPS: 23.962060
mpv log: [vd] v: Codec list:
mpv log: [vd] v:     h264 - H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
mpv log: [vd] v: Opening decoder h264
mpv log: [vd] v: Looking at hwdec h264-videotoolbox...
mpv log: [vd] v: Could not create device.
mpv log: [vd] v: No hardware decoding available for this codec.
mpv log: [vd] v: Using software decoding.
mpv log: [vd] v: Detected 10 logical cores.
mpv log: [vd] v: Requesting 11 threads for decoding.
mpv log: [vd] v: Selected codec: h264 (H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10)
mpv log: [user_filter_wrapper] v: Setting option 'graph' = 'hwupload' (flags = 0)
mpv log: [vf] v: User filter list:
mpv log: [vf] v:   lavfi (lavfi.00)
mpv log: [ad] v: Codec list:
mpv log: [ad] v:     aac - AAC (Advanced Audio Coding)
mpv log: [ad] v:     aac_fixed (aac) - AAC (Advanced Audio Coding)
mpv log: [ad] v:     aac_at (aac) - aac (AudioToolbox)
mpv log: [ad] v: Opening decoder aac
mpv log: [ad] v: Requesting 1 threads for decoding.
mpv log: [ad] v: Selected codec: aac (AAC (Advanced Audio Coding))
mpv log: [af] v: User filter list:
mpv log: [af] v:   (empty)
mpv log: [cplayer] v: Starting playback...
mpv log: [af] v: [in] 22050Hz stereo 2ch floatp
mpv log: [af] v: [userspeed] 22050Hz stereo 2ch floatp
mpv log: [af] v: [userspeed] (disabled)
mpv log: [af] v: [convert] 22050Hz stereo 2ch floatp
mpv log: [ao] v: Trying audio driver 'audiounit'
mpv log: [ao/audiounit] v: requested format: 22050 Hz, stereo channels, floatp
mpv log: [ao/audiounit] v: max channels: 2, requested: 2
mpv log: [ao/audiounit] v: AU channel layout tag: 0 (0)
mpv log: [ao/audiounit] v: channel map: 0: 1
mpv log: [ao/audiounit] v: channel map: 1: 2
mpv log: [ao/audiounit] v: using stereo output
mpv log: [ao/audiounit] v: using soft-buffer of 4410 samples.
mpv log: [cplayer] info: AO: [audiounit] 22050Hz stereo 2ch floatp
mpv log: [cplayer] v: AO: Description: AudioUnit (iOS)
mpv log: [af] v: [convert] (disabled)
mpv log: [af] v: [out] 22050Hz stereo 2ch floatp
mpv log: [ffmpeg/video] debug: h264: Reinit context to 640x368, pix_fmt: yuv420p
mpv log: [vd] debug: DR parameter change to 640x386 yuv420p align=64
mpv log: [vd] debug: Allocating new (host-cached) DR image...
mpv log: [vd] debug: ...failed..
mpv log: [vd] v: DR failed - disabling.
mpv log: [vd] v: Using software decoding.
mpv log: [vd] v: Decoder format: 640x360 [0:1] yuv420p bt.601/bt.601-525/bt.1886/limited/auto CL=uhd
mpv log: [vf] v: [in] 640x360 yuv420p bt.601/bt.601-525/bt.1886/limited/display SP=1.000000 CL=uhd
mpv log: [vf] v: [userdeint] 640x360 yuv420p bt.601/bt.601-525/bt.1886/limited/display SP=1.000000 CL=uhd
mpv log: [vf] v: [userdeint] (disabled)
mpv log: [vf] v: [lavfi] 640x360 yuv420p bt.601/bt.601-525/bt.1886/limited/display SP=1.000000 CL=uhd
mpv log: [ffmpeg] debug: mpv_src_in0: w:640 h:360 pixfmt:yuv420p tb:1/1000000 fr:288000/12019 sar:1/1
mpv log: [lavfi] v: Configuring hwdec_interop=avfoundation for filter graph: hwupload
mpv log: [lavfi] debug: Filter graph:
mpv log: [lavfi] debug:                                                    +-------------------+
mpv log: [lavfi] debug: mpv_src_in0:default--[640x360 1:1 yuv420p]--default| Parsed_hwupload_0 |default--[640x360 1:1 videotoolbox_vld]--mpv_sink_out0:default
mpv log: [lavfi] debug:                                                    |    (hwupload)     |
mpv log: [lavfi] debug:                                                    +-------------------+
mpv log: [lavfi] debug: 
mpv log: [lavfi] debug:                                                                   +---------------+
mpv log: [lavfi] debug: Parsed_hwupload_0:default--[640x360 1:1 videotoolbox_vld]--default| mpv_sink_out0 |
mpv log: [lavfi] debug:                                                                   | (buffersink)  |
mpv log: [lavfi] debug:                                                                   +---------------+
mpv log: [lavfi] debug: 
mpv log: [lavfi] debug: +-------------+
mpv log: [lavfi] debug: | mpv_src_in0 |default--[640x360 1:1 yuv420p]--Parsed_hwupload_0:default
mpv log: [lavfi] debug: |  (buffer)   |
mpv log: [lavfi] debug: +-------------+
mpv log: [lavfi] debug: 
mpv log: [lavfi] debug: 
mpv log: [vf] v: [autorotate] 640x360 videotoolbox[yuv420p] bt.601/bt.601-525/bt.1886/limited/display SP=1.000000 CL=uhd
mpv log: [vf] v: [autorotate] (disabled)
mpv log: [vf] v: [convert] 640x360 videotoolbox[yuv420p] bt.601/bt.601-525/bt.1886/limited/display SP=1.000000 CL=uhd
mpv log: [vf] v: [convert] (disabled)
mpv log: [vf] v: [out] 640x360 videotoolbox[yuv420p] bt.601/bt.601-525/bt.1886/limited/display SP=1.000000 CL=uhd
mpv log: [cplayer] info: VO: [avfoundation] 640x360 videotoolbox[yuv420p]
mpv log: [cplayer] v: VO: Description: AVFoundation AVSampleBufferDisplayLayer (macOS/iOS)
mpv log: [vo/avfoundation] v: reconfig to 640x360 videotoolbox[yuv420p] bt.601/bt.601-525/bt.1886/limited/display SP=1.000000 CL=uhd
mpv log: [cplayer] v: first video frame after restart shown
mpv log: [cplayer] v: audio ready
mpv log: [cplayer] debug: starting video playback
mpv log: [cplayer] v: starting audio playback
mpv log: [cplayer] v: playback restart complete @ 0.000000, audio=playing, video=playing

@alexiscn

alexiscn commented Mar 28, 2023

I am also encountering the laggy video issue when using vo=avfoundation (powered by AVSampleBufferDisplayLayer) to play 4K HDR video.

I guess the cause of the lag is that sample buffers are displayed immediately: each frame is shown as soon as it is decoded, so frames may not be synced with their actual presentation time.

CFDictionarySetValue(
       (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0),
       kCMSampleAttachmentKey_DisplayImmediately,
       kCFBooleanTrue
);

After I commented out kCMSampleAttachmentKey_DisplayImmediately and controlled the sample buffer presentation time using AVSampleBufferDisplayLayer's controlTimebase, the lag eased, but it is still not as smooth as CAMetalLayer (also provided by @tmm1, which has an HDR color issue I have not figured out yet) or vo=libmpv using OpenGL ES.

// renderview.swift
// Drive presentation timestamps from a host-clock timebase instead of
// displaying immediately.
var timebase: CMTimebase?
let status = CMTimebaseCreateWithSourceClock(allocator: kCFAllocatorDefault, sourceClock: CMClockGetHostTimeClock(), timebaseOut: &timebase)
displayLayer.controlTimebase = timebase
if let controlTimebase = displayLayer.controlTimebase, status == noErr {
    CMTimebaseSetTime(controlTimebase, time: .zero)
    CMTimebaseSetRate(controlTimebase, rate: 1)
}
// vo_avfoundation.m
CMTimebaseRef timebase = [p->displayLayer controlTimebase];
CMTime nowTime = CMTimebaseGetTime(timebase);
CVPixelBufferRef pixbuf = (CVPixelBufferRef)img->planes[3];
CMSampleTimingInfo info = {
        .presentationTimeStamp = nowTime,
        .duration = kCMTimeInvalid,
        .decodeTimeStamp = kCMTimeInvalid
};
...

// CFDictionarySetValue(
//     (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0),
//     kCMSampleAttachmentKey_DisplayImmediately,
//     kCFBooleanTrue
// );

@alexiscn

alexiscn commented Apr 7, 2023

Update:

After I moved the enqueueSampleBuffer work into the draw_frame method, the lag went away.

static void flip_page(struct vo *vo)
{
    struct priv *p = vo->priv;
    struct mp_image *img = p->next_image;

    if (!img)
        return;

    mp_image_unrefp(&p->next_image);
}

static void draw_frame(struct vo *vo, struct vo_frame *frame)
{
    struct priv *p = vo->priv;

    mp_image_t *mpi = NULL;
    if (!frame->redraw && !frame->repeat)
        mpi = mp_image_new_ref(frame->current);

    talloc_free(p->next_image);

    if (!mpi)
        return;

    CVPixelBufferRef pixbuf = (CVPixelBufferRef)mpi->planes[3];
    CMSampleTimingInfo info = {
        .presentationTimeStamp = kCMTimeInvalid,
        .duration = kCMTimeInvalid,
        .decodeTimeStamp = kCMTimeInvalid
    };

    CMSampleBufferRef buf = NULL;
    CMFormatDescriptionRef format = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixbuf, &format);
    CMSampleBufferCreateReadyWithImageBuffer(
        NULL,
        pixbuf,
        format,
        &info,
        &buf
    );
    CFRelease(format);

    CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(buf, YES);
    CFDictionarySetValue(
        (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0),
        kCMSampleAttachmentKey_DisplayImmediately,
        kCFBooleanTrue
    );

    [p->displayLayer enqueueSampleBuffer:buf];

    CFRelease(buf);

    p->next_image = mpi;
    p->next_vo_pts = frame->pts;
}

PS: I do not know if this is the right place to enqueue the sample buffer.

@alexiscn

alexiscn commented Apr 8, 2023

Hi @byMohamedali, do you have a subtitle display issue with this VO? External subtitles are not showing.

@byMohamedali

Yes, I have the same issue when using avfoundation. I'm investigating how we could solve that; please let me know if you find a solution :)
