
Extract a frame from video #5076

Closed · kanthicn1 opened this issue Nov 8, 2018 · 3 comments

@kanthicn1

Hello,

I am trying to do some frame-level manipulation of DASH/HLS video. I have already done this offline on entire video files, extracting the frames with ffmpeg. Now I want to do it in real time (at the video's frame rate) with ExoPlayer. I have searched extensively for how to accomplish this, and all options seem to point towards creating a custom renderer with OpenGL and applying an OpenGL shader/filter to each frame.
For my application, however, I would like to get the raw frame, manipulate it beyond what shaders might allow, and re-insert the modified frame for display. Is this even possible with ExoPlayer? Can we override the processOutputBuffer function in MediaCodecVideoRenderer to achieve this? Would this be performance suicide?

I would appreciate any help, as I am trying to wrap my head around the Android rendering pipeline. Thanks.

andrewlewis self-assigned this Nov 8, 2018
@andrewlewis (Collaborator)

Is the idea to play video in real time, processing each frame on the CPU (rather than on the GPU with GLES)? This isn't a use case we currently support in ExoPlayer, so it's not really in the scope of this issue tracker.

I expect copying and processing the decoded frames is going to be slow and inefficient. You might be able to get away with it if the video is low resolution/frame rate and the device is fast! With MediaCodec you can use ByteBuffer mode (search the MediaCodec documentation) to get the decoded output on the CPU. It's likely to be a fair amount of work to switch MediaCodecVideoRenderer to use that mode. Alternatively, if the video happens to be in the VP9 format, you may find that using LibvpxVideoRenderer in the vp9 extension is a quick way to get up and running, as it already has a mode that outputs raw frames, so in principle you can just add whatever processing you want there.
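
Roughly, draining decoded frames in ByteBuffer mode with the plain MediaCodec API (not ExoPlayer's MediaCodecVideoRenderer) might look something like the sketch below. It assumes the decoder was configured without an output Surface, and `FrameProcessor` is just a hypothetical placeholder for whatever CPU-side processing you'd plug in:

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

public final class ByteBufferDrainSketch {

  /** Hypothetical consumer of raw (typically YUV) frame data. */
  interface FrameProcessor {
    void process(ByteBuffer frameData, MediaFormat format, long presentationTimeUs);
  }

  /**
   * Drains one decoded frame from a decoder that was configured WITHOUT an output
   * Surface, so output arrives in ByteBuffer mode and is accessible on the CPU.
   */
  static void drainOneFrame(MediaCodec decoder, FrameProcessor processor) {
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int index = decoder.dequeueOutputBuffer(info, /* timeoutUs= */ 10_000);
    if (index >= 0) {
      // Raw frame bytes and the corresponding output format for this buffer.
      ByteBuffer frame = decoder.getOutputBuffer(index);
      MediaFormat format = decoder.getOutputFormat(index);
      processor.process(frame, format, info.presentationTimeUs);
      // render=false: nothing is rendered to a Surface by the codec itself.
      decoder.releaseOutputBuffer(index, /* render= */ false);
    }
    // Negative return values (format change, try-again) are ignored in this sketch.
  }
}
```

The expensive parts are the buffer copy and the color-space handling (decoders generally emit YUV, so converting to RGB for manipulation and back adds further cost), which is why this tends to only be viable for low resolutions or fast devices.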

@kanthicn1 (Author)

Thanks a lot for your reply @andrewlewis. If I understand correctly, you are saying my two options are (assuming I am not using VP9):
(a) Use MediaCodec's APIs to get the raw video frames, manipulate them, and queue them back; but this will be CPU intensive and may only work for low-resolution videos or very high-end hardware.
(b) Use GLES to render my frames and somehow figure out how to express the processing I want in terms of the operations GLES allows.
Is (b) the preferred way to do these things? Are there any projects that do Snapchat-like filters on live video, and would you know whether they use (a) or (b)?

I am sorry if my question is a little all over the place and not well formed.

@andrewlewis (Collaborator)

Yeah, I think option (b) is preferable if it works for the kind of processing you want to do. There are certainly apps that use this approach (decoding to an off-screen SurfaceTexture and applying a GLES pixel shader to the decoder output).
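
As a rough sketch of that approach (assuming a GLES 2.0 context set up elsewhere): the decoder renders into a SurfaceTexture backed by an external texture, and a fragment shader does the per-pixel work. The grayscale shader below is just a stand-in for whatever processing you actually want, and the texture id is assumed to have been created and bound as GL_TEXTURE_EXTERNAL_OES beforehand:

```java
import android.graphics.SurfaceTexture;
import android.view.Surface;

public final class ShaderFilterSketch {

  // Fragment shader applied to the decoder output. Sampling the decoder's output
  // requires the GL_OES_EGL_image_external extension; this example converts each
  // frame to grayscale as a placeholder for real per-pixel processing.
  static final String FRAGMENT_SHADER =
      "#extension GL_OES_EGL_image_external : require\n"
          + "precision mediump float;\n"
          + "varying vec2 vTexCoord;\n"
          + "uniform samplerExternalOES uTexture;\n"
          + "void main() {\n"
          + "  vec4 color = texture2D(uTexture, vTexCoord);\n"
          + "  float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n"
          + "  gl_FragColor = vec4(vec3(gray), color.a);\n"
          + "}\n";

  /**
   * Wraps an OpenGL texture name (already created with glGenTextures and bound as
   * GL_TEXTURE_EXTERNAL_OES) in a Surface that can be handed to the video renderer
   * as its output. Each decoded frame triggers onFrameAvailable; calling
   * updateTexImage() on the GL thread then makes the frame available to the shader.
   */
  static Surface createDecoderOutputSurface(
      int externalTextureId, SurfaceTexture.OnFrameAvailableListener listener) {
    SurfaceTexture surfaceTexture = new SurfaceTexture(externalTextureId);
    surfaceTexture.setOnFrameAvailableListener(listener);
    return new Surface(surfaceTexture);
  }
}
```

The filtered result is then drawn to the on-screen surface (or to another off-screen target if you need further passes), so the frame never has to leave the GPU.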

google locked and limited conversation to collaborators May 16, 2019