Frame accurate seeking of HTML5 MediaElement #4

Open
tidoust opened this issue Jun 11, 2018 · 90 comments
Labels
needs editor Topic could be summarized in an IG doc if someone is willing to lead the effort

Comments

@tidoust
Member

tidoust commented Jun 11, 2018

I've heard a couple of companies point out that one of the problems that makes it hard (at least harder than it could be) to do post-production of videos in Web browsers is that there is no easy way to process media elements on a frame-by-frame basis, whereas that is the usual default in Non-Linear Editors (NLEs).

The currentTime property takes a time, not a frame number or an SMPTE timecode. Converting between times and frame numbers is doable, but it requires knowing the framerate of the video, which is not exposed to Web applications (a generic NLE would thus not know about it). Plus, that framerate may actually vary over time.
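
For concreteness, the conversion in question looks something like this (a sketch assuming a known, constant framerate, since no API exposes the real one):

const fps = 25; // assumed framerate; not exposed by the media element

function timeToFrame(t) {
  // frame n covers the interval [n / fps, (n + 1) / fps)
  return Math.floor(t * fps);
}

function frameToTime(n) {
  // nominal start time of frame n; internal rounding may still land
  // on the end of the previous frame, as noted below
  return n / fps;
}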

Also, internal rounding of time values may mean that one seeks to the end of the previous frame instead of the beginning of a specific video frame.

Digging around, I've found a number of discussions and issues around the topic, most notably:

  1. A long thread from 2011 on Frame accuracy / SMPTE, which led to improvements in the precision of seeks in browser implementations:
    https://lists.w3.org/Archives/Public/public-whatwg-archive/2011Jan/0120.html
  2. A list of use cases from 2012 for seeking to specific frames. Not sure if these use cases remain relevant today:
    https://www.w3.org/Bugs/Public/show_bug.cgi?id=22678
  3. A question from 2013 on whether there was interest to expose "versions of currentTime, fastSeek(), duration, and the TimeRanges accessors, in frames, for video data":
    https://www.w3.org/Bugs/Public/show_bug.cgi?id=8278#c3
  4. A proposal from 2016 to add a rational time value for seek() to solve rounding issues (still open as of June 2018):
    Media elements should support a rational time value for seek() whatwg/html#609

There have probably been other discussions around the topic.

I'm raising this issue to collect practical use cases and requirements for the feature, and gauge interest from media companies to see a solution emerge. It would be good to precisely identify what does not work today, what minimal updates to media elements could solve the issue, and what these updates would imply from an implementation perspective.

@palemieux
Contributor

There have probably been other discussions around the topic.

Yes. Similar discussions happened during the MSE project: https://www.w3.org/Bugs/Public/show_bug.cgi?id=19676

@chrisn
Member

chrisn commented Jun 12, 2018

There's some interesting research here, with a survey of current browser behaviour.

The current lack of frame accuracy effectively closes off entire fields of possibilities from the web, such as non-linear video editing, but it also has unfortunate effects on things as simple as subtitle rendering.

@jpiesing

I should also mention that there is some uncertainty about the precise meaning of currentTime - particularly when you have a media pipeline where the frame/sample coming out of the end may be 0.5s further along the media timeline than the ones entering the media pipeline. Some people think currentTime reflects what is coming out of the display/speakers/headphones. Some people think it should reflect the time where video and graphics are composited, as this is easy to test and suits apps trying to sync graphics to video or audio. Simple implementations may re-use whatever time value is available in a media decoder.

@Daiz

Daiz commented Jun 12, 2018

what minimal updates to media elements could solve the issue

Related to the matter of frame accuracy on the whole, one idea would be to add a new property to VideoElement called .currentFrameTime which would hold the presentation time value of the currently displayed frame. As mentioned in my research repository (also linked above), .currentTime is not actually sufficient right now in any browser for determining the currently displayed frame, even if you know the exact framerate of the video. .currentFrameTime could at least solve this particular issue, and could also be used for monitoring the exact screen refreshes on which displayed frames change.
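
To illustrate the idea (a hypothetical API - .currentFrameTime does not exist in any browser), monitoring frame changes could then look like:

const video = document.querySelector('video');
let lastFrameTime = -1;

function poll() {
  // hypothetical property: presentation time of the frame on screen
  if (video.currentFrameTime !== lastFrameTime) {
    lastFrameTime = video.currentFrameTime;
    // the displayed frame changed on this refresh; update overlays here
  }
  requestAnimationFrame(poll);
}
requestAnimationFrame(poll);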

@jpiesing

Related to the matter of frame accuracy on the whole, one idea would be to add a new property to VideoElement called .currentFrameTime which would hold the presentation time value of the currently displayed frame.

The currently displayed frame can be hard to determine, e.g. if the UA is running on a device without a display, with video being output over HDMI, or (perhaps) in a remote playback scenario ( https://w3c.github.io/remote-playback/ ).

@markafoltz

Remote playback cases are always going to be best-effort in keeping the video element in sync with the remote playback state. For video editing use cases, remote playback is not as relevant (except maybe to render the final output).

There are a number of implementation constraints that are going to make it challenging to provide a completely accurate instantaneous frame number or presentation timestamp in a modern browser during video playback.

  • The JS event loop will run in a different thread than the one painting pixels on the screen. There will be buffering and jitter in the intermediate thread hops.
  • The event loop often runs at a different frequency than the underlying video, so frames will span multiple loops.
  • Video is often decoded, painted, and composited asynchronously in hardware or software outside of the browser. There may not be frame-accurate feedback on the exact paint time of a frame.

Some estimates could be made based on knowing the latency of the downstream pipeline. It might be more useful to surface the last presentation timestamp submitted to the renderer and the estimated latency until frame paint.

It may also be more feasible to surface the final presentation timestamp/time code when a seek is completed. That seems more useful for a video editing use case.

Understanding the use cases here and what exactly you need to know would help guide concrete feedback from browsers.

@Daiz

Daiz commented Jun 12, 2018

One of the main use cases for me would be the ability to synchronize content changes outside the video to frame changes in the video. As a simple example, the test case in the frame-accurate-ish repo shows this with the background color change. In my case the main thing would be the ability to accurately synchronize custom subtitle rendering with frame changes. Being even one or two screen refreshes off becomes a noticeable issue when you want subtitles to appear/disappear with scene changes - even a frame or two of subtitles hanging on the screen after a scene change is very noticeable and ugly to look at during playback.

@markafoltz

It depends on the inputs to the custom subtitle rendering algorithm. How do you determine when to render a text cue?

@Daiz

Daiz commented Jun 13, 2018

Currently, I'm using video.currentTime and doing calculations based on the frame rate to try to have cues appear/disappear when the displayed frame changes (which is the behavior I want to achieve). As mentioned before, this is not sufficient for frame-accurate rendering even if you know the exact frame rate of the video. There are ways to improve the accuracy with some non-standard properties (like video.mozPaintedFrames in Firefox), but even then the results aren't perfect.
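
As a sketch of what I mean (assuming a known constant frame rate; the cue shape here is made up for illustration):

const fps = 23.976;

function updateCues(video, cues) {
  const frame = Math.floor(video.currentTime * fps);
  for (const cue of cues) {
    // cue = { startFrame, endFrame, el } - hypothetical cue structure
    const visible = frame >= cue.startFrame && frame < cue.endFrame;
    cue.el.style.visibility = visible ? 'visible' : 'hidden';
  }
  requestAnimationFrame(() => updateCues(video, cues));
}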

@jpiesing

It depends on the inputs to the custom subtitle rendering algorithm. How do you determine when to render a text cue?

Perhaps @palemieux could comment on how the imsc.js library handles this?

@jpiesing

One of the main use cases for me would be the ability to synchronize content changes outside the video to frame changes in the video. As a simple example, the test case in the frame-accurate-ish repo shows this with the background color change. In my case the main thing would be the ability to accurately synchronize custom subtitle rendering with frame changes. Being even one or two screen refreshes off becomes a noticeable issue when you want subtitles to appear/disappear with scene changes - even a frame or two of subtitles hanging on the screen after a scene change is very noticeable and ugly to look at during playback.

This highlights the importance of being clear what currentTime means as hardware-based implementations or devices outputting via HDMI may have several frames difference between the media time of the frame being output from the display and the frame being composited with graphics.

@ingararntzen

With the timingsrc [1] library we are able to sync content changes outside the video with errors <10ms (less than a frame).

The library achieves this by

  1. using an interpolated clock approximating currentTime (timingobject)
  2. synchronizing video (mediasync) relative to a timing object (errors about 7ms)
  3. synchronizing javascript cues (sequencer - based on setTimeout) relative to the same timing object (errors about 1ms)

This still leaves delays from DOM changes to on-screen rendering.

In any case, this should typically be sub-framerate sync.

This assumes that currentTime is a good representation of the reality of video presentation. If it isn't, but you know how wrong it is, you can easily compensate.

Not sure if this is relevant to the original issue, which I understood to be about accurate frame stepping - not sync during playback?

Ingar Arntzen

[1] https://webtiming.github.io/timingsrc/
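
A rough sketch of that setup (API names approximate - see the timingsrc docs for the real interface):

// 1) an interpolated clock (the timing object)
const to = new TIMINGSRC.TimingObject();
// 2) slave the video element to the timing object (~7ms error)
const sync = new TIMINGSRC.MediaSync(document.querySelector('video'), to);
// 3) sequence cue enter/exit events against the same timing object (~1ms error)
const seq = new TIMINGSRC.Sequencer(to);
seq.on('change', e => { /* show cue e.key */ });
seq.on('remove', e => { /* hide cue e.key */ });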

@nigelmegitt
Contributor

how the imsc.js library handles this

@jpiesing I can't speak for @palemieux obviously but my understanding is that imsc.js does not play back video and therefore does not do any alignment; it merely identifies the times at which the presentation should change.

However, it is integrated into the dash.js player, which does need to synchronise the subtitle presentation with the media. I believe it uses Text Track Cues, and from what I've seen they can be up to 250ms late depending on when the Time Marches On algorithm happens to be run, which can be as infrequent as every 250ms, and in my experience often is.

As @Daiz points out, that's not nearly accurate enough.

@palemieux
Contributor

What @nigelmegitt said :)

What is needed is a means of displaying/hiding HTML (or TTML) snippets at precise offsets on the media timeline.

@ingararntzen

What is needed is a means of displaying/hiding HTML (or TTML) snippets at precise offsets on the media timeline.

@palemieux this is exactly what I described above.

The sequencer of the timingsrc library does this. It may be used with any data, including HTML or TTML.

@chrisn
Member

chrisn commented Jun 13, 2018

Not sure if this is relevant to the original issue, which I understood to be about accurate frame stepping - not sync during playback?

@ingararntzen It is a different use case, but a good one nonetheless. Presumably, frame-accurate time reporting would help with synchronised media playback across multiple devices, particularly where different browser engines are involved, each with a different pipeline delay. But you say you're already achieving sub-frame-rate sync in your library, based on currentTime, so maybe not?

@nigelmegitt
Contributor

@ingararntzen forgive my lack of detailed knowledge, but the approach you describe does raise some questions at least in my mind:

  • does it change the event handling model so that it no longer uses Time Marches On?
  • What happens if the event handler for event n completes after event n+1 should begin execution?
  • Does the timing object synchronise against the video or does it cause the video to be synchronised with it? In other words, in the case of drift, what moves to get back into alignment?
  • How does the interpolating clock deal with non-linear movements along the media timeline in the video, such as pause, fast forward and rewind?

Just questions for my understanding, I'm not trying to be negative!

@Daiz

Daiz commented Jun 13, 2018

On the matter of "sub-framerate sync", I would like to point out that for the purposes of high quality media playback, this is not enough. Things like subtitle scene bleeds (where a cue remains visible after a scene change occurs in the video) are noticeable and ugly even if they remain on-screen for just an extra 15-30 milliseconds (i.e. less than a single 24FPS frame, which is ~42ms) after a scene change occurs. Again, you can clearly see this yourself with the background color change in this test case (which has various tricks applied to increase accuracy) - it is very clear when the sync is even slightly off. Desktop video playback software outside browsers does not have issues in this regard, and I would really like to be able to replicate that on the web as well.

@ingararntzen

ingararntzen commented Jun 13, 2018

@nigelmegitt These are excellent questions, thank you 👍

does it change the event handling model so that it no longer uses Time Marches On?

Yes. The sequencer is separate from the media element (which also means that you can use it for use cases where you don't have a media element). It takes direction from a timing object, which is basically just a thin wrapper around the system clock. The sequencer uses setTimeout() to schedule enter/exit events at the correct time.
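
To illustrate the idea (a sketch, not the library's actual code):

function scheduleEnter(cue, timingObject) {
  // query() stands in for the timing object's state accessor:
  // current media position (seconds) and velocity (playback rate)
  const { position, velocity } = timingObject.query();
  if (velocity <= 0) return; // paused or moving backwards: nothing to schedule
  const delayMs = ((cue.start - position) / velocity) * 1000;
  if (delayMs >= 0) setTimeout(() => console.log('enter', cue.id), delayMs);
}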

What happens if the event handler for event n completes after event n+1 should begin execution?

Being run in the js environment, sequencer timeouts may be subject to delay if there are many other activities going on (just like any app code). The sequencer guarantees the correct ordering, and will report how much it was delayed. If something like the sequencer were implemented natively by browsers, this situation could be improved further, I suppose. The sequencer itself is lightweight, and you may use multiple instances for different data sources and/or different timing objects.

Does the timing object synchronise against the video or does it cause the video to be synchronised with it? In other words, in the case of drift, what moves to get back into alignment?

Excellent question! The model does not mandate one or the other. You may 1) continuously update the timing object from the currentTime, or 2) you may continuously monitor and adjust currentTime to match the timing object (e.g. using variable playbackrate).

Method 1) is fine if you only have one media element, you are doing sync only within one webpage, and you are ok with letting the media element be the master of whatever else you want to synchronize. In other scenarios you'll need method 2), for at least (N-1) synchronized things. We use method 1) only occasionally.

The timingsrc has a mediasync function for method 2) and a reversesync function for method 1) (...I think)
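
For illustration, a naive version of method 2) might look like this (mediasync itself is considerably more careful):

function chase(video, timingObject) {
  setInterval(() => {
    const skew = timingObject.query().position - video.currentTime;
    if (Math.abs(skew) > 1) {
      video.currentTime += skew; // large error: hard seek
    } else {
      // small error: trim the playback rate to drift back into alignment
      video.playbackRate = 1 + Math.max(-0.25, Math.min(0.25, skew));
    }
  }, 100);
}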

How does the interpolating clock deal with non-linear movements along the media timeline in the video, such as pause, fast forward and rewind?

The short answer: using mediasync or reversesync you don't have to think about that, it's all taken care of.

Some more details:
The mediasync library creates an interpolated clock internally as an approximation of currentTime. It can distinguish the natural increments and jitter of currentTime from hard changes by listening to events (i.e. seeks, playbackrate changes, etc.).

@ingararntzen

ingararntzen commented Jun 13, 2018

@chrisn

Presumably, frame accurate time reporting would help with synchronised media playback across multiple devices, particularly where different browser engines are involved, each with a different pipeline delay. But, you say you're already achieving sub-frame rate sync in your library, based on currentTime, so maybe not?

So, while the results are pretty good, there is no way to ensure that they are always that good (or that they will stay this good), unless these issues are put on the agenda through standardization work.

There are a number of ways to improve/simplify sync.

  • as you say, exposing accurate information on downstream delays, frame count and media offset is always a good thing.
  • currentTime values are also not timestamped, which means that you don't really know when they were sampled internally.
  • The jitter of currentTime is terrible.
  • Good sync depends on an interpolated clock. I guess this would also make it easier to convert back and forth between media offset and frame numbers.
  • there are also improvements to seekTo and playbackrate which would improve things considerably

@nigelmegitt
Contributor

you don't have to think about that

@ingararntzen in this forum we certainly do want to think about the details of how the thing works so we can assure ourselves that eventual users genuinely do not have to think about them. Having been "bitten" by the impact of timeupdate and Time Marches On we need to get it right next time!

@nigelmegitt
Contributor

Having noted that a conformant implementation of Time Marches On can run too infrequently to meet subtitle and caption use cases, it does have a lot of other things going for it, like smooth handling of events that take too long to process.

In the spirit of making the smallest change possible to resolve it, here's an alternative proposal:

  • Change the minimum frequency to 50 times per second, instead of 4 times per second.

I would expect that to be enough to get frame accuracy at 25fps.
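
Pending any spec change, a script can approximate this effect by polling currentTime on a fast timer instead of relying on timeupdate (a sketch; 20ms is roughly 50 times per second):

function onFastTimeUpdate(video, callback) {
  const id = setInterval(() => {
    if (!video.paused) callback(video.currentTime);
  }, 20);
  return () => clearInterval(id); // call to stop polling
}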

@ingararntzen

@nigelmegitt - sure thing - I was more thinking of the end user here - not you guys :)

If you want me to go more into details that's ok too :)

@kevinmarks-b

Assuming that framerates are uniform is going to go astray at some point, as mp4 can contain media with different rates.
The underlying structure has Movie time and Media time - the former is usually an arbitrary fraction, the latter a ratio specifically designed to represent the timescale of the actual samples, so for US-originated video this will be 1001/30000.

Walking through the media rates and getting frame times is going to give you glitches with longer files.

If you want to construct an API like this I'd suggest mirroring what QuickTime did - it had two parts: the movie export API, which would give you callbacks for each frame rendered in sequence, telling you the media and movie times,
and the GetNextInterestingTime() API, which you could call iteratively and which would do the work of walking the movie, track edits and media to get you the next frame or keyframe.

Mozilla did make seekToNextFrame, but that was deprecated:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/seekToNextFrame

@markafoltz

@Daiz For your purposes, is it more important to have a frame counter, or an accurate currentTime?
What do you believe currentTime should represent?

@Daiz

Daiz commented Jun 14, 2018

@mfoltzgoogle That depends - what exactly do you mean by a frame counter? As in, a value that would tell me the absolute frame number of the currently displayed frame, like if I have a 40000-frame-long video with a constant frame rate of 23.976 FPS, and when currentTime is about 00:12:34.567 (754.567s), this hypothetical frame counter would have a value of 18091? This would most certainly be useful for me.

To reiterate, for me the most important use case for frame accuracy right now would be to accurately snap subtitle cue changes to frame changes. A frame counter like the one described above would definitely work for this. Though since I personally work on premium VOD content where I'm in full control of the content pipeline, an accurate currentTime (assuming that with a constant frame rate / full frame rate information I would be able to reliably calculate the currently displayed frame number) would also work. But I think the kind of frame counter described above would be a better fit as more general-purpose functionality.
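
For reference, the arithmetic behind that example (23.976 expressed as a rational to avoid drift):

const fps = 24000 / 1001;                 // "23.976" FPS as a rational
const frame = Math.floor(754.567 * fps);  // 18091.51... -> 18091
console.log(frame);                       // 18091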

@markafoltz

markafoltz commented Jun 14, 2018

We would need to consider skipped frames, buffering states, splicing MSE buffers, and variable FPS video to nail down the algorithm to advance the "frame counter", but let's go with that as a straw-man. Say, adding a .frameCounter read-only property to <video>.

When you observe the .frameCounter for a <video> element, say in requestAnimationFrame, which frame would that correspond to?

@palemieux
Contributor

@mfoltzgoogle Instead of a "frame counter", which is video-centric, I would consider adding a combination of timelineOffset and timelineRate, with timelineOffset being an integer and timelineRate a rational, i.e. two integers. The absolute offset (in seconds) is then given by timelineOffset divided by timelineRate. If timelineRate is set to the frame rate, then timelineOffset is equal to an offset in # of frames. This can be adapted to other kinds of essence that do not have "frames".
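
As a worked example of the proposal (hypothetical properties, using 29.97 fps US video):

const timelineRate = { num: 30000, den: 1001 }; // frames per second, as a rational
const timelineOffset = 18091;                   // offset in frames

// absolute offset in seconds = timelineOffset / timelineRate
const seconds = timelineOffset * timelineRate.den / timelineRate.num;
console.log(seconds); // ~603.64 - one division, no accumulated per-frame rounding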

@Daiz

Daiz commented Jun 15, 2018

When you observe the .frameCounter for a <video> element, say in requestAnimationFrame, which frame would that correspond to?

For frame accuracy purposes, it should obviously correspond to the currently displayed frame on the screen.

Also, I understand that there's a lot of additional complexity to this subject under various playback scenarios, and that it's probably not possible to guarantee frame accuracy in all of them. However, I don't think that should stop us from pursuing frame accuracy where it is possible. If I have just a normal browser window in full control of video playback, playing video on a normal screen attached to my computer, even having frame accuracy there alone would be a huge win in my books.

@nigelmegitt
Contributor

The underlying structure has Movie time and Media time - the former is usually an arbitrary fraction, the latter a ratio specifically designed to represent the timescale of the actual samples, so for US-originated video this will be 1001/30000.

@kevinmarks-b "media time" is also used elsewhere as a generic term for "the timeline related to the media", independently of the syntax used, i.e. it can be expressed as an arbitrary fraction or a number of frames etc, for example in TTML.

@Laurian

Laurian commented May 14, 2020

I've seen this issue with MP3 before, and it is always with the VBR ones; CBR worked fine (but most MP3s are VBR):

mpck adhere_demo_audio.mp3
SUMMARY: adhere_demo_audio.mp3
    version                       MPEG v2.0
    layer                         3
    average bitrate               59527 bps (VBR)
    samplerate                    22050 Hz
    frames                        2417
    time                          1:03.137
    unidentified                  0 b (0%)
    errors                        none
    result                        Ok

@nigelmegitt
Contributor

Thanks for the extra analysis @Laurian . I suspect you're right that MP3 is a particular offender, but we should not focus on one format specifically, but on the more general problem that for some media encodings it can be difficult to seek accurately, and look for a solution that might work more widely.

Typically I think implementers have gone down the route of finding some detailed specifications of media types that work for their particular application. In the web context it seems to me that we need something that would work widely. The two approaches I can think of so far that might work are:

  1. Categorise the available media types as "accurately seekable" and "not accurately seekable" and have something help scripts discover which one they have at runtime, depending on UA capabilities, so they can take some appropriate action.
  2. Add a new interface that requests UAs to pre-process media in advance in preparation for accurate seeking, even if that is a costly operation. This seems better to me than an API for "no really please do seek accurately to this time" because that would have an arbitrary performance penalty that would be hard to predict, so not great for editing applications if performance is desirable.

@chrisn
Member

chrisn commented May 14, 2020

Nigel, I'm not seeing the difference in the demo you shared. With either MP3 or WAV selected, playback starts at time zero. I must be doing something wrong..?

@nigelmegitt
Contributor

@chrisn listen to the audio description clips as they play back - the words you hear should match the text that shows under the video area, but they don't, especially for the MP3 version.

@giuliogatto

@nigelmegitt good work! I can't find the BBC Adhere repo anymore though - was it moved or removed?

@nigelmegitt
Contributor

@giuliogatto unfortunately the repo itself is still not open - we're tidying some bits up before making it open source, so please bear with us. It's taking us a while to get around to alongside other priorities 😔

@giuliogatto

@nigelmegitt ok thanks! Keep up the good work!

@1c7

1c7 commented Jul 15, 2020

@Daiz I saw a new method here: https://stackoverflow.com/questions/60645390/nodejs-ffmpeg-play-video-at-specific-time-and-stream-it-to-client

How:

  1. Use ffmpeg to live-stream a local video
  2. Use Electron.js to display the live-streamed video

Do you think it's possible to use this approach to achieve subtitle display (with near-perfect sync)?

I haven't experimented with this myself, so I am not sure if it works.

I was thinking of building this project: https://github.com/1c7/Subtitle-Timeline-Editor/blob/master/README-in-English.md

in Swift & ObjC & SwiftUI as a Mac-only desktop app,
but it seems an ffmpeg + Electron.js live-stream approach is somewhat possible too.

@1c7

1c7 commented Jul 17, 2020

One more possible way to do it (for desktop).

If building a desktop app with electron.js

node-mpv can be used to control a locally installed mpv

so loading and displaying subtitles is doable (.ass is fine),
and editing and then reloading the subtitles is also possible.
Frame-by-frame playback with the left and right arrow keys is also possible.

Node.js code

const mpvAPI = require('node-mpv');
const mpv = new mpvAPI({},
	[
		"--autofit=50%", // initial window size
	]);

mpv.start()
	.then(() => {
		// load the video
		return mpv.load('/Users/remote_edit/Documents/1111.mp4')
	})
	.then(() => {
		// load the subtitle file
		return mpv.addSubtitles('/Users/remote_edit/Documents/1111.ass')
	})
	.then(() => {
		return mpv
	})
	// this catches every error from above
	.catch((error) => {
		console.log(error);
	});


// This will bind this function to the stopped event
mpv.on('stopped', () => {
	console.log("Your favorite song just finished, let's start it again!");
	// mpv.loadFile('/path/to/your/favorite/song.mp3');
});

package.json

{
  "name": "test-mpv-node",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "node-mpv": "^2.0.0-beta.0"
  }
}

Conclusion

  • Electron.js + ffmpeg live stream seems possible
  • Electron.js + mpv (using the node-mpv module) is also possible

@nigelmegitt
Contributor

@nigelmegitt ok thanks! Keep up the good work!

Apologies, forgot to update this thread: the library part of the Adhere project was moved to https://github.com/bbc/adhere-lib/ so that we could open it up.

@tobiasBora

Just to make it clear: if I do video.currentTime = frame / framerate, do I have a guarantee that the video will indeed seek to the appropriate frame? I understand that reading from currentTime is not reliable, but I would expect that writing to currentTime is. From my experience, doing video.currentTime = frame / framerate + 0.0001 seems to work quite reliably (not sure if the 0.0001 is needed), but I'd like to be sure I'm not missing subtle edge cases.
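
(For what it's worth, a common workaround - with no guarantee behind it - is to aim for the middle of the target frame, so that rounding in either direction still lands inside it:

video.currentTime = (frame + 0.5) / framerate;

but I'd still like to know whether writing to currentTime is specified to be exact.)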

@chrisn
Member

chrisn commented Mar 8, 2022

As a next step, I suggest that we summarise this thread into a short document that covers the use cases and current limitations. It should take into account what can be achieved using new APIs such as WebCodecs and requestVideoFrameCallback, and be based on practical experience.

This thread includes discussion of frame accurate seeking and frame accurate rendering of content, so I suggest that the document includes both, for completeness.

Is anyone interested in helping to do this? Specifically, we'd be looking for someone who could edit such a document.
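
For reference, requestVideoFrameCallback already exposes per-frame presentation metadata, which is the closest primitive available today:

const video = document.querySelector('video');

function onFrame(now, metadata) {
  // metadata.mediaTime: presentation timestamp of the displayed frame
  // metadata.presentedFrames: count of frames presented so far
  console.log(metadata.mediaTime, metadata.presentedFrames);
  video.requestVideoFrameCallback(onFrame);
}
video.requestVideoFrameCallback(onFrame);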

@tobiasBora

tobiasBora commented Mar 25, 2022

It would be really cool to have guarantees on how to reach a specific frame. For instance, I was thinking that:

this.video.currentTime = (frame / this.framerate) + 0.00001;

would always reach the exact frame... But it turns out it doesn't! (at least not using Chromium 95.0) Sometimes I need a larger value for the additional term; for at least one frame, I needed to do:

this.video.currentTime = (frame / this.framerate) + 0.001;

(this appears to fail for me when trying to reach, for instance, frame 1949 of a 24fps video)

Edit: similarly, reading this.video.currentTime (even when paused, using requestVideoFrameCallback) does not seem to be frame accurate.

@tomasklaen

It's way worse for me. I've made a 20 fps testing video, where seeking currentTime to 0.05 should display the 2nd frame, but I have to go all the way to 0.072 for it to finally flip.

This makes it impossible to implement frame-accurate video cutting/editing tools, as the time ffmpeg needs to seek to a frame is always a lot different from what the video element needs to display it, and trying to add or subtract these arbitrary numbers just feels like a different kind of footgun.

@bhack

bhack commented May 2, 2024

What is the state of the art on this? Can it currently be achieved only with the WebCodecs API?

@mzur

mzur commented May 2, 2024

Here is an approach that uses requestVideoFrameCallback() as a workaround to seek to the next/previous frame: https://github.com/angrycoding/requestVideoFrameCallback-prev-next
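
The gist of the workaround (paraphrased - not the repo's exact code) is to nudge currentTime and use the callback's mediaTime to confirm that a new frame was actually presented:

function seekToNextFrame(video) {
  return new Promise(resolve => {
    video.requestVideoFrameCallback((now, metadata) => {
      // mediaTime is the presentation time of the frame now shown
      resolve(metadata.mediaTime);
    });
    video.currentTime += 0.001; // small nudge; repeat until mediaTime changes
  });
}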

@bhack

bhack commented May 2, 2024

Is that one really working?
Because on https://web.dev/articles/requestvideoframecallback-rvfc it says:

Note: Unfortunately, the video element does not guarantee frame-accurate seeking. This has been an ongoing subject of discussion. The WebCodecs API allows for frame accurate applications.
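
For completeness, the WebCodecs route hands you each decoded VideoFrame, with an exact timestamp, directly. A minimal sketch (WebCodecs does not demux containers, so the encoded chunks must come from a demuxer such as mp4box.js; the codec string is an example):

const decoder = new VideoDecoder({
  output: frame => {
    console.log(frame.timestamp); // presentation timestamp in microseconds
    frame.close();
  },
  error: e => console.error(e),
});
decoder.configure({ codec: 'avc1.42E01E' });
// decoder.decode(chunk); // EncodedVideoChunks supplied by your demuxer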

@JonnyBurger

The technique by @mzur leads to better accuracy, but in our experience it doesn't always produce perfect results either.
