
Duration does not Update with HLS Live Streams #411

Open
softworkz opened this issue Feb 4, 2024 · 72 comments

@softworkz
Collaborator

There are so many duration properties...

    var d1 = this.FfmpegMss.FormatInfo.Duration;
    var d2 = this.FfmpegMss.Duration;
    var d3 = this.FfmpegMss.GetMediaStreamSource().Duration;
    var d4 = this.FfmpegMss.PlaybackItem.StartTime;
    var d5 = this.FfmpegMss.PlaybackItem.Source.Duration;
    var d6 = this.FfmpegMss.PlaybackItem.Source.MediaStreamSource.Duration;
    var d7 = this.FfmpegMss.PlaybackSession?.NaturalDuration;
    var d8 = this.FfmpegMss.PlaybackSession?.MediaPlayer.TimelineController.Duration;

Unfortunately all are zero when using an HLS live stream (where the duration is continuously expanding).
Do you have any idea how to get accurate duration values in this case?

I haven't tried the Windows.Media adaptive streaming source, but then I would lose all FFmpegInteropX functionality, right?

@brabebhin
Collaborator

brabebhin commented Feb 4, 2024

I haven't tried the Windows.Media adaptive streaming source, but then I would lose all FFmpegInteropX functionality, right?

It would, although we recently found a way to integrate with it, inspired by Microsoft (not implemented yet).

Does the stream actually play?
As far as I remember, we were bumping the duration property every now and again on live streams as we were decoding new samples.

@softworkz
Collaborator Author

Does the stream actually play?

Yes, it plays fine and MPV player updates the duration nicely.

As far as I remember, we were bumping the duration property every now and again on live streams as we were decoding new samples.

Sounds good, if only it would actually happen... ;-)

@lukasf
Member

lukasf commented Feb 4, 2024

Not sure what you are trying to get @softworkz. A live stream by definition does not have a duration. I think you can get the current playback position from the MediaPlayer's PlaybackSession, if that is what you need.
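
For reference, the current position is available on the standard Windows.Media playback session regardless of whether a duration is known (minimal sketch, assuming an existing MediaPlayer instance named mediaPlayer):

    var position = mediaPlayer.PlaybackSession.Position;          // TimeSpan, advances during live playback
    var duration = mediaPlayer.PlaybackSession.NaturalDuration;   // stays zero for a live stream here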

@softworkz
Collaborator Author

A live stream by definition does not have a duration

Well - philosophically rather not, but there is a duration, which is the total range of available segments in the playlist. For the simpler case where no older segments drop out, that duration increases with each added segment.

Duration is important so that you know (and can display) within which range you are able to seek, especially when you are presenting a timeline which is based on wall-clock time and/or bounded by chapters or program events/shows (TV).

@softworkz
Collaborator Author

For illustration - it's about the blue range on the timeline.

[screenshot: player timeline with the blue range marking the seekable window]

@brabebhin
Collaborator

I think the seekable ranges thing is a tad more complicated than that, I'd assume there's some buffering involved that allows seeking back.
The conundrum here I think is that MPE is no longer seekable if duration is 0.

@brabebhin
Collaborator

Try the AutoExtendDuration property in the configuration; it should turn on automatic duration extension.

@softworkz
Collaborator Author

I think the seekable ranges thing is a tad more complicated than that,

Not really. I mean it's not trivial to present it correctly, but from the player side, it's all about getting an up-to-date duration.

I'd assume there's some buffering involved that allows seeking back

HLS works with segments (of around 3 seconds each), which avoids the need for excessive buffering. The player just reloads the playlist (again and again and again...) to know about the stream. It doesn't need to load any media data for that.

The conundrum here I think is that MPE is no longer seekable if duration is 0

It is still seekable. But when you seek to a point outside of the valid range, it hangs for 0.5-2s, which is another reason why this needs to be known.

Try the AutoExtendDuration property in the configuration; it should turn on automatic duration extension.

Oh thanks, sounds promising, I'll try!

Any other properties that might need to be set differently for live streams?

@softworkz
Collaborator Author

AutoExtendDuration

Where is that?

@brabebhin
Collaborator

In MediaSourceConfig, and if you use the winui branch, in the "General" section.

But by looking at the code, this should be true already, so it might not work.
Some other interesting properties are ReadAheadBufferEnabled, SkipErrors, ReadAheadBufferSize, ReadAheadBufferDuration. They are all in the sample config class.
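
For illustration, setting a few of those before creating the media source could look roughly like this (a sketch only: the exact property placement, types/units and the CreateFromUriAsync overload are assumptions, and the winui branch groups these settings under a "General" section):

    var config = new MediaSourceConfig();
    config.AutoExtendDuration = true;                            // duration bumping for streams that outgrow their reported length
    config.ReadAheadBufferEnabled = true;
    config.ReadAheadBufferDuration = TimeSpan.FromSeconds(30);   // unit assumed
    config.ReadAheadBufferSize = 50 * 1024 * 1024;               // bytes, assumed

    var ffmpegMss = await FFmpegMediaSource.CreateFromUriAsync(streamUrl, config);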

@softworkz
Collaborator Author

[screenshot: AutoExtendDuration not available on the config class]

@brabebhin
Collaborator

Oops, it is not in the IDL.
Having to manually edit the IDL was a problem waiting to happen.

@softworkz
Collaborator Author

The real annoyance of all that is not the IDL editing - it's that you always need to make the same change manually in 5 different places and make no mistakes... I really don't like that, it just slows you down.

@brabebhin
Collaborator

Yeah, this is why we resisted migrating to C++/winRT for as long as we could...

@brabebhin
Collaborator

The curious thing is that the property is in the IDL on the winui branch.

@softworkz
Collaborator Author

I haven't picked it up. It was added after I had forked:

[screenshot: commit graph showing the fork point and the later commit that re-added the property]

(yellow is mine, light blue is where it's been re-added)

@softworkz
Collaborator Author

Damn - all for nothing. AutoExtendDuration is true by default, LOL

@brabebhin
Collaborator

brabebhin commented Feb 4, 2024

Yeah, I feared as much.
This might be a bug. If you can provide me an HLS test link, I'll look into it. This should be the scenario for that property in the first place.

The property has been on master for a long time. It is even used in the C++ code, just not in the IDL. I probably picked it up when I refactored the config.

Is that GitExtensions that you're using?

@softworkz
Collaborator Author

Is that GitExtensions that you're using?

The screenshot? That's SmartGit.

@lukasf
Member

lukasf commented Feb 9, 2024

AutoExtendDuration does not help you. It is only used in seekable streams which do have a duration, not in live streams. It's also a "stupid" solution, just extending the duration by 10 seconds each time playback goes over the end time. And as you found out, it's enabled by default.

I don't think that there is a way to get the required information from ffmpeg. The HLS/DASH support in ffmpeg is generally pretty poor: very slow playback start, little control over what happens during playback, and also not enough information for seamless video stream switching. For full-featured HLS/DASH support, we'd need a custom stream parser, which would be quite a lot of work due to the multitude of ways these streams can be constructed.

@softworkz
Collaborator Author

AutoExtendDuration does not help you.

Right, it doesn't.

But there must be a way, because the MPV player uses the HLS demuxer from ffmpeg (I recently added improved VTT subtitle support - for MPV).
The way MPV does it is to set the start of the playlist to zero (it does that for all playback by default) and then extend the duration while the HLS live playlist grows.

Other players follow a different philosophy, saying that a live stream cannot have a duration, and set it to zero. They then provide the playlist range in the form of a different API. For Windows.Media, there's the GetSeekableRanges API (alongside GetBufferedRanges), which is the same approach HTMLVideoElement takes in browser engines.

It doesn't matter whether it's one way or the other, but it's crucial to get this information in some way, because without it, you cannot provide proper timeline display and seeking control in such streams.
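
For reference, both of those are queried on the playback session and return lists of MediaTimeRange (standard Windows.Media APIs; mediaPlayer is assumed to be an existing MediaPlayer):

    var session = mediaPlayer.PlaybackSession;
    var seekable = session.GetSeekableRanges();   // where seeking is currently possible
    var buffered = session.GetBufferedRanges();   // what has already been downloaded
    foreach (var range in seekable)
        Debug.WriteLine($"Seekable: {range.Start} - {range.End}");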

@brabebhin
Collaborator

I suppose the GetSeekableRanges API is controlled by

https://learn.microsoft.com/en-us/uwp/api/windows.media.core.mediastreamsource.setbufferedrange?view=winrt-22621#windows-media-core-mediastreamsource-setbufferedrange(windows-foundation-timespan-windows-foundation-timespan)

We could integrate our read-ahead buffer with this. However, the read-ahead buffer only buffers ahead. In order to get some useful back-seeking functionality, we would also need to keep some back buffer.
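
For reference, that API is a setter on the MediaStreamSource; reporting the span currently held by a buffer would look like this (sketch, assuming bufferStart/bufferEnd are TimeSpans tracked elsewhere):

    // Tells the system which range is buffered (this concerns buffered, not seekable, ranges).
    mediaStreamSource.SetBufferedRange(bufferStart, bufferEnd);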

@softworkz
Collaborator Author

I suppose the GetSeekableRanges API is controlled by

No, that's for buffered ranges.

@brabebhin
Collaborator

I'd hazard a guess and assume those are the same for live streams, but could be wrong.
Another way we could do this would be to implement the stream handler (like MS did: microsoft/FFmpegInterop#305). This would allow us to indirectly feed into the AdaptiveMediaSource, which theoretically should allow us to handle DASH using the Windows.Media APIs.

I have no idea if this would work, but supporting the byte stream handler shouldn't be too difficult. I also don't know how this would work with the various APIs that we expose through FFmpegMediaSource (like effects, subtitles).

@softworkz
Collaborator Author

softworkz commented Feb 9, 2024

I'd hazard a guess and assume those are the same for live streams, but could be wrong.

To disambiguate the two:

Buffered Ranges

These are the time ranges for which content has been downloaded and can be played without further network (I/O) requests.

Seekable Ranges

Typically there's just a single such range. It indicates the time range for which content is available ("can be downloaded").

The specs - e.g. HLS - provide for cases of discontinuities or interruptions, which could be reflected by more than a single "seekable range".

@softworkz
Collaborator Author

Another way we could do this would be to implement the stream handler (like MS did: microsoft/FFmpegInterop#305). This would allow us to indirectly feed into the AdaptiveMediaSource, which theoretically should allow us to handle DASH using the Windows.Media APIs.

Yea, I had thought of that, but I'm not sure how easy/difficult that would be.

I think the least involved way would be to get this information somehow from ffmpeg - in the worst case by accessing the HLS demuxer directly, though MPV doesn't seem to do that. I haven't found out yet how they determine the duration.
Maybe it's normally available from the demuxer and FFmpegInteropX is just not regularly reading and updating it?

@brabebhin
Collaborator

It seems duration is exposed in AVStream and AVFormatContext. But I don't have any "live" URLs to check with. The URLs I found all report the right duration from the start.

@softworkz
Collaborator Author

Here are some you can use: https://www.harryshomepage.de/webtv.html

@brabebhin
Collaborator

Thanks. I'll look at this over the weekend.

@lukasf
Member

lukasf commented Feb 10, 2024

Duration is not set in FFmpeg for live streams, and it would also be logically wrong to set a duration since there is no duration.

The only way I currently see this supported is by using the ReadAheadBuffer and adding APIs to query the last position that is being buffered in the two active playback streams. It could be that MPV uses a similar approach, since as I said, I do not see any API support for this in ffmpeg. When I implemented the buffer, I was planning to add an API to get the buffer state, with buffer size and duration for both of the current streams. But then I did not see any real use for it, so it's not there yet. It should not be too hard to implement. Check the IsFull method in StreamBuffer; all the data is easily available.

I also tried setting the BufferedRange on the MediaStreamSource once, hoping we would see the buffered range in the seek bar or something like that, but I did not see any effect. I just saw in the docs that it is rather used for power saving.

@softworkz
Collaborator Author

If you use an AdaptiveMediaSource, do you get the desired behavior?

Yup, just tried it. With AdaptiveMediaSource and one of the TV streams from the link I posted, I can freely seek within the past three hours and PlaybackSession.GetSeekableRanges returns the appropriate range accordingly.
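
For comparison, the plain Windows.Media path that produces this behavior (standard APIs, no FFmpegInteropX involved; hlsUrl and mediaPlayer are assumed to exist):

    var result = await AdaptiveMediaSource.CreateFromUriAsync(new Uri(hlsUrl));
    if (result.Status == AdaptiveMediaSourceCreationStatus.Success)
    {
        var source = MediaSource.CreateFromAdaptiveMediaSource(result.MediaSource);
        mediaPlayer.Source = new MediaPlaybackItem(source);
    }

    // Later, during playback, the live seek window:
    var ranges = mediaPlayer.PlaybackSession.GetSeekableRanges();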

@brabebhin
Collaborator

I guess this is a gap in our implementation or ffmpeg's. We could implement the byte stream handler, but you will lose subtitles support, unfortunately.

@softworkz
Collaborator Author

Subtitle support (and maybe some audio codecs) is the only reason to prefer FFmpegInteropX over the plain Windows.Media implementation (because all HLS streams use codecs which are supported by Windows anyway).

Maybe it would be possible to:

  • Read the subtitle segments from the HLS demuxer to determine the available range
    • and report this as "SeekableRange"
    • also use this information for providing proper start and current position values
  • When a seek is performed to a place outside the buffer but within the seekable range, ffmpeg could be restarted internally without Windows.Media even knowing about it...

@brabebhin
Collaborator

Another option is to support subtitles through the MF API, but we would lose custom styling and custom fonts.

@brabebhin
Collaborator

Actually, now that I think of it, the subtitles are just text scripts; it shouldn't be too difficult to generate the strings after we modify them to fit the custom stuff. It would be like having our own ass/ssa generator.

We also need to make the current sample providers async to avoid queuing packets until a subtitle is found. That could lead to memory leaks.

@lukasf
Member

lukasf commented Feb 22, 2024

I am not sure if the bytestream handler approach would work here. Actually, the bytestream handler is only for files, but there is something similar for URLs. Still, if you implement that, MF will pass you the URL and expect you to do everything. Then we are again limited by the ffmpeg hls/dash format handlers, which just do not provide any of the information needed for seekable ranges. So I don't think that this would solve any of the issues. The ffmpeg support is really only very basic, rather intended for transcoding and ripping. Any player that seriously wants to play dash/hls rolls their own stream parser, which makes playback start a whole lot faster and allows for seamless stream switching and the like, and of course seekable ranges are not a problem then either.

If we'd want to utilize the AdaptiveMediaSource, we'd need to go more downlevel. It might be possible to register as a mp4 demuxer codec, to let MF do the stream parsing and instead hook in more downstream on mp4 segment layer. Then we could demux the segments using FFmpeg APIs and do subtitle transcoding (any text format to ssa) and optionally also decoding of other formats.

But this seems like a whole new project with lots of work to do. I really don't have the capacities for that. I have done MF bytestream handlers in the past, it's pretty complicated stuff. Demuxer codecs are even more complicated, with multiple outputs and format negotiations on all the pins and stuff. The UWP MediaStreamSource was really a major improvement, I surely don't miss doing things down on MF level 😄

Also, I am not even sure if this approach would work. It could also be that the AdaptiveMediaSource does all the demuxing internally. In that case, the question would be if it just drops any unknown subtitle formats, or if it exposes them. In the latter case, it might be possible to register as a decoder codec and try to do subtitle transcoding from other text formats to ass.

So all in all, I see lots of work and lots of question marks...

@brabebhin
Collaborator

I wonder how difficult it would be to implement an HLS parser in our library.

@lukasf
Member

lukasf commented Feb 22, 2024

Since this only really affects live streams and recorded content from live streams: Why don't you just use the AdaptiveMediaSource for HLS/DASH and FFmpegInteropX for everything else? I think live streams only use standard codecs for audio, video and subtitles. You won't find any weird formats there, so I wonder how much benefit you would have from using our lib in there.

@softworkz
Collaborator Author

Since this only really affects live streams and recorded content from live streams: Why don't you just use the AdaptiveMediaSource for HLS/DASH and FFmpegInteropX for everything else? I think live streams only use standard codecs for audio, video and subtitles. You won't find any weird formats there, so I wonder how much benefit you would have from using our lib in there.

We're using HLS not only for TV streams; it's also our primary streaming format for everything that gets transcoded. AdaptiveMediaSource doesn't support all the subtitle codecs, and converting ASS to VTT loses all the formatting, for example. And we would need to burn in all the graphical subtitles.
Also, I'm not sure about audio format support, but I think it's not as bad as with subtitles.

Do you know whether AdaptiveMediaSource supports WebVTT subtitles?

And second question: Could FFmpegInteropX be used as a MediaSource for a subtitle stream alone?

@brabebhin
Collaborator

brabebhin commented Feb 22, 2024

And second question: Could FFmpegInteropX be used as a MediaSource for a subtitle stream alone?

Yes and no.

When we implemented subtitles, the TimedMetadataStreamDescriptor and its associated functionality was not yet exposed in winRT.
So we implemented subtitles by exposing them in a TimedMetadataTrack. This basically works, but it means we don't fully support the MediaStreamSource contract - we bypass it for subtitles - so our subtitles are not exposed as streams in the underlying IMFMediaSource interface that's used by various things inside MF, including AdaptiveMediaSource, transcoding, bytestream handlers etc. As it turns out, this is quite complicated to undo, although I would like to do it eventually, maybe once the current batch of PRs is merged.

We do support parsing a TimedMetadataTrack with ffmpeg APIs, but that is currently only exposed to work with the MediaPlaybackItem that's created by the FFmpegMediaSource itself.

But it shouldn't take too long to support parsing random files and returning a bare TimedMetadataTrack you can assign to random MediaPlaybackItems, including those created by an AdaptiveMediaSource.
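
If that gets added, usage could look something like this (purely hypothetical entry point for illustration - the parsing API does not exist yet; ExternalTimedMetadataTracks is an existing Windows.Media.Core.MediaSource property):

    // Hypothetical: parse an external subtitle with an FFmpegInteropX API...
    var subtitleTrack = await FFmpegMediaSource.ParseSubtitleToTrackAsync(subtitleUri);
    // ...and attach the resulting track to a MediaSource created from an AdaptiveMediaSource.
    adaptiveBackedMediaSource.ExternalTimedMetadataTracks.Add(subtitleTrack);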

@softworkz
Collaborator Author

If we'd want to utilize the AdaptiveMediaSource, we'd need to go more downlevel. It might be possible to register as a mp4 demuxer codec, to let MF do the stream parsing and instead hook in more downstream on mp4 segment layer. Then we could demux the segments using FFmpeg APIs and do subtitle transcoding (any text format to ssa) and optionally also decoding of other formats.

But this seems like a whole new project with lots of work to do. I really don't have the capacities for that. I have done MF bytestream handlers in the past, it's pretty complicated stuff. Demuxer codecs are even more complicated, with multiple outputs and format negotiations on all the pins and stuff. The UWP MediaStreamSource was really a major improvement, I surely don't miss doing things down on MF level 😄

Also, I am not even sure if this approach would work. It could also be that the AdaptiveMediaSource does all the demuxing internally. In that case, the question would be if it just drops any unknown subtitle formats, or if it exposes them. In the latter case, it might be possible to register as a decoder codec and try to do subtitle transcoding from other text formats to ass.

So all in all, I see lots of work and lots of question marks...

Thanks a lot for your thoughts. I see it pretty much the same way. We're using MPEGTS in all cases, no MP4, but this doesn't really change anything. It's a complex task, no matter at which point you start to plumb something in. Even when re-using existing parts of the code, there are still many caveats which come simply from the fact that the Windows.Media APIs are a different thing, and even though there are some low-level extension points, they don't provide much flexibility there. They only provide for very specific use cases where you need to do things exactly in the intended way - otherwise it fails.

What comes on top is that we need a mature solution - not some kind of proof-of-concept - and there's a long way from one to the other. Just think about dealing with gaps, a/v stream offsets or start-time offsets, discontinuities and the like.
It's too much for me to take either...

But it shouldn't take too long to support parsing random files and returning a bare TimedMetadataTrack you can assign to random MediaPlaybackItems, including those created by an AdaptiveMediaSource.

That sounds like it could be the way to go. The ability to serve as a provider for external subtitle streams already exists, so I can't imagine that it would be a huge task to make it usable together with AdaptiveMediaSource. It would be about external subtitles only - maybe WebVTT from the same master playlist if that's possible, but I hope AdaptiveMediaSource can do VTT by itself anyway. Do you know whether it can?

@brabebhin
Collaborator

That sounds like it could be the way to go. The ability to serve as a provider for external subtitle streams already exists, so I can't imagine that it would be a huge task to make it usable together with AdaptiveMediaSource.

True, just some cppwinrt shenanigans. But I now realize we don't actually support doing this external subtitle from a URI; we only support stream-based subtitles.
Again, nothing to write home about, it will just take me a little while longer to do it.

The stream-based stuff is already done; you can check the external-subtitle-parser branch.

@softworkz
Collaborator Author

But I now realize we don't actually support doing this external subtitle from a URI; we only support stream-based subtitles.
Again, nothing to write home about, it will just take me a little while longer to do it.

This doesn't need any further action. I've been using external subs from URIs all along; it was one of the first things I did, and it's working fine:

    var uri = new Uri(url);

    var streamRef = RandomAccessStreamReference.CreateFromUri(uri);
    var ras = await streamRef.OpenReadAsync();

    var subtitleStreamInfo = await ffmpegMss.AddExternalSubtitleAsync(ras, stream.DeliveryUrl);

I think we even talked about it earlier.

@softworkz
Collaborator Author

I (accidentally) found something interesting: https://github.com/SuRGeoNix/Flyleaf

It's something similar to (but still different from) FFmpegInteropX, and they claim:

[screenshot: Flyleaf's feature list from its README]

Haven't looked at the code yet..

@brabebhin
Collaborator

That seems to be using a custom ffmpeg build.

@softworkz
Collaborator Author

Yes, but the referenced patchset is pretty minimal: https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=1018

@brabebhin
Collaborator

Yeah, I noticed that. I wonder why this is not merged into ffmpeg proper already.

@lukasf
Member

lukasf commented Feb 26, 2024

There are loads of great patches for ffmpeg which have never been merged, lost and forgotten in this stupid mailing list system. It's really a tragedy.

Flyleaf itself seems to be a pure managed implementation. Microsoft strongly advises against using any managed code inside media engines. I have seen those GC hiccups myself. Better stay 100% native from source to renderer if you want glitch-free animations/playback.

@brabebhin
Collaborator

The question is, do we want to have a custom ffmpeg build?

@softworkz
Collaborator Author

There are loads of great patches for ffmpeg which have never been merged, lost and forgotten in this stupid mailing list system. It's really a tragedy.

Very true and very sad indeed!

@softworkz
Collaborator Author

Flyleaf itself seems to be a pure managed implementation. Microsoft strongly advises against using any managed code inside media engines. I have seen those GC hiccups myself. Better stay 100% native from source to renderer if you want glitch-free animations/playback.

I think those GC collection issues are rather a thing of the past with recent .net versions. Also, you can get pretty close to C in algorithm performance, but only when using C# unsafe blocks with pointer arithmetic - and still only close, not equal.
The biggest gap is memory management, and p/invoking to do it is awkward. There's new stuff coming for this, though (System.Runtime.InteropServices.NativeMemory). Haven't checked whether it's in net8.0.

But despite all those possibilities, I think it's more challenging to do it properly from C# than implementing it with the language the API was made for.
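
For illustration, a minimal sketch of System.Runtime.InteropServices.NativeMemory (it has been available since .NET 6): it allocates raw memory the GC never scans or moves.

    using System;
    using System.Runtime.InteropServices;

    unsafe
    {
        byte* buffer = (byte*)NativeMemory.Alloc((nuint)4096);   // unmanaged allocation
        try
        {
            new Span<byte>(buffer, 4096).Clear();                 // use it like any raw buffer
        }
        finally
        {
            NativeMemory.Free(buffer);                            // must be freed manually
        }
    }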

@softworkz
Collaborator Author

The question is, do we want to have a custom ffmpeg build?

I think the first part is to understand how they are doing it. I cannot imagine that these small patches alone can do the trick.

If it turns out to be feasible in some way, then I'm pretty sure that I'll be able to get it merged into ffmpeg.

@brabebhin
Collaborator

I think those GC collection issues are rather a thing of the past with recent .net versions. Also, you can get pretty close to C in algorithm performance, but only when using C# unsafe blocks with pointer arithmetic - and still only close, not equal.
The biggest gap is memory management, and p/invoking to do it is awkward. There's new stuff coming for this, though (System.Runtime.InteropServices.NativeMemory). Haven't checked whether it's in net8.0.

But despite all those possibilities, I think it's more challenging to do it properly from C# than implementing it with the language the API was made for.

The main advantage of using unmanaged code is that the data resides in another area of memory.
I agree with you that simply using unsafe pointers is likely going to yield C-like performance - but if and only if your entire application does that. Because if you use the usual C# objects in any area of the app, you will eventually cause a GC freeze. And one place where this easily happens in media playback is the seek bar, which produces garbage strings around the clock.

If you could separate the managed media code from the rest of the managed app and only have the GC freeze the rest of the app, then yes, you could write a gc-freeze-free media app in C#.

@brabebhin
Collaborator

I think the first part is to understand how they are doing it. I cannot imagine that these small patches alone can do the trick.

If it turns out to be feasible in some way, then I'm pretty sure that I'll be able to get it merged into ffmpeg.

I can clone that and have a look.
But it wouldn't be surprising for those 2 little patches to fix everything, maybe it really is that small of a deal.

@lukasf
Member

lukasf commented Feb 27, 2024

I think those GC collection issues are rather a thing of the past with recent .net versions.

While part of the GC can happen in the background, it's still the case that the GC can halt your thread at any point while executing managed code, to perform bigger GC operations. So just when you wanted to submit the latest frame to the renderer, the GC can pause your thread, resulting in your frame being displayed one refresh interval too late. This kind of occasional glitch can only be prevented by pure native code.

I am aware that performance wise, the latest .net core versions have considerably narrowed down the performance gap to native code. Especially the whole ref-like thing (Span, etc) was a major breakthrough in performance.

@lukasf
Member

lukasf commented Mar 3, 2024

The main issue is that ffmpeg does not set any duration for live HLS streams. Neither on AVStream nor on AVFormatContext. Also during playback, durations are never set/updated. So to properly support this with ffmpeg, a patch would be needed, to (as an option) set durations for HLS also in case of live streams (use last available segment end time), and update the duration as new segments become available.

I don't think that seeking in live HLS streams is a big issue, because it already works without issues in non-live HLS streams. It makes sense to me that a small patch is sufficient, if one is needed at all.

@softworkz
Collaborator Author

a patch would be needed, to (as an option) set durations for HLS also in case of live streams (use last available segment end time), and update the duration as new segments become available.

The last available segment doesn't have an end time, though.

What you need to do (sketched in code after this list) is:

  • Treat the start of the first available segment as time 0
  • Iterate over all segments, summing up the durations => This is the total duration
  • New segments which become available continuously add to the duration
  • Old segments which are dropped from the start do not affect the duration (are not subtracted)
    • This obviously cannot be done because it would render all timing invalid
    • Instead, you just cannot seek back to that range where no segments are available anymore
    • This is where the seekableRange comes into play: The seekable range is continuously updated and indicates the earliest point in time to which you can seek back (because a segment is still available for it)
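
A rough sketch of that bookkeeping (hypothetical helper type for illustration, not FFmpegInteropX code; segment durations come from the periodically refreshed playlist):

    class LiveHlsTimeline
    {
        private TimeSpan totalDuration = TimeSpan.Zero;    // grows with every newly published segment
        private TimeSpan droppedDuration = TimeSpan.Zero;  // segments that fell out of the playlist head

        public void OnSegmentAdded(TimeSpan segmentDuration) => totalDuration += segmentDuration;
        public void OnSegmentDropped(TimeSpan segmentDuration) => droppedDuration += segmentDuration;

        // Duration only ever grows; dropped segments are not subtracted.
        public TimeSpan Duration => totalDuration;

        // Earliest point that can still be seeked to, up to the live edge.
        public (TimeSpan Start, TimeSpan End) SeekableRange => (droppedDuration, totalDuration);
    }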

I had taken a quick look at the indicated patches and they don't appear to do anything like that...

@lukasf
Member

lukasf commented Mar 4, 2024

Oh yeah, I forgot about the start time. There is not even a field available in ffmpeg where this could be stored. That makes it even more difficult to bring this into ffmpeg.

The patch probably only solves some minor issue when performing a seek operation into live streams. But for all the rest, I guess they manually parse the stream to get the required info. There is just nothing there in ffmpeg.

@softworkz
Collaborator Author

Yup, it would require some extensions to the HlsDemuxer or reading/consuming the playlist in parallel.
That's why I said that this little patch they are referencing can't be the key (alone). But I haven't followed their own code yet to see what they are really doing.
