Conversation

calvaris (Member):
After some investigation I got the idea that using the need-data signal on appsrc could be what we needed to remove the data starve timeout in the cases where the demuxer stalls. I also realized that that signal tells us, in every case, when the demuxer has finished processing, so we can really get rid of the custom events as well and rely only on that signal.

This replaces the whole end-of-append detection based on custom downstream events and also removes the need for the data starve timer.
Setting and reading might happen in different threads, so it is better to protect the flag with a mutex.
When resetting the pipeline we also need to reset whether the first buffer went through.
Since we no longer need custom events, we no longer need to track them.
When the probe is not needed anymore, we can remove it and reinstate it again when necessary.
{
ASSERT(WTF::isMainThread());
LOG_MEDIA_MESSAGE("resetting pipeline");
m_atLeastABufferLeftAppsrcMutex.lock();
A reviewer commented:

I think you can simply use LockHolder lock(m_atLeastABufferLeftAppsrcMutex);

calvaris (Member, Author) replied:

No, I can't: the rest of the method ends up claiming the mutex somewhere else, and then we would have a deadlock. That was the first thing I tried. I could use unlockEarly() on the MutexLocker, but I don't think it pays off for such a straightforward method.

calvaris added a commit that referenced this pull request Feb 19, 2016
[GStreamer][MSE] Remove use of timeouts completely on AppendPipeline
calvaris merged commit c2eeac9 into master on Feb 19, 2016
calvaris deleted the calvaris/timeouts branch on February 19, 2016 at 13:52
zdobersek added a commit that referenced this pull request Jul 27, 2016
WPE: Make the backend switchable at compile- and runtime.
