[GStreamer][MSE] Removes the use of timeouts to detect data starve and last sample stages #21
Conversation
void AppendPipeline::handleEndOfAppendDataMarkNeeded()
{
    GstEvent* event = gst_event_new_custom(GST_EVENT_CUSTOM_DOWNSTREAM, gst_structure_new_empty("end-of-append-data-mark"));
    m_appendIdMarkedInSrc = guint64(gst_event_get_seqnum(event));
Please simply make this member a guint32
Force-pushed from e0d312c to b35e2a4
    return GST_PAD_PROBE_OK;

    guint id = gst_event_get_seqnum(event);
    TRACE_MEDIA_MESSAGE("custom downstream event id=%d", id);
%u
Thanks for the comments, I'll fix these tomorrow morning. There also seem to be some other typos, like "reinsterting".
For this we use custom events. Signed-off-by: Xabier Rodriguez Calvar <calvaris@igalia.com>
They only assert if reached.
Instead of deferring to the main thread with timeouts, we always post the message on the bus as an application message. When we receive the message, we advance the state if we are in Sampling instead of Ongoing.
This method does not blindly declare the end of the append operation; it performs the checks that were previously done in some cases before calling it. Also, we reset the mark received at the sink to 0 once it has been dealt with.
In some cases the current custom events can go through the appsrc src pad before the buffer itself, so the buffer is never processed. Now we add a probe for buffers on the appsrc src pad, and when a buffer goes through, we engage the process of marking the end of the append. The end of the append is handled on the main thread by sending an application message through the bus.
This renames the appsink probe functions and attributes.
Force-pushed from b35e2a4 to ebd0852
typefind is causing problems by swallowing some buffers and not letting that information go through
In some rare cases, especially in test 30, where appends happen without enough data, qtdemux stalls waiting for data and the append never ends. This should happen very rarely, so I reinstated the timeout to resolve the issue while we look for an alternative, more robust solution.
According to Phil's review.
Force-pushed from ebd0852 to dc98ea8
class AppendPipeline : public ThreadSafeRefCounted<AppendPipeline> {
public:
    enum AppendStage { Invalid, NotStarted, Ongoing, KeyNegotiation, DataStarve, Sampling, LastSample, Aborting };

    static const unsigned int s_dataStarvedTimeoutMsec = 1000;
The lower this value is (while keeping things working), the better.
Yes and no. This case is triggered only in situations that happen very rarely, so it is not a performance loss, and you also told me that such low timeouts could cause issues in Debug mode, etc. For those reasons I increased the value, and I think 2s is OK for the cases where it has to be triggered.
README.md: improve instructions
The timeouts were kept, raising an error and an ASSERT, to be able to detect any possible regressions in the near future. If this goes well, we can remove the timeouts.
This implementation was based on an initial one by @eocanha and I corrected the pending issues.