GStreamer Inter*-Elements introduce a/v-asynchronity by retimestamping Buffers #58
Comments
@fightling I documented my current state of knowledge around this set of problems. Let's talk about it whenever you find the time.
I did the first part at branch …; I also merged my …. The result is currently three errors:
I don't get that point. I am assuming you mean the tcp-video inputs in …
As an aside: While decklink sources are probably more reliable than TCP ones, they can still be restarted (we only expose this in the network protocol). It would be interesting to see how interpipe handles this. We could just remove the restart support though. Its main use-case is getting AV sync back, which should not happen with interpipe. @fightling
@Florob Can you please explain to me which source types shouldn't use …
@fightling The ticket explains the reason in the "TCP-Sources" section.
In the sources we probably want to have a videotestsrc running, and upon disconnection of the TCP source we switch the source end of the interpipe to the videotestsrc. Similarly, upon connection, we switch it back to the new tcp-source. Because everything runs under the same clock, this should not introduce any delay or desync.
@fightling Actually we don't need the tcp sources for this congress. They are used solely for Pause and NoStream video (which can instead be replaced with an image source) and audio (which is not really critical). Decklink and image sources should be simpler to migrate.
Thanks @MaZderMind for the hints. By the way: I have added an option -g to voctocore on the feature/interpipe branch to activate GStreamer message logging, to get to the bottom of the problems that occur. It's working and offers some clues about what happens (I'm currently using a videotestsrc to get around the TCP source problem):
Here are the affected pipelines to which the errors relate:
Any ideas?
@fightling The interpipe elements try to do automatic caps negotiation between source and sink (see Caps Negotiation and Dynamic Switching), but they can also be fixed to a well-defined set of caps (see the caps property in the gst-inspect output). As we know which caps we have both on the source and the sink side, it might help to set the caps property on all interpipe elements.
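For reference, a minimal sketch of what pinning the caps on both ends could look like, assuming the Python GStreamer bindings voctocore already uses (element and node names here are made up for illustration):

```python
# Illustrative sketch, not actual voctocore code: pin both interpipe ends to a
# fixed caps set instead of relying on automatic negotiation.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

VIDEO_CAPS = Gst.Caps.from_string(
    'video/x-raw,format=I420,width=1280,height=720,framerate=25/1')

# The interpipesrc finds its counterpart via the interpipesink's element name.
sink = Gst.ElementFactory.make('interpipesink', 'cam1-out')
sink.set_property('caps', VIDEO_CAPS)

src = Gst.ElementFactory.make('interpipesrc', 'cam1-in')
src.set_property('listen-to', 'cam1-out')
src.set_property('caps', VIDEO_CAPS)
```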
@MaZderMind I already moved all the caps we used within Gst.Caps to the interpipes. I thought this would be the best practice because negotiating the caps twice should not be necessary. You can see the caps within the interpipe elements in the pipeline drawings.
Note: There are now …
I have successfully attached an fpsdisplaysink behind the compositor. So it seems that the pipeline is working, but maybe the connection to voctogui is the remaining problem.
When I remove the audio caps from the client side, the videos are playing.
@danimo is this solved in voctomix2? @MaZderMind shall we close this?
@fightling As far as I read the communication, there are no inter-elements in voc2mix, so this is probably not applicable anymore.
there are no inter*-elements anymore in voc2mix
The inter* Elements
Voctomix internally uses GStreamer's inter* elements (intervideosink, intervideosrc, interaudiosink and interaudiosrc) to separate different GStreamer pipelines, each concerned with only a part of the overall task. This makes the architecture more modular and easier to extend.
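As a minimal sketch of this pattern (illustrative only; the pipeline contents are not the actual voctocore pipelines), two independent pipelines can be tied together via a shared channel name:

```python
# Two independent pipelines connected by the inter* elements. The producer
# writes frames into the shared surface "cam1"; the consumer reads them back
# out. The pipeline contents are illustrative, not the actual voctocore ones.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

producer = Gst.parse_launch(
    'videotestsrc is-live=true ! '
    'video/x-raw,width=1280,height=720,framerate=25/1 ! '
    'intervideosink channel=cam1')

# If no frame is waiting in the surface when one is due, the intervideosrc
# emits a black frame instead (see the description below).
consumer = Gst.parse_launch(
    'intervideosrc channel=cam1 ! '
    'video/x-raw,width=1280,height=720,framerate=25/1 ! '
    'autovideosink')

producer.set_state(Gst.State.PLAYING)
consumer.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```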
But the inter* elements are very simple. The *sink and *src elements allocate a shared surface which both reference. Incoming frames (I'll say frames for now, but the same applies to audio buffers) are stored by the *sink elements via a pointer in that surface.
When a frame is requested from the *src element (based on the pipeline's fps), it checks if there is a buffer and, if not, creates a black frame. In both cases the buffer's timestamp is discarded and a new one is calculated based on the number of frames seen and the framerate.
Source and sink pipelines run in different threads. When the source or the sink is not perfectly in sync, for example because CPU congestion keeps one thread from being resumed in time, a black frame is injected into the sink pipeline.
This results in two unwanted problems: a spurious black frame appears in the output, and when this happens more than once, video drifts with regard to audio. The root cause of this is that the inter* elements discard the original timestamps and re-timestamp every buffer based only on their own frame counter and the framerate.
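To make the drift mechanism concrete, here is a toy calculation based on the retimestamping behaviour described above (the numbers are made up; this is not the element's actual code):

```python
# Toy model of the retimestamping: the PTS of every outgoing buffer depends
# only on how many buffers the *src element has produced so far, not on when
# the frame was actually captured.
FRAMERATE = 25
FRAME_DURATION_MS = 1000 // FRAMERATE          # 40 ms per frame at 25 fps

def retimestamped_pts_ms(buffers_produced):
    return buffers_produced * FRAME_DURATION_MS

# Frame 100 of the capture should sit at 4000 ms, aligned with its audio.
print(retimestamped_pts_ms(100))               # 4000

# If two black frames were injected earlier because the threads fell out of
# step, the same captured frame is now buffer number 102 and gets a PTS of
# 4080 ms, while the corresponding audio stays at 4000 ms: 80 ms of a/v drift.
print(retimestamped_pts_ms(102))               # 4080
```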
The inter* elements additionally have some restrictions that are problematic for us, but we've worked around them. For example, they only support a very limited set of color modes and only a single stereo audio track.
The InterPipe* Elements
Some years ago (but quite a while after we first used voctomix), GstInterpipe appeared. At first glance it correctly addresses all of the above-mentioned problems. Furthermore, it is color-mode agnostic and can handle multiple audio tracks.
It seems that switching away from the existing inter* elements would at least be worth a try.
There are two areas where I think special care is needed:
TCP-Sources
In most places we do not actively use the "black frame generation" feature of the inter* elements. Most places just start up and then generate a continuous stream of video frames. This includes the image and decklink sources.
The only place where we actively depend on the inter* elements generating black frames is the tcp-sources. The tcp-source pipelines start up and terminate together with the incoming tcp-connection.
The new interpipe elements do not generate black frames by themselves when the upstream pipeline goes away; instead they block the downstream pipeline until an upstream pipeline is producing frames. Therefore the TCP sources need to start up with a videotestsrc-based pipeline until a connection is made.
When the connection arrives, the testsrc pipeline can safely be stopped and the tcp-based pipeline can be started. The interpipe element should block the downstream pipeline in the meantime.
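A rough sketch of how this could look, assuming GstInterpipe's listen-to property is used for the switch (the node names, caps and surrounding pipelines are invented for illustration; this is not existing voctocore code):

```python
# Rough sketch of the proposed tcp-source behaviour: a videotestsrc pipeline
# publishes frames under one interpipe node, and the downstream pipeline is
# retargeted between that node and the tcp-based node as the connection comes
# and goes. All names are invented.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Fallback pipeline: always running, publishing under the node name "pause-test".
fallback = Gst.parse_launch(
    'videotestsrc is-live=true pattern=black ! '
    'video/x-raw,width=1280,height=720,framerate=25/1 ! '
    'interpipesink name=pause-test')
fallback.set_state(Gst.State.PLAYING)

# Downstream pipeline: starts out listening to the fallback node.
downstream = Gst.parse_launch(
    'interpipesrc name=pause-in listen-to=pause-test is-live=true format=time '
    'allow-renegotiation=true ! queue ! autovideosink')
pause_in = downstream.get_by_name('pause-in')
downstream.set_state(Gst.State.PLAYING)

def on_tcp_connected(tcp_node_name):
    # The tcp-based pipeline is up and producing frames: retarget the src.
    pause_in.set_property('listen-to', tcp_node_name)

def on_tcp_disconnected():
    # Connection dropped: fall back to the test source. Everything stays on
    # the same clock, so no delay or desync should be introduced.
    pause_in.set_property('listen-to', 'pause-test')
```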
Multi-Audio Routing
The complete multi-audio routing is currently constructed around the limitation of the inter* elements of only supporting one stereo stream. Every input is routed to one stereo stream, and the complete mixing is defined in stereo streams, with one interaudiosink/src pair per stereo stream.
Although this could be replicated with the interpipe elements and would probably also fix the a/v sync issues, a much more streamlined structure could be realized with them, because they intrinsically support an unlimited number of audio streams. In my opinion it would be okay to change the input/output channel layout, but some of the features of the multi-audio routing should be preserved. This requires further discussion and planning.
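As a purely hypothetical sketch of what such a structure could look like (channel count, caps, and node names are invented; this is not a worked-out design):

```python
# Hypothetical sketch: one multi-channel audio stream per source travelling
# over a single interpipe node, instead of one interaudiosink/interaudiosrc
# pair per stereo stream. All names and numbers are made up for illustration.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Source side: publish an 8-channel stream (standard 7.1 layout) under one node.
source_side = Gst.parse_launch(
    'audiotestsrc is-live=true ! '
    'audio/x-raw,format=S16LE,rate=48000,channels=8,channel-mask=(bitmask)0xc3f ! '
    'interpipesink name=cam1-audio')

# Mixing side: consume all 8 channels from that one node.
mixing_side = Gst.parse_launch(
    'interpipesrc listen-to=cam1-audio is-live=true format=time ! '
    'audio/x-raw,format=S16LE,rate=48000,channels=8,channel-mask=(bitmask)0xc3f ! '
    'queue ! fakesink')

source_side.set_state(Gst.State.PLAYING)
mixing_side.set_state(Gst.State.PLAYING)
```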
Subtasks
(It might be worthwhile to create actual issues for these.)