Performance issues with large timeshift depths #405
Thanks for the report.
Every segment specified in the new manifest is parsed into a new object on every update. We'll discuss internally what we can do here.
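To make the cost concrete, here is a minimal sketch (not Shaka's actual code) of why re-parsing every segment on each manifest update scales with the DVR window; all names in it are hypothetical.

```js
// Parse one timeline entry ({t: startTime, d: duration, uri}) into a
// plain reference object. Hypothetical helper, for illustration only.
function parseSegmentElement(el) {
  return {start: el.t, end: el.t + el.d, uri: el.uri};
}

function onManifestUpdate(segmentElements) {
  // With a 24-hour window and ~2 s segments this map() runs ~43,000 times
  // on *every* manifest update, allocating a fresh object per segment even
  // though almost all of them were already parsed on the previous update.
  return segmentElements.map(parseSegmentElement);
}
```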
Why is this not a bug? The stream is stuttering in Chrome but not in Firefox.
If it's stuttering in Chrome and not Firefox, then one browser is executing the code faster or with less resource contention. Firefox being faster does not equal a bug in Shaka, in my opinion. The merging code is semantically correct as far as I can tell, but we could/should make the library more efficient if we can. This is an enhancement, in my opinion. The issue is currently tagged with "enhancement", which I believe is the best label for a potential performance improvement. We have not investigated yet to determine what the cause is, but we'll try to get this fix into v2.0's final release. Since the code in question is completely new in v2 beta, it shouldn't affect v1.6.x, which is still the most stable release for now. Sorry for the confusion over labels, and we'll keep you posted as we make progress.
Hi Joe,
Instead of filling the URI templates when parsing the manifest, wait until the request is made to fill them. This reduces the time it takes to parse the manifest. Tested using a stream with a 24-hour timeShiftBufferDepth on a Chromebook Pixel running Chrome 51: the average manifest parse time was about 1 second before and is about 200 ms now. Issue #405 Change-Id: I89f36085441f6c6b7d6281b24b671dc668f23fe5
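For readers following along, here is a hedged sketch of the deferred-fill idea described in the commit message above; the names (fillUriTemplate, createLazyReference, etc.) are hypothetical and not Shaka's real API.

```js
// Expand a DASH $Number$ / $Time$ style URI template.
function fillUriTemplate(template, number, time) {
  return template
      .replace('$Number$', String(number))
      .replace('$Time$', String(time));
}

// Eager approach: expand the URI for every segment while parsing the
// manifest. With tens of thousands of segments this dominates parse time.
function createEagerReference(template, number, time) {
  return {start: time, uri: fillUriTemplate(template, number, time)};
}

// Deferred approach: keep the template and its inputs, and expand the URI
// only when the segment is actually about to be requested.
function createLazyReference(template, number, time) {
  return {start: time, getUri: () => fillUriTemplate(template, number, time)};
}
```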
I just added a change to optimize the parser, which increased performance in our tests. Can you try again from master and see if this fixes it?
Thanks, this fix seems to have a big impact. Without it, I had CPU spikes at 100% usage for a second or two; applying the fix eliminates them.
Thanks. Would it be possible to backport this optimization to version 1.6.x? We are using v1.6.4 and have the same performance issue with large timeshift depths.
@janouskovec, we are not actively enhancing v1.6, but we will investigate and see how much trouble it would be. If it's not too complex, we will consider reimplementing it in v1. Follow along at #449.
@janouskovec I had to create a fix for our fork of 1.6.5. Please have a look at this commit to see if it helps: It's not very elegant or code-style-compliant, but I hope it can serve as a basis for your own patch.
@torerikal, if you or @janouskovec would be interested in cleaning this patch up and submitting a PR, that would be welcome. Let's move discussion to #449, though, so this original issue can rest in peace. :-)
We have been testing Shaka Player with live streams that have various, rather large DVR windows. We notice significantly increased CPU load with 1-4 hour window lengths. With a 24-hour window length, Chrome periodically becomes unresponsive.
It looks like the CPU load increases proportionally with the DVR window length. Our testing is based on some private DASH streams, and unfortunately I can't share a manifest URL.
A 24-hour DVR window length will not occur in real use cases. Still, we are concerned about the increased CPU load for live streams with 1 to 4 hour windows when playback is attempted on medium or low-end hardware, and in some special use cases with other concurrent CPU load. We are also looking into a workaround that might exacerbate the performance problem.
Some initial CPU profiling indicates that most CPU time is spent updating the manifest. If I were to guess, maybe the time codes for every segment in the timeline are re-computed every time the manifest is updated? If so, do you see a workaround or strategy that could be applied here?
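One possible mitigation implied by that guess, sketched here purely as an illustration (this is not Shaka's actual merge logic, and all names are hypothetical): keep already-computed references keyed by start time, so only segments that appeared since the last update pay the parsing cost.

```js
const knownReferences = new Map();  // start time -> reference object

function mergeUpdate(newSegmentElements) {
  const merged = [];
  for (const el of newSegmentElements) {
    let ref = knownReferences.get(el.t);
    if (!ref) {
      // Only genuinely new segments are parsed; existing ones are reused.
      ref = {start: el.t, end: el.t + el.d, uri: el.uri};
      knownReferences.set(el.t, ref);
    }
    merged.push(ref);
  }
  return merged;
}
```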