
occasional stutter when streaming wirelessly #10

Closed
bmegli opened this issue Feb 6, 2020 · 7 comments

Labels
bug Something isn't working

Comments

bmegli commented Feb 6, 2020

This sometimes happens, sometimes not.

I don't see it when streaming locally (encoding/sending/receiving/decoding on the same machine).

This may be caused by many things, including:

  • out-of-order/lost packets and the naive MLSP implementation
  • a bottleneck in the receiver processing pipeline (currently simple and not threaded)
  • a bottleneck in the sender processing pipeline

A good start would be to check with Ethernet instead of a wireless medium.

bmegli added the bug label Feb 6, 2020

bmegli commented Mar 3, 2020

Today's tests with:

  • textured depth streaming
  • 8 Mb

resulted in:

  • networked through an AP - problem happens
  • networked directly (hotspot on the laptop) - no problem

This suggests the cause is:

  • out-of-order/lost packets and the naive MLSP implementation

bmegli commented Mar 3, 2020

By the stutter I mean here a streaming problem that results in a "redraw" of parts of (or the whole) point cloud, i.e. corrupted data for a while.

bmegli added a commit to bmegli/network-hardware-video-decoder that referenced this issue Mar 28, 2020
- in case of network (MLSP) timeout
- apart from accepting new streaming sequence (MLSP)
- flush (drain) the decoder and prepare it for new stream (HVD)

Related to #1
Possibly related to bmegli/unity-network-hardware-video-decoder#10
Possibly related to bmegli/network-hardware-video-encoder#6

bmegli commented May 5, 2020

After some insight gained while implementing duplicate-packet handling in MLSP: this issue was not caused by duplicate packets.

bmegli commented May 5, 2020

This is the kind of problem we mean in this issue:

[Screenshot from 2020-05-05 20-38-14]

Watching UDP with:

watch cat /proc/net/udp

we see that when this problem happens, the drops counter rises.
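The same check can be done programmatically. Below is a minimal standalone C sketch (not project code) that prints the per-socket drop counter; it assumes the drops value is the last column of each socket line in /proc/net/udp, as on current Linux kernels:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/net/udp", "r");
    if (!f) { perror("fopen /proc/net/udp"); return 1; }

    char line[512];
    fgets(line, sizeof(line), f); /* skip the header row */

    while (fgets(line, sizeof(line), f))
    {
        char local[64] = "?";
        char *last = NULL;
        int col = 0;
        for (char *tok = strtok(line, " \t\n"); tok; tok = strtok(NULL, " \t\n"), ++col)
        {
            if (col == 1) /* local_address column (hex ip:port) */
                snprintf(local, sizeof(local), "%s", tok);
            last = tok; /* the last column is drops */
        }
        if (last)
            printf("%s drops=%s\n", local, last);
    }
    fclose(f);
    return 0;
}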

To investigate whether this is caused by receive buffer overflow we could check:

/proc/sys/net/core/rmem_default
/proc/sys/net/core/rmem_max
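To see what a socket actually gets, we can read the value back with getsockopt. A minimal standalone sketch (not MLSP code); note that Linux reports double the configured value to account for kernel bookkeeping overhead:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);
    /* reported value is 2x the configured one on Linux */
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
        printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);

    close(sock);
    return 0;
}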

bmegli commented May 24, 2020

Overflows are responsible for almost all artifacts of this kind (with good connectivity).

A short (maybe long) term workaround is to increase the OS receive buffer size, e.g.:

# here 10x the default value on my system
sudo sh -c "echo 2129920 > /proc/sys/net/core/rmem_max"
sudo sh -c "echo 2129920 > /proc/sys/net/core/rmem_default"

Making the buffer smaller than the original default makes the problem more evident.
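A per-socket alternative to the sysctl workaround is sketched below; enlarge_rcvbuf is an illustrative helper, not part of MLSP's API. Without CAP_NET_ADMIN the kernel silently caps the request at rmem_max, so raising the sysctl may still be needed:

#include <stdio.h>
#include <sys/socket.h>

/* illustrative helper, not part of MLSP's API */
int enlarge_rcvbuf(int sock, int bytes)
{
    /* silently capped at rmem_max without CAP_NET_ADMIN */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0)
    {
        perror("setsockopt SO_RCVBUF");
        return -1;
    }
    return 0;
}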

In the long term the right solution is to:

  • not unproject depth under mutex (unhvd_native#1)
  • move network receiver code to separate thread (MLSP#11)
  • possibly increase network receiver thread priority (see the sketch below)
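For the last item, a hedged sketch of raising a thread's priority with the POSIX scheduling API; raise_receiver_priority is an illustrative helper, not project code:

#include <pthread.h>
#include <sched.h>

/* SCHED_FIFO needs CAP_SYS_NICE or root */
int raise_receiver_priority(pthread_t thread)
{
    struct sched_param param = { .sched_priority = 10 };
    return pthread_setschedparam(thread, SCHED_FIFO, &param);
}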

bmegli commented May 24, 2020

What also makes this problem more painful is that, by default, P frames depend on previous P frames.

This means that in the worst case losing a single P frame may leave us with corrupted data until the next keyframe is received.
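The corruption window can be bounded on the encoder side by shortening the GOP. A hedged sketch assuming an FFmpeg AVCodecContext (as used in the hardware encoding stack); the values are illustrative, not the project's configuration:

#include <libavcodec/avcodec.h>

static void bound_corruption_window(AVCodecContext *ctx)
{
    /* a lost P frame corrupts output only until the next keyframe,
       so a smaller gop_size caps the worst-case corruption duration */
    ctx->gop_size = 30;    /* keyframe at least every 30 frames */
    ctx->max_b_frames = 0; /* no B frames: low latency, no reordering */
}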

bmegli commented Jun 11, 2020

The depth is already no longer unprojected under the mutex.

Instead of making the network receiver threaded, we use the OS layer and increase the socket buffer size (on the OS side).

The consequence is that we have to keep up with the data rate (MLSP has no logical frame drops). In the long term, not keeping up with the data rate would result in:

  • increasing latency
  • until socket buffer overflow
  • in a manner that is not under our control

But we have to keep up with the data rate anyway.

I am not going to work on this further for now.
The solution is to increase the buffer size.

bmegli closed this as completed Jun 11, 2020