webrtc: investigate resource usage #1917
Comments
@ckousik: can we link to the benchmarks that were run?
@ckousik is picking this up today.
Just curious what the status is here.
I don't think it's all that different from @ddimaria's update.
Can @ckousik provide an update here? @marten-seemann, in order to have the output match up with other transport metrics you've collected in the past, can you point @ckousik to that output?
What are the done criteria here? @ckousik: what did you have in mind?
@ckousik: thanks for sharing. We're ultimately going to need @marten-seemann to weigh in. A couple of things:
Thanks @ckousik. For 3, the key thing is to enable reproducibility. The reasons are twofold:
Sorry, I had misunderstood the notebook as a DataDog notebook. The test code is present here: https://github.com/little-bear-labs/libp2p-webrtc-bench
Looks like this is now included in the PR itself, right? Maybe you can archive this repo?
Was this workaround in the go-libp2p PR (I couldn't find a reference to pion/webrtc#2258 in a comment) or elsewhere? Can you link to it?
@p-shahi We manually assign stream IDs here: https://github.com/libp2p/go-libp2p/pull/1999/files#diff-f3e8c67f01e1cd4597f5d58558db1e0e28f21be14b640d8e31282eb9580476aaR310-R320. I'll add a comment linking to pion/webrtc#2258.
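For readers following along: the mechanism pion exposes for this is pre-negotiated data channels, where the application agrees on SCTP stream IDs out of band and passes them in `DataChannelInit`. A minimal sketch of that mechanism (the helper name and the ID-allocation scheme are illustrative, not the actual go-libp2p code):

```go
package main

import (
	"fmt"

	"github.com/pion/webrtc/v3"
)

// openNegotiatedChannel pre-assigns the SCTP stream ID instead of letting
// pion allocate one, sidestepping the ID-assignment issue tracked in
// pion/webrtc#2258. Both peers must agree on the ID out of band.
func openNegotiatedChannel(pc *webrtc.PeerConnection, id uint16) (*webrtc.DataChannel, error) {
	negotiated := true
	return pc.CreateDataChannel(fmt.Sprintf("stream-%d", id), &webrtc.DataChannelInit{
		Negotiated: &negotiated, // skip the in-band DCEP handshake
		ID:         &id,         // manually chosen stream ID
	})
}
```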
The resource usage investigation is being done as part of PR #1999.
Status update: @GlenDC to provide a Pion bottlenecks report sometime in the next two weeks.
We have one fix in: pion/mdns#172
Tracking issue for 1: pion/webrtc#2672
The PR in webrtc decreases memory usage from 31 MB to 17 MB when running 10 connections and echoing 1 MB over 100 different streams (1 GB of data transferred in total). The rest of the memory use is fixed-size allocations that would take more effort to reduce (1 MB buffers in SCTP and 1 MB buffers for reads from ICE connections). The benchmarks are in branch webrtc-bm-2.
Setup: two EC2 instances, c5.xlarge (4 cores, 8 GB RAM), in us-east-1 and us-west-2. Ping time is 60 ms. BDP is 40 MB assuming a 5 Gbps link.
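(For reference, that BDP figure is just bandwidth × RTT: 5 Gbps × 60 ms = 300 Mbit ≈ 37.5 MB, rounded up to 40 MB.)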
I quite like the idea of a 100 kB receive buffer. The performance seems acceptable for now, and the peer can enqueue 10x less on the SCTP layer.
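For context, pion lets you cap the SCTP association's receive buffer through its `SettingEngine`. A minimal sketch of what such a cap could look like (assuming pion/webrtc v3; not the actual go-libp2p transport code):

```go
package main

import "github.com/pion/webrtc/v3"

func newCappedPeerConnection() (*webrtc.PeerConnection, error) {
	se := webrtc.SettingEngine{}
	// Cap the SCTP association's receive buffer at 100 kB. The buffer is
	// shared by all streams on the connection, so this bounds the memory
	// a remote peer can make us hold per connection.
	se.SetSCTPMaxReceiveBufferSize(100 * 1024)

	api := webrtc.NewAPI(webrtc.WithSettingEngine(se))
	return api.NewPeerConnection(webrtc.Configuration{})
}
```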
Does this mean we can limit CPU usage by using `maxBufferedAmount`?
Applications can limit their throughput if they want to limit CPU usage. For now we aren't going to expose `maxBufferedAmount` as an option.
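For anyone wondering what `maxBufferedAmount` refers to: it's the send-side backpressure cap in pion's flow-control pattern, which keeps the amount of data queued in the SCTP layer bounded. A sketch of the idea (the 100 kB constant mirrors the receive-buffer cap discussed above and is illustrative):

```go
package main

import "github.com/pion/webrtc/v3"

// sendWithBackpressure writes chunks to a data channel while keeping the
// data queued in the SCTP layer below maxBufferedAmount, following the
// standard pion flow-control pattern.
func sendWithBackpressure(dc *webrtc.DataChannel, chunks [][]byte) error {
	const maxBufferedAmount = 100 * 1024

	drained := make(chan struct{}, 1)
	dc.SetBufferedAmountLowThreshold(maxBufferedAmount / 2)
	dc.OnBufferedAmountLow(func() {
		select {
		case drained <- struct{}{}:
		default:
		}
	})

	for _, chunk := range chunks {
		if dc.BufferedAmount() > maxBufferedAmount {
			<-drained // wait until the queued data falls below the threshold
		}
		if err := dc.Send(chunk); err != nil {
			return err
		}
	}
	return nil
}
```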
The source code for @sukunrt's test is here: https://github.com/libp2p/go-libp2p/tree/webrtc-echobm/examples/echobm
I agree that this one seems like a good option. It is fast enough while using little CPU. If use cases appear that need to optimize past 12 Mb/s on WebRTC connections, we can expose an option to tune the receive buffer and the maxBufferedAmount. But defaulting to conservative resource usage seems better to me.
I think we can close this issue, as @sukunrt's report and code are enough.
@SgtPooki the better way to do that is to limit the SCTP receive buffer to 100 kB, as this affects all streams on a connection. Apologies for the poor naming; I used the term we use in code.
This should be done through the resource manager; don't hardcode values, please.
That's a much larger change, since the underlying SCTP connection's receive buffer is a fixed-size buffer and doesn't support window updates.
@ckousik reports that running WebRTC is expensive, and a significant amount of CPU can be spent on handling a comparatively small number of WebRTC connections.
We should investigate how expensive it actually is. Our current DoS mitigation assumes that handling connections doesn't come with a large overhead in terms of CPU, and that the main cost is imposed by handling streams, in particular by the memory consumed on streams. This might not hold true for WebRTC.
If that's the case, we'll need to impose strict limits on the number of concurrent WebRTC connections that a node is willing to handle.
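If strict limits do turn out to be necessary, one plausible shape for them is the resource manager's connection limits. A rough sketch under the current rcmgr API (the numbers are placeholders, not recommendations):

```go
package main

import (
	"github.com/libp2p/go-libp2p"
	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

func main() {
	// Start from the default scaling limits and tighten the connection
	// caps; the values here are placeholders for illustration only.
	limits := rcmgr.DefaultLimits
	limits.SystemBaseLimit.Conns = 64
	limits.SystemBaseLimit.ConnsInbound = 32

	rm, err := rcmgr.NewResourceManager(rcmgr.NewFixedLimiter(limits.AutoScale()))
	if err != nil {
		panic(err)
	}

	h, err := libp2p.New(libp2p.ResourceManager(rm))
	if err != nil {
		panic(err)
	}
	defer h.Close()
}
```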