Add networking req/resp request rate limiting rule #2690
Conversation
There's a duality here that we should probably nuance a bit: when you're a client, you should make no more than N concurrent requests whereas as a server, you should be expected to be able to handle / queue at least N requests concurrently.
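To make that duality concrete, here is a minimal sketch, assuming an asyncio-based client and a hypothetical `send_fn` transport call (both names are illustrative, not from the spec): the client caps its own in-flight requests at N, while the server is expected to tolerate at least N before penalizing.

```python
import asyncio

N = 2  # proposed per-peer concurrent-request limit

# Client side: a semaphore guarantees no more than N requests are in flight.
client_slots = asyncio.Semaphore(N)

async def send_request(send_fn, request):
    """Wrap an outbound req/resp call so this client never exceeds N."""
    async with client_slots:
        return await send_fn(request)

# Server side, the dual expectation: tolerate at least N concurrent
# requests from a single peer before considering it abusive.
def is_within_honest_limit(in_flight_from_peer: int) -> bool:
    return in_flight_from_peer <= N
```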
Right. ... as any limit …
well, not quite: client …
2 seems low and might be problematic when trying to sync in smaller networks. We use a cost-based rate limiter, which rate-limits by the amount of data requested (as opposed to the number of concurrent requests). If a server can serve more than 2 concurrent requests, shouldn't we allow it to? This is useful when trying to fetch a large range of blocks and we have very few peers to request from. Enforcing limits from the requester's side rather than the server's seems excessive here.
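For context, a cost-based rate limiter of the kind described above can be sketched as a token bucket whose cost is the amount of data requested; the class name and the numbers below are illustrative assumptions, not taken from any client.

```python
import time

class CostRateLimiter:
    """Token bucket where the cost of a request is the amount of data it
    asks for (e.g. a block count), not the number of concurrent requests.
    Name and numbers are illustrative, not from any client."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity          # maximum burst budget
        self.tokens = capacity            # current budget
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self, cost: float) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if cost <= self.tokens:
            self.tokens -= cost
            return True
        return False

# e.g. a budget of 512 blocks, refilling 64 blocks' worth per second
limiter = CostRateLimiter(capacity=512, refill_per_sec=64)
assert limiter.allow(cost=64)  # a BlocksByRange request for 64 blocks passes
```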
Right! As @arnetheduck said |
Copying some more thoughts from the Discord channel:
…fast nodes to serve many requests in parallel
Changed the limit from …
@Nashatyrev Thanks for leading this work, but does a MAXIMUM_CONCURRENT_REQUESTS param communicate an expectation of rate-limiting requirements? I imagined some rough numbers denoted as request count per time unit.
A request rate limit set in stone (in the spec) would strictly restrict fast nodes from serving requests faster even if they are able to do so. While …
BTW, just thought that with a large …
I suspect this would depend on the implementation - in libp2p, each request type is essentially a separate protocol, so it's usually easier to deal with things on a per-request-type basis. I view this PR mainly as imposing a never-to-normally-be-reached limit / sanity check, so that servers can make some assumptions about honest clients so as to filter out the easiest / worst offenders. Indeed, having fixed rate limits will likely backfire - the server has all it needs to make this decision on its own based on local conditions.
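As a hedged illustration of that sanity-check view (all names below are hypothetical, not from any client), a server could track per-peer in-flight requests and penalize only peers that clearly exceed what an honest client would send; the caller would downscore or disconnect when `on_request_start` returns False.

```python
from collections import defaultdict

MAXIMUM_CONCURRENT_REQUESTS = 32  # the value the PR settled on

# Hypothetical per-peer bookkeeping: the limit is a sanity check that honest
# clients should never hit, so only clear offenders get penalized.
in_flight = defaultdict(int)

def on_request_start(peer_id: str) -> bool:
    """Returns False when the peer has exceeded the honest-client limit."""
    in_flight[peer_id] += 1
    return in_flight[peer_id] <= MAXIMUM_CONCURRENT_REQUESTS

def on_request_end(peer_id: str) -> None:
    in_flight[peer_id] -= 1
```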
Closing this PR in favor of #3767 |
The networking spec is still missing any request rate limit rules. This leads to a situation where different clients make their own assumptions about 'reasonable' rate limits and may disconnect or downscore peers whose assumptions differ.
This PR introduces rate limiting similar to RLPx, where the requester must not send another request until the previous one has been responded to. However, it would be good to relax this rule and allow sending a maximum of 2 concurrent requests to compensate for network round-trip latency (see the throughput sketch below).

UPD: changed the maximum from 2 to 32 to give space for fast nodes to serve many requests in parallel
UPD: RLPx actually supports parallel requests (though without req/resp IDs), so I'm not sure how rate limiting actually works there
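A back-of-the-envelope throughput sketch (numbers here are illustrative assumptions, not from the spec) shows why pipelining matters: with one request at a time, throughput is bounded by the round trip, while N in-flight requests multiply it, up to the server's own processing bound.

```python
# Illustrative numbers, not from the spec
rtt = 0.2      # assumed network round-trip time, seconds
serve = 0.05   # assumed server processing time per request, seconds

sequential_rps = 1 / (rtt + serve)  # strict one-at-a-time rule: 4 req/s
print(f"limit 1:  {sequential_rps:.0f} req/s")

for n in (2, 32):
    # n pipelined requests hide the round trip, up to the server's own bound
    pipelined_rps = min(n / (rtt + serve), 1 / serve)
    print(f"limit {n}: {pipelined_rps:.0f} req/s")
```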
Below is a bit more background from the Discord channel conversation:
CCing @dapplion @arnetheduck