Add a simple throttler/limiter #91
I'd be interested in implementing this. And just to clarify: this is a simple reverse proxy, so distributed rate limiting is not something we're considering right now, is that right? It could be done with something like a UDP broadcast to share state between proxy hosts, but the per-client state would still be a problem (data size, serialization/deserialization cost, etc.).
I thought about leveraging https://github.com/didip/tollbooth, which is a relatively thin wrapper on top of x/time/rate, decently tested and battle-proven.
As far as I recall, tollbooth does all of this.
Yep.
Yes, you're right. It seems to be a wrapper with additional bells and whistles over x/time/rate.
@fkirill I had this basic implementation for a while, just never pushed it. Please take a look. I have tried to keep it as simple as possible from the user's point of view: no fancy params, just a couple of limiter values. Internally it uses the tollbooth limiter with LimitByKeys. I also borrowed your idea of avoiding repeated code for passThroughHandler.
No worries, thanks!
Hey Umputun, a quick question. I was reading the code you mentioned and found some hints around limiting concurrent requests. So should throttling limit concurrent in-flight requests or admitted requests per second? It's not a mutually exclusive choice; my recommendation is to start with throughput limiting (option 2 below). A little bit of theory: there are two schools of thought around throttling:

1. limiting the number of concurrent in-flight requests, and
2. limiting the number of admitted requests per second (throughput).

There are pros and cons to both approaches.
This is a good question. Initially, I planned to limit in-flight requests, at least on the global (server) level, and even added such a middleware. However, for per-client limiters this feels very unusual and unexpected, so I have opted for the more traditional req/sec. Providing two distinct throttlers would be very confusing for the user, so I have switched both global and client-level throttling to req/sec via tollbooth.

I don't see how limiting concurrent requests makes much sense except as the overall limiter and, maybe, at the destination server level. Limiting in-flight requests for each client seems almost useless, and as a user I wouldn't even know what value to set here. So I'm planning to keep req/sec only for now, as this seems to be the universally accepted approach and the user can limit the server's load indirectly. We may consider adding concurrency limiters if needed, but I would prefer to keep it simple and not confuse users with two different methods of throttling.

I would appreciate it if you could review the PR. To me it seems to be almost trivial and, from reading the sources of tollbooth,
Hey Umputun, I approved with a couple of (nuanced) comments. |
This can be useful and won't add much complexity. The throttler should be able to limit the total number of concurrent requests per reproxy server as well as per proxy destination. I'm not sure if limiting per destination server (virtual host) is needed.
It should also be able to limit req/sec per remote client for the same areas (total and per destination).