The "wait-until" rate-limiting strategy #205
Hvass-Labs started this conversation in Ideas
Replies: 2 comments
-
Thanks @Hvass-Labs for sharing this. I too won't have the bandwidth to investigate and integrate it; it would take quite some effort to integrate it across all storage backends and both the sync and async implementations. Instead of just forgetting about it in an issue, though, I would propose that you open up a thread in Discussions.
-
OK. As the admin, I think you can just move the entire thread to Discussions? I don't think I can do it.
-
Hello,
This may be a new kind of rate-limiting algorithm that you might be interested in using, as it may solve some of the problems with e.g. the "sliding-window" rate-limiter.
I call it the "wait-until" rate-limiter. It only requires a single timestamp to be updated with an atomic operation. It resembles a "leaky-bucket" algorithm in that it ensures a steady flow, but it doesn't require a queue in its implementation (the queueing is handled by the multi-threaded OS sending the hits). It can also be made to allow bursts by simply not sleeping the CPU, and only raising an error once too many hits have accumulated.
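In case it helps, the core idea can be sketched in a few lines of Python. This is only a simplified illustration of the description above, not the tested code from the notebook; the class name, the `max_delay` parameter and the lock (which merely stands in for the single atomic timestamp update) are just for illustration:

```python
import threading
import time


class WaitUntilLimiter:
    """Sketch of the "wait-until" idea: one shared timestamp, no queue."""

    def __init__(self, hits_per_second, max_delay=10.0):
        # Minimum spacing between hits for the desired steady rate.
        self.interval = 1.0 / hits_per_second
        # How far into the future a hit may be scheduled before we raise
        # an error instead of waiting (bounds the accumulated hits).
        self.max_delay = max_delay
        # The single timestamp: the earliest time the next hit may run.
        self.wait_until = time.monotonic()
        # A lock emulates the atomic update; a shared backend could do
        # the same with one atomic operation.
        self._lock = threading.Lock()

    def hit(self, sleep=True):
        now = time.monotonic()
        with self._lock:
            # Reserve the next slot by pushing the timestamp forward.
            start = max(self.wait_until, now)
            if start - now > self.max_delay:
                raise RuntimeError("Rate limit exceeded.")
            self.wait_until = start + self.interval
        # Sleeping outside the lock spreads the hits into a steady flow;
        # with sleep=False the limiter allows bursts and only errors out
        # once too many hits have accumulated.
        if sleep and start > now:
            time.sleep(start - now)


# Example: at most 5 hits/second, spread evenly.
limiter = WaitUntilLimiter(hits_per_second=5)
for _ in range(10):
    limiter.hit()
```

The notebook linked below has the full write-up and the actual implementation.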
I'm not an expert in rate-limiting algorithms, so it is possible it already exists under some strange name. I just needed an algorithm with these features, and it took me a while to get it working correctly, so I thought it might help others in the future.
It is described here:
https://github.com/Hvass-Labs/Code-Recipes/blob/main/Rate-Limiter.ipynb
Feel free to use it if you want, but please provide a link to the original implementation. Unfortunately I don't have time to make the PR for you. If you're not interested, you can just close this issue.