
Add other algorithms to the rate limit processor #38503

Open
romain-chanu opened this issue Mar 21, 2024 · 3 comments
Labels
needs_team Indicates that the issue/PR needs a Team:* label

Comments

@romain-chanu
Contributor

Describe the enhancement:

  1. As per Rate limit processor #22883, the rate limit processor uses the token bucket algorithm.

Given the token bucket implementation, we expect the rate limit processor to conform to the configured rate over a long enough period of time, but not within every single time period (e.g. exactly 12 events per hour, every hour).

  2. Some use cases require an exact number of events per time period. Allowing the rate limit processor to be configured with a different algorithm (e.g. the sliding log rate limit algorithm) could be a solution, at the expense of higher memory usage.

Describe a specific use case for the enhancement or feature: add other algorithms (e.g. the sliding log rate limit algorithm) to the rate limit processor.

@botelastic botelastic bot added the needs_team label Mar 21, 2024
@botelastic

botelastic bot commented Mar 21, 2024

This issue doesn't have a Team:<team> label.

@StephanErb

And ideally, the rate limiter should set a field that downstream processors can react to, instead of dropping the event outright. This would enable additional options, such as dropping stack traces while keeping the message.
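For illustration, a hypothetical configuration along these lines. Note that the `set_field` option and the `rate_limit.exceeded` field are invented for this sketch and do not exist in Beats today; only `drop_fields` with a `when` condition is a real Beats feature:

```yaml
processors:
  # Hypothetical: tag events that exceed the limit instead of dropping them.
  - rate_limit:
      limit: "12/h"
      set_field: "rate_limit.exceeded"   # invented option, not a real setting
  # React downstream: keep the message but drop the heavy stack trace.
  - drop_fields:
      when:
        equals:
          rate_limit.exceeded: true
      fields: ["error.stack_trace"]
```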

@ycombinator
Contributor

> And ideally the rate limiter should create a field that can be reacted on in downstream processors instead of dropping the message outright. This enables additional opportunities such as dropping stack traces but keeping the message.

@StephanErb Would you mind making a separate issue for this enhancement?
