
Implement per-user rate limits #251

Merged
merged 2 commits into from
Jan 31, 2017
Conversation

juliusv
Contributor

@juliusv juliusv commented Jan 30, 2017

This is the simplest version for now: just flag-configured
rate limits per distributor process, with no knowledge of total
(cross-distributor) rates yet.

Fixes #128

@juliusv
Contributor Author

juliusv commented Jan 30, 2017

The ingestLimiters map is technically also a memory leak here, but I expect the number of users to be relatively small and user removals to be infrequent in comparison to distributor restarts (for now). To implement a cleanup loop, we'd need to wrap the rate limiter to tell us when it was last used, because it doesn't provide a good method for that.

@@ -320,6 +338,19 @@ func (d *Distributor) Push(ctx context.Context, req *remote.WriteRequest) (*cort
return &cortex.WriteResponse{}, nil
}

func (d *Distributor) getOrCreateIngestLimiter(userID string) *rate.Limiter {
d.ingestLimitersMtx.Lock()
Contributor Author

Note I'm intentionally just going for a normal mutex here because I figured an RWLock didn't really matter in this path (access once per HTTP request).

Contributor


I'd probably have gone with an RWMutex; I'd expect each ingester to be able to handle ~1 million samples/s, or 10k HTTP requests a second. Right now they're only doing ~100 req/s, so it's not a big deal. We can change later.
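The RWMutex alternative being suggested would take only a read lock on the common path (limiter already exists) and upgrade to a write lock on first use, re-checking after the upgrade because another goroutine may have created the entry in between. A minimal sketch with hypothetical names (a dummy `limiter` type stands in for `*rate.Limiter`):

```go
package main

import (
	"fmt"
	"sync"
)

type limiter struct{} // stand-in for *rate.Limiter

type limiterMap struct {
	mtx sync.RWMutex
	m   map[string]*limiter
}

func (l *limiterMap) getOrCreate(userID string) *limiter {
	// Fast path: shared read lock, sufficient when the limiter exists.
	l.mtx.RLock()
	lim, ok := l.m[userID]
	l.mtx.RUnlock()
	if ok {
		return lim
	}
	// Slow path: exclusive lock for first use of this user.
	l.mtx.Lock()
	defer l.mtx.Unlock()
	// Re-check: another goroutine may have created the entry
	// between our RUnlock and Lock.
	if lim, ok := l.m[userID]; ok {
		return lim
	}
	lim = &limiter{}
	l.m[userID] = lim
	return lim
}

func main() {
	l := &limiterMap{m: map[string]*limiter{}}
	a := l.getOrCreate("user")
	b := l.getOrCreate("user")
	fmt.Println(a == b) // both calls return the same limiter
}
```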

@rade

rade commented Jan 30, 2017

@juliusv
Copy link
Contributor Author

juliusv commented Jan 30, 2017

@rade Ah, didn't know about Weave's implementation. Anyway, this one here seems to be the current Go way to do it (plus the Weave one doesn't have a non-blocking method yet). Could make sense to switch the other one too.
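The non-blocking method referred to here is `Limiter.Allow` in `golang.org/x/time/rate`, which reports whether a request may proceed without ever waiting, so the distributor can reject over-limit requests immediately. A simplified, stdlib-only token-bucket sketch of the same idea (all names hypothetical, not the x/time/rate implementation):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenBucket is a simplified token bucket with a non-blocking
// Allow, in the spirit of rate.Limiter.Allow.
type tokenBucket struct {
	mtx    sync.Mutex
	rate   float64 // tokens added per second
	burst  float64 // maximum tokens
	tokens float64
	last   time.Time
}

func newTokenBucket(ratePerSec, burst float64) *tokenBucket {
	return &tokenBucket{rate: ratePerSec, burst: burst, tokens: burst, last: time.Now()}
}

// Allow consumes one token and returns true if the request is within
// the limit; it never blocks, so the caller can fail the request
// right away (e.g. with HTTP 429) instead of queueing it.
func (b *tokenBucket) Allow() bool {
	b.mtx.Lock()
	defer b.mtx.Unlock()
	now := time.Now()
	// Refill tokens for the time elapsed since the last call.
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens < 1 {
		return false
	}
	b.tokens--
	return true
}

func main() {
	b := newTokenBucket(1, 2) // 1 token/s, burst of 2
	fmt.Println(b.Allow(), b.Allow(), b.Allow()) // true true false
}
```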

@rade

rade commented Jan 30, 2017

Could make sense to switch the other one too.

Indeed. That's why I mentioned it :)

@tomwilkie
Contributor

You've pre-guessed my only comment, so LGTM!

@juliusv juliusv merged commit 631f77c into master Jan 31, 2017
@juliusv juliusv deleted the rate-limits branch January 31, 2017 10:40