Fix redis TTL to match limit reset time #351

Open

zoll opened this issue Jul 27, 2022 · 1 comment

zoll commented Jul 27, 2022

Currently the Redis TTL is set to the maximum value even if the request is made at the end of the interval. This can be confusing and misleading to API users.

I suggest a small change to the IncrementAsync method to calculate how much time is left until the next interval.

public async Task<RateLimitCounter> IncrementAsync(string counterId, TimeSpan interval, Func<double>? RateIncrementer = null)
{
    var now = DateTime.UtcNow;

    // Align the window to absolute interval boundaries and work out how many
    // seconds are left until the current interval ends.
    var numberOfIntervals = now.Ticks / interval.Ticks;
    var intervalStart = new DateTime(numberOfIntervals * interval.Ticks, DateTimeKind.Utc);
    var secondsToIntervalEnd = Convert.ToInt32(interval.TotalSeconds - (now - intervalStart).TotalSeconds);

    // Use the remaining seconds as the key's TTL instead of the full interval.
    var delta = RateIncrementer?.Invoke() ?? 1D;
    _logger.LogDebug("Calling Lua script. {counterId}, {timeout}, {delta}", counterId, secondsToIntervalEnd, delta);
    var count = await _connectionMultiplexer.GetDatabase().ScriptEvaluateAsync(_atomicIncrement,
        new { key = new RedisKey(counterId), timeout = secondsToIntervalEnd, delta });

    return new RateLimitCounter
    {
        Count = (double)count,
        Timestamp = intervalStart
    };
}
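
To make the arithmetic concrete, here is a small standalone sketch of the same fixed-window calculation (illustrative values only, not part of the proposed change):

// Illustrative only: the same interval arithmetic as above, run at a fixed time.
var interval = TimeSpan.FromMinutes(1);
var now = new DateTime(2022, 7, 27, 12, 0, 45, DateTimeKind.Utc); // request arrives 45s into the window

var numberOfIntervals = now.Ticks / interval.Ticks;
var intervalStart = new DateTime(numberOfIntervals * interval.Ticks, DateTimeKind.Utc); // 12:00:00
var secondsToIntervalEnd = Convert.ToInt32(interval.TotalSeconds - (now - intervalStart).TotalSeconds); // 15

// The key now expires together with the window, so the Redis TTL and the
// advertised limit reset time agree.

Because the window is aligned to absolute tick boundaries, every counter created within the same minute shares the same reset time.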

emilbm commented Oct 2, 2024

UPDATE: I've created a PR for this fix (#490)

This is actually still somewhat buggy: you fix the TTL in Redis, but the retry-after headers will still be wrong.

Here is the basic change that fixes it:

First, adjust the Lua script so that it returns the current TTL as well as the counter value.

private static readonly LuaScript _atomicIncrement = LuaScript.Prepare(@"
    local count = redis.call(""INCRBYFLOAT"", @key, tonumber(@delta))
    local ttl = redis.call(""TTL"", @key)
    if ttl == -1 then
        redis.call(""EXPIRE"", @key, @timeout)
    end
    return { 'count', count, 'ttl', ttl }");
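
For reference, a rough sketch of how the script's alternating name/value reply reads from the C# side (the key name and call sites are illustrative, not part of the PR):

var db = _connectionMultiplexer.GetDatabase();

// First call: the key did not exist, so the script saw TTL == -1 and set EXPIRE.
var first = (await db.ScriptEvaluateAsync(_atomicIncrement,
    new { key = new RedisKey("client-1"), timeout = 60, delta = 1D })).ToDictionary();
// (double)first["count"] == 1, (int)first["ttl"] == -1

// Subsequent call: TTL now reports the remaining lifetime in seconds.
var second = (await db.ScriptEvaluateAsync(_atomicIncrement,
    new { key = new RedisKey("client-1"), timeout = 60, delta = 1D })).ToDictionary();
// (double)second["count"] == 2, (int)second["ttl"] == 59 if called one second later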

Then, adjust IncrementAsync as follows:

public async Task<RateLimitCounter> IncrementAsync(string counterId, TimeSpan interval, Func<double> RateIncrementer = null)
{
    var cacheStart = DateTime.UtcNow;
    var cached = await _connectionMultiplexer.GetDatabase().ScriptEvaluateAsync(_atomicIncrement,
        new { key = new RedisKey(counterId), timeout = interval.TotalSeconds, delta = RateIncrementer?.Invoke() ?? 1D });
    var responseDict = cached.ToDictionary();

    // The script reads TTL before setting EXPIRE, so -1 means the key was just
    // created by this call and "now" is the true window start.
    var ttlSeconds = (int)responseDict["ttl"];
    if (ttlSeconds != -1)
        cacheStart = cacheStart.Add(-interval).AddSeconds(ttlSeconds); // Subtract the full interval, then add back the seconds still left to live.

    var count = (double)responseDict["count"];
    return new RateLimitCounter
    {
        Count = count,
        Timestamp = cacheStart
    };
}

Since we don't have an actual 'first call' time, we take the current time, subtract the interval, and add the TTL. That reconstructs a first-cache time within a second of the actual one, which should be accurate enough for rate limiting.

The timeout variable for the Lua script is only used when no entry exists yet, so we can always set it to interval.TotalSeconds.
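
As a worked example of that arithmetic (illustrative numbers, and assuming retry-after is derived from Timestamp plus the interval):

// Illustrative numbers only: reconstructing the window start from the TTL.
var interval = TimeSpan.FromSeconds(60);
var now = DateTime.UtcNow;
var ttlSeconds = 42; // as reported by the script

// now - 60s + 42s: the counter key was created roughly 18 seconds ago.
var timestamp = now.Add(-interval).AddSeconds(ttlSeconds);

// A retry-after computed from this timestamp now agrees with the actual key
// expiry to within a second: timestamp + interval - now ≈ 42 seconds.
var retryAfter = timestamp + interval - now;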

Hope that helps.
