Fixes expiry calculation of cache keys #560
Closed
Hi!
Big PR, but not to worry: it's not that complicated.
It aims to fix #480 and addresses #380.
The initial goal was to fix the throttle limit reaching up to 2n (see the detailed explanation from @peter-roland-toth). But development ended up dropping proxies in favor of adapters, because the whole change only makes real sense in its current form.
Hence this PR touches two problems at once.
This is a different design approach, and like every design it's a trade-off: in this case, accuracy and performance vs. complexity.
I think the best way to explore it is to walk through the concerns I had while implementing it.
From a top-level perspective, RackAttack currently calculates the TTL of a cache key based on the given period and instructs the underlying cache server to update that key's expiry on every increment attempt.
But why bother doing the job those servers are designed for? What if we set the expiration once, when the key is first created, and let the server expire it on its own?
The application I'm trying to solve this problem for uses Redis. Something like this should do the job.
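A minimal sketch of the idea, assuming a Redis client from the `redis` gem (the method and key names here are illustrative, not the PR's actual code):

```ruby
# Illustrative sketch (not the PR's actual code): `redis` is any client
# responding to `incr` and `expire`, e.g. an instance of the `redis`
# gem's Redis class. The TTL is set only by the request that creates
# the key, so the server owns expiry; later increments never touch it.
def track_request(redis, key, period)
  count = redis.incr(key)                  # atomic; a new key starts at 1
  redis.expire(key, period) if count == 1  # only the creator sets expiry
  count
end
```

With a real client this would be called as something like `track_request(Redis.new, "throttle:#{ip}", 60)`; Redis's `INCR` does not alter an existing key's TTL, so the expiry set at creation stands for the key's whole lifetime.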
More extreme conditions.
Working. The next step is to integrate it with RackAttack in a Rails environment.
Obviously I can't call `incr` or `expire` on `ActiveSupport::Cache::RedisCacheStore`, but I definitely can take the Redis object from `RedisCacheStore` and use those methods. There were a few more places in the application where this `increment` was needed. Plus, the app used only throttle rules from RackAttack. A couple more tunings to work in development, when `Rails.cache` is not using Redis, and here we have it: this dead simple interface.

This setup has been working successfully in production for several months.
OK, the pattern is clear. But this is a simple, opinionated case; dropping `RedisCacheStore` can have consequences for others. Well, let's see what it does, using ActiveSupport 6.1 as an example.

https://github.com/rails/rails/blob/v6.1.4.4/activesupport/lib/active_support/cache/redis_cache_store.rb#L173-L181

Here `raw` is `true`, and RackAttack uses it with `raw: true`.
There is more: the `error_handler`. This could be a good candidate for a RackAttack feature.
With this said, let's assume that `RedisCacheStore`, while a useful tool in general, has little value for RackAttack, and move on to the performance area.

Again, exploring `RedisCacheStore`. This is the `increment` of the current `RedisCacheStoreProxy`.

What does the `super` do? (the significant part)
Finally, the `write_key_expiry`.

Notice how the execution path goes `read` - `incrby` - `ttl` during the entire lifetime of this key, except for the first create request. But `read` and `ttl` are not necessary.

Benchmarks
This is `Rack::Attack::StoreProxy::ActiveSupportRedisStoreProxy` as a proxy vs `Rack::Attack::Adapters::RedisAdapter` as an adapter. See the benchmark script here.

Results vary from

to

The `ActiveSupportRedisStoreProxy` implementation is at least 4 times slower than `RedisAdapter`.

In my understanding, `RedisCacheStore` is not suitable for use in RackAttack, and neither are the other wrappers around cache server clients, for that matter. They are designed as general-purpose tools; putting another layer over them only lengthens the code execution paths. But I'd like to hear other opinions.
Memcached
Things are much simpler here.
It turns out Memcached completely ignores the TTL during subsequent `incr` calls, so it works smoothly with this new approach.

The Structure
All adapters implement the `read`, `write`, `increment`, and `delete` methods. Store handlers decide which adapter to instantiate.

There is one special case.
`ActiveSupport::Cache::MemoryStore#increment` doesn't actually increment the existing cache value: it reads the entry and writes a new one with the incremented value and the given TTL, which doesn't work with this approach.
The `ActiveSupportMemoryStoreAdapter` implementation assumes that no one uses MemoryStore in production.
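To make the four-method surface concrete, here is a rough sketch of what a Redis-backed adapter could look like (everything besides the `read`/`write`/`increment`/`delete` names is illustrative, not the PR's actual code):

```ruby
# Illustrative adapter sketch: wraps a Redis-like client behind the four
# methods all adapters implement. Expiry is set once, on key creation.
class SketchRedisAdapter
  def initialize(client)
    # anything responding to get/set/setex/incrby/expire/del
    @client = client
  end

  def read(key)
    @client.get(key)
  end

  def write(key, value, expires_in: nil)
    if expires_in
      @client.setex(key, expires_in, value)
    else
      @client.set(key, value)
    end
  end

  def increment(key, amount = 1, expires_in: nil)
    count = @client.incrby(key, amount)
    # Assuming a constant per-key amount, the key was just created when
    # the counter equals the amount; only then set the TTL, letting the
    # server expire the key on its own.
    @client.expire(key, expires_in) if expires_in && count == amount
    count
  end

  def delete(key)
    @client.del(key)
  end
end
```

A store handler would then pick such an adapter based on what backs `Rack::Attack.cache.store`, instead of proxying every call through a general-purpose cache wrapper.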