Why
lru-cache v7.x is optimized for non-TTL caching (isaacs/node-lru-cache#208). Instead, it is optimized for GC minimization, which it achieves by pre-allocating all of its internal data structures at construction time: https://github.com/isaacs/node-lru-cache/blob/main/index.js#L119. For our use case we know we'll be using a short TTL, so memory and GC pressure aren't a concern; in fact, pre-allocating all the data structures up front drastically slows down our server (especially our test suite).
The old implementation is better aligned with our use case since it doesn't do any allocation ahead of time. v6.0.0 implementation: https://github.com/isaacs/node-lru-cache/blame/dcd2384fd3321d86dc6ebb3cee3e9601727d1555/index.js
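To illustrate why lazy allocation suffices under a short TTL, here is a minimal, self-contained sketch of a TTL LRU cache that only allocates entries on demand. This is not the lru-cache API; the class and method names are hypothetical.

```javascript
// Hypothetical sketch of a TTL LRU cache with no up-front allocation.
// A Map preserves insertion order, which we use as the recency order.
class TtlLruCache {
  constructor(max, ttlMs) {
    this.max = max;       // maximum number of entries
    this.ttlMs = ttlMs;   // time-to-live per entry, in milliseconds
    this.map = new Map(); // entries are allocated lazily, one per set()
  }

  get(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      // lazily expire stale entries on read
      this.map.delete(key);
      return undefined;
    }
    // refresh recency: move the key to the back of the Map
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, { value, expires: Date.now() + this.ttlMs });
    if (this.map.size > this.max) {
      // evict the least-recently-used entry (first key in the Map)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

Nothing is allocated until a key is actually inserted, which is the property the v6.0.0 implementation shares and v7.x gives up.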
Other (current) libraries were investigated as alternatives but seemed a bit heavy.
What we really need is a TTL LRU cache, and v6.0.0 of lru-cache provides that adequately.
How
Reverse course and pin lru-cache at 6.0.0, and install the matching type definitions.
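The pin might look like the following package.json fragment. The @types/lru-cache version range is an assumption for illustration; check the version that matches 6.0.0.

```json
{
  "dependencies": {
    "lru-cache": "6.0.0"
  },
  "devDependencies": {
    "@types/lru-cache": "^5.1.0"
  }
}
```

Using an exact version (no `^` or `~`) for lru-cache itself prevents an accidental upgrade back to 7.x on install.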
Test Plan
Wait for CI.