A Node.js library, written in TypeScript, that wraps async methods in a caching proxy, combining local and remote caches and reporting cache usage statistics.
- Supports ES Modules only; not compatible with CommonJS
- Only `AsyncFunction` methods are proxied
- Avoids cache breakdown: when a cached entry expires, concurrent calls are not all sent to the backend service
- Supports background polling updates to keep the cache valid and up-to-date
- Supports a fallback cache: even if the remote service crashes and the cache has expired, previously cached content can still be retrieved
- Supports remote caches such as Redis
- Reports cache usage statistics
The cache lookup order is:
In-process cache (local cache) --> Remote cache --> Fallback-first cache --> Actual call --> Fallback cache
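A conceptual sketch of this lookup order in TypeScript (illustrative only, not the library's internals; every name and type below is hypothetical):

```ts
// Illustrative sketch of the lookup order described above; the real library
// additionally handles remote caches, concurrency limits, statistics, etc.
type Entry = { value: unknown; expiresAt: number }

const localCache = new Map<string, Entry>()      // in-process cache
const fallbackCache = new Map<string, unknown>() // last known good values

async function lookup(
  key: string,
  realCall: () => Promise<unknown>,
  opts: { ttl: number; fallback?: boolean; fallbackFirst?: boolean },
): Promise<unknown> {
  // 1. In-process cache
  const hit = localCache.get(key)
  if (hit && hit.expiresAt > Date.now()) return hit.value

  // 2. A remote cache (e.g. Redis) would be consulted here.

  // 3. Fallback-first: return the last known value and refresh asynchronously
  if (opts.fallbackFirst && fallbackCache.has(key)) {
    void realCall()
      .then(value => {
        localCache.set(key, { value, expiresAt: Date.now() + opts.ttl })
        fallbackCache.set(key, value)
      })
      .catch(() => { /* surfaced via the 'error' event in the real library */ })
    return fallbackCache.get(key)
  }

  try {
    // 4. Actual call
    const value = await realCall()
    localCache.set(key, { value, expiresAt: Date.now() + opts.ttl })
    fallbackCache.set(key, value)
    return value
  } catch (err) {
    // 5. Fallback cache when the real call fails
    if (opts.fallback && fallbackCache.has(key)) return fallbackCache.get(key)
    throw err
  }
}
```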
npm i cache-proxy-plus
# or
pnpm add cache-proxy-plus
import { cacheProxyPlus } from 'cache-proxy-plus'
class Base {
  // Simulates an async backend call
  async doBase(id) {
    const res = Math.random()
    return res
  }
}

class Manager extends Base {
  // Simulates another async backend call
  async doJob(id) {
    const res = Math.random()
    return res
  }
}
const manager = new Manager()
const managerProxy = cacheProxyPlus(manager, { ttl: 2000, statsInterval: 1000 * 10 })
managerProxy.channel.on('stats', s => console.info(s))
setInterval(async () => {
  const res1 = await managerProxy.doJob(1)
  const res2 = await managerProxy.doJob(2)
  const res3 = await managerProxy.doBase(3)
}, 1000)
When importing, you can also use `cacheProxy` instead of `cacheProxyPlus`.
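For example, importing the shorter alias from the same package:

```ts
import { cacheProxy } from 'cache-proxy-plus'
```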
cacheProxyPlus(target, options)
- `target`:
  - A class instance (object) containing `AsyncFunction` methods to proxy
- `options`:
  - `exclude`: (default: `[]`) Methods to exclude from the caching proxy, as a string array
  - `ttl`: (default: `1000 * 60`) Cache expiration time, in milliseconds
  - `checkPeriod`: (default: `1000`) Interval for checking cache expiration, in milliseconds
  - `statsInterval`: (default: `1000 * 60`) Interval for emitting cache usage statistics, in milliseconds
  - `randomTtl`: (default: `false`) Whether to randomize the expiration time, calculated as `ttl * (0.8 + 0.3 * Math.random())`
  - `methodTtls`: (default: `null`) Per-method expiration times, as an `Object`
    - `[methodName]`: (default: `1000 * 60`) Cache expiration time for that method, in milliseconds
  - `subject`: (default: `[target.constructor.name]`) Subject used as the key prefix to distinguish keys for the same method on different targets
    - Note: must not be a random string, or the keys will change after a restart
  - `fallback`: (default: `false`) Whether to use the fallback cache, which stores the last successfully retrieved value; when both the local and remote caches are invalid and the real call fails, the value is read from the fallback cache
  - `fallbackTtl`: (default: `1000 * 60 * 60`) Expiration time of the fallback cache, in milliseconds
  - `fallbackMax`: (default: `1000 * 10`) Maximum number of keys the fallback cache can store, evicted with an LRU strategy
  - `fallbackFirst`: (default: `false`) Return the fallback cache value first and run the real update asynchronously, so callers do not wait for the update to complete
  - `bgUpdate`: (default: `false`) Whether to update key values by background polling
  - `bgUpdateDelay`: (default: `100`) Wait time between updating two keys in the background, in milliseconds
  - `bgUpdatePeriodDelay`: (default: `1000 * 5`) Wait time after one round of background updates before the next round starts, in milliseconds
  - `bgUpdateExpired`: (default: `1000 * 60 * 60`) Stop background updates for a key once it has not been accessed for this long, in milliseconds
  - `concurrency`: (default: `10`) Total concurrency limit for real requests across all methods, to avoid overloading the backend
  - `remoteCache`: (default: `null`) Asynchronous cache object used to plug in external caches such as Redis or memcached; must inherit from the base class `RemoteCache`
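For illustration, an options object combining several of these settings might look like the sketch below (the values are arbitrary examples, and `manager` is the instance from the usage example above):

```ts
import { cacheProxyPlus } from 'cache-proxy-plus'

const managerProxy = cacheProxyPlus(manager, {
  exclude: ['doBase'],         // do not cache this method
  ttl: 2000,                   // default cache TTL: 2 seconds
  randomTtl: true,             // spread expirations: ttl * (0.8 + 0.3 * Math.random())
  methodTtls: { doJob: 5000 }, // per-method override: doJob cached for 5 seconds
  statsInterval: 1000 * 10,    // emit usage statistics every 10 seconds
  fallback: true,              // keep the last good value as a fallback
  bgUpdate: true,              // refresh cached keys by background polling
  concurrency: 5,              // at most 5 concurrent real calls
})
```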
Using a remote cache requires inheriting from the base class `RemoteCache` (relevant code: `RemoteCache`).
Below is an example using Redis as the remote cache:
import Redis from 'ioredis' // assuming the ioredis client, which matches the 'PX' set syntax below
import { RemoteCache } from 'cache-proxy-plus'

class RedisCache extends RemoteCache {
  private redis = new Redis()

  async set(key: string, value: any, ttl: number) {
    // Wrap the value so it survives the JSON round trip
    const wrapped = { value }
    // Let the remote entry expire slightly later than the proxy's ttl
    return this.redis.set(key, JSON.stringify(wrapped), 'PX', ttl + 100)
  }

  async get(key: string) {
    const wrapped = await this.redis.get(key)
    return wrapped ? JSON.parse(wrapped).value : null
  }

  quit() {
    // Close the Redis connection gracefully
    return this.redis.quit()
  }
}
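The remote cache instance is then passed to the proxy via the `remoteCache` option; a minimal sketch, reusing the `manager` instance from the usage example above:

```ts
const remoteCache = new RedisCache()
const managerProxy = cacheProxyPlus(manager, { ttl: 2000, remoteCache })

// On shutdown, close the Redis connection
// remoteCache.quit()
```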
There are two ways to clear the cache: clear all local caches, or clear the local cache for a specific proxied method.
// clear all local caches
managerProxy.channel.clear()
// clear local cache for a specific method
managerProxy.channel.clear(methodName)
managerProxy.channel.on('eventName', () => {})
`eventName` can be one of `stats`, `bg.stats`, `bg.break.off`, `error`, or `expired`.
managerProxy.channel.on('error', err => {
  // background error
})
managerProxy.channel.on('expired', (key, type) => {
  // type: expired or fallbackExpired
})
`stats` contains `current`, `total`, and `methods`.
// Default output every minute
managerProxy.channel.on('stats', stats => {
  const {
    current, // Statistics for the last minute
    total,   // Total statistics since the process started
    methods  // Individual statistics for each method
  } = stats
  // ...
})
Details for each statistic:
{
  local: 0,            // Number of hits in local cache
  remote: 0,           // Number of hits in remote cache
  update: 0,           // Number of actual requests made
  miss: 0,             // Number of cache misses
  expired: 0,          // Number of local cache expirations
  failed: 0,           // Number of request errors
  wait: 0,             // Number of successful concurrent waits
  failedWait: 0,       // Number of errors in concurrent waits
  background: 0,       // Number of background updates
  failedBackground: 0, // Number of errors in background updates
  fallback: 0,         // Number of hits in fallback cache
  fallbackFirst: 0,    // Number of hits prioritizing fallback cache
  fallbackExpired: 0,  // Number of fallback cache expirations
}
Another way to obtain statistics:
const stats = managerProxy.channel.stats()
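As a sketch of how these counters might be consumed, the handler below derives a rough hit ratio from the per-interval numbers (the formula is an illustrative interpretation of the fields, not something the library computes for you):

```ts
managerProxy.channel.on('stats', stats => {
  // Assumes `current` carries the per-interval counters listed above
  const { local, remote, fallback, miss } = stats.current
  const hits = local + remote + fallback
  const lookups = hits + miss
  if (lookups > 0) {
    const ratio = ((hits / lookups) * 100).toFixed(1)
    console.info(`cache hit ratio (last interval): ${ratio}%`)
  }
})
```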
managerProxy.channel.on('bg.stats', stats => {
  // stats details
  // stats.cycleTime: Time for a complete update cycle
  // stats.delayTime: Pause time for a complete update cycle
  // stats.updateTime: Asynchronous waiting time for a complete update cycle
  // stats.updatedSize: Total number of keys updated in a complete update cycle
})
managerProxy.channel.on('bg.break.off', key => {
  // Background updates for this key have stopped,
  // e.g. because it was not accessed within bgUpdateExpired
})
Released under the MIT License.