[multiverse RPC]: add better universe root node cache #1169
Conversation
Pull Request Test Coverage Report for Build 11920129966 (Details)
💛 - Coveralls
Approach ACK!
```go
// multiverse cache.
func DefaultMultiverseCacheConfig() MultiverseCacheConfig {
	return MultiverseCacheConfig{
		ProofsPerUniverse: 5,
```
Temp for this commit?
We should inherit the defaults defined in the existing package here.
I think the original value for this was way too high. Because this is the number of proofs we cache per universe (and we don't have a limit on the number of universes we cache proofs for, since it's a lnutils.SyncMap here and not an lru.Cache), we should be quite conservative here.
Perhaps we should turn this into an lru.Cache too, same as with the universeLeafPageCache?
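For illustration, here is a minimal sketch of the distinction being discussed: a SyncMap-style store grows without bound, while an LRU evicts the oldest entry once a fixed capacity is reached. The boundedProofCache type and its fields are hypothetical, not the actual taproot-assets or lnutils API, and locking is omitted for brevity:

```go
package main

import (
	"container/list"
	"fmt"
)

// boundedProofCache is a hypothetical fixed-capacity LRU: once it holds
// `capacity` entries, inserting a new key evicts the least recently
// used one. A plain sync.Map offers no such bound, which is why an
// unbounded per-universe proof store can grow memory indefinitely.
type boundedProofCache struct {
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> list element
}

type entry struct {
	key   string
	proof []byte
}

func newBoundedProofCache(capacity int) *boundedProofCache {
	return &boundedProofCache{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

func (c *boundedProofCache) Put(key string, proof []byte) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		el.Value.(*entry).proof = proof
		return
	}

	// Evict the least recently used entry when at capacity.
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}

	c.items[key] = c.order.PushFront(&entry{key: key, proof: proof})
}

func (c *boundedProofCache) Get(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).proof, true
}

func main() {
	c := newBoundedProofCache(2)
	c.Put("a", []byte("proof-a"))
	c.Put("b", []byte("proof-b"))
	c.Put("c", []byte("proof-c")) // evicts "a"

	_, ok := c.Get("a")
	fmt.Println("a still cached:", ok) // false
}
```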
Ah ok, the original intent was for that to be an instance-wide value (e.g. only cache 50k proofs total).
I think before merge, we should deploy a version of this on testnet/mainnet to observe the cache hit/miss rate.
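A hit/miss rate like the one mentioned here can be observed by wrapping the lookup path with two atomic counters. A minimal sketch of that instrumentation technique, with illustrative names rather than the actual taproot-assets code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// instrumentedCache wraps a map with hit/miss counters so the hit rate
// can be logged or exported while running on testnet/mainnet.
type instrumentedCache struct {
	mu     sync.RWMutex
	items  map[string][]byte
	hits   atomic.Uint64
	misses atomic.Uint64
}

func newInstrumentedCache() *instrumentedCache {
	return &instrumentedCache{items: make(map[string][]byte)}
}

func (c *instrumentedCache) Get(key string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()

	val, ok := c.items[key]
	if ok {
		c.hits.Add(1)
	} else {
		c.misses.Add(1)
	}
	return val, ok
}

func (c *instrumentedCache) Put(key string, val []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = val
}

// HitRate returns the fraction of lookups served from the cache.
func (c *instrumentedCache) HitRate() float64 {
	hits, misses := c.hits.Load(), c.misses.Load()
	if hits+misses == 0 {
		return 0
	}
	return float64(hits) / float64(hits+misses)
}

func main() {
	c := newInstrumentedCache()
	c.Put("root-1", []byte("node"))
	c.Get("root-1") // hit
	c.Get("root-2") // miss
	fmt.Printf("hit rate: %.0f%%\n", c.HitRate()*100)
}
```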
Force-pushed from 8c63baa to b63f383
Addressed all comments and added proper unit tests. Good for full review now.
looking good!
do we have a way of verifying/testing:
- new resource usage overhead
- performance difference of having a new syncer cache (an itest?)
otherwise ACK, plus some minor Qs
Force-pushed from b63f383 to b04ddf1
I added a test that calculates the approximate size of the entries in the cache (see the test output in the commit message below).
With around 170k roots currently in existence on mainnet, that means the new cache would only use ~34 MBytes of memory.
Looks good!
LGTM 🦠
```go
params := sqlc.UniverseRootsParams{
	SortDirection: sqlInt16(universe.SortAscending),
	NumOffset:     0,
	NumLimit:      universe.MaxPageSize,
```
In cases like this, I wonder if it's better to just read the entire set of roots in one go. So one single query, instead of a bunch of smaller ones. With the current amount of assets on mainnet, we'll end up doing 300 or so queries here to read them all out. Some benchmarking would likely help us find the sweet spot for the page size here.
Non-blocking.
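One way to look for that sweet spot is a table-driven Go benchmark over candidate page sizes. A rough sketch of the idea, where fetchRootsPage is a hypothetical stand-in for the real paged query (a real benchmark would run against an actual test database):

```go
package universe_test

import (
	"fmt"
	"testing"
)

// fetchRootsPage is a stand-in for the real paged DB query. It only
// simulates the allocation cost of returning one page of roots.
func fetchRootsPage(offset, limit int) []string {
	return make([]string, limit)
}

// BenchmarkRootPageSize times reading a fixed total number of roots
// with different page sizes, to compare one huge query against many
// smaller ones.
func BenchmarkRootPageSize(b *testing.B) {
	const totalRoots = 170_000

	for _, pageSize := range []int{512, 2048, 16_384, totalRoots} {
		b.Run(fmt.Sprintf("page_size_%d", pageSize), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				for offset := 0; offset < totalRoots; offset += pageSize {
					_ = fetchRootsPage(offset, pageSize)
				}
			}
		})
	}
}
```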
```
@@ -271,6 +475,7 @@ func (r *rootNodeCache) cacheRoots(q universe.RootNodesQuery,

	rootIndex := r.rootIndex.Load()
	for _, rootNode := range rootNodes {
		rootNode := rootNode
```
FWIW, with the latest version of Go, we should no longer need to do this reassignment trick.
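For context, since Go 1.22 each loop iteration gets a fresh copy of the loop variable, so capturing it in a closure or taking its address no longer requires the rootNode := rootNode shadowing. A small illustration of the semantics change:

```go
package main

import "fmt"

func main() {
	values := []int{1, 2, 3}

	// Before Go 1.22, all closures captured the single shared loop
	// variable and would print the last value three times unless the
	// body started with `v := v`. Since Go 1.22, each iteration has
	// its own `v`, so the shadowing line is redundant.
	var printers []func()
	for _, v := range values {
		printers = append(printers, func() { fmt.Println(v) })
	}

	for _, p := range printers {
		p() // prints 1, 2, 3 with Go >= 1.22
	}
}
```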
This commit adds a completely new cache that is tailor-made for the queries the syncer makes (ascending, without any amounts by ID). This additional cache has all roots (without details), so the existing cache will only be needed for custom queries that aren't coming from the syncer. We should be able to down-size the existing cache quite a bit, as it should only serve manual requests or requests coming from a UI. We'll make the size configurable in an upcoming commit.
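The dispatch between the two caches could look roughly like this. A hedged sketch only: the type and field names are illustrative, not the actual taproot-assets types:

```go
package main

import "fmt"

// SortDirection mirrors the idea of an ascending/descending query flag.
type SortDirection int

const (
	SortAscending SortDirection = iota
	SortDescending
)

// RootNodesQuery captures the two properties that distinguish syncer
// queries from user/UI queries in this design.
type RootNodesQuery struct {
	SortDirection   SortDirection
	WithAmountsById bool
}

// isSyncerQuery reports whether a query matches the exact shape the
// syncer always uses: ascending order, no per-ID amounts. Only those
// queries are served from the new all-roots cache; everything else
// falls back to the smaller legacy cache.
func isSyncerQuery(q RootNodesQuery) bool {
	return q.SortDirection == SortAscending && !q.WithAmountsById
}

func main() {
	syncerQ := RootNodesQuery{SortDirection: SortAscending}
	uiQ := RootNodesQuery{SortDirection: SortDescending, WithAmountsById: true}

	fmt.Println(isSyncerQuery(syncerQ)) // true  -> syncer cache
	fmt.Println(isSyncerQuery(uiQ))     // false -> legacy cache
}
```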
This commit adds a test that calculates the approximate memory consumption of a full syncer cache. The results look pretty good:

```
=== RUN   TestSyncerCacheMemoryUsage
    multiverse_cache_test.go:214: Generated 500 roots in 26.583318ms
=== RUN   TestSyncerCacheMemoryUsage/500_roots
    multiverse_cache_test.go:224: Memory usage for 500 roots: 139496 bytes
    multiverse_cache_test.go:226: Memory usage per root: 278 bytes
    multiverse_cache_test.go:228: Benchmark took 8.836µs
--- PASS: TestSyncerCacheMemoryUsage/500_roots (0.01s)
    multiverse_cache_test.go:214: Generated 5000 roots in 257.823179ms
=== RUN   TestSyncerCacheMemoryUsage/5000_roots
    multiverse_cache_test.go:224: Memory usage for 5000 roots: 1073384 bytes
    multiverse_cache_test.go:226: Memory usage per root: 214 bytes
    multiverse_cache_test.go:228: Benchmark took 104.568µs
--- PASS: TestSyncerCacheMemoryUsage/5000_roots (0.01s)
    multiverse_cache_test.go:214: Generated 170000 roots in 8.847171795s
=== RUN   TestSyncerCacheMemoryUsage/170000_roots
    multiverse_cache_test.go:224: Memory usage for 170000 roots: 34259176 bytes
    multiverse_cache_test.go:226: Memory usage per root: 201 bytes
    multiverse_cache_test.go:228: Benchmark took 1.820929ms
--- PASS: TestSyncerCacheMemoryUsage/170000_roots (0.08s)
--- PASS: TestSyncerCacheMemoryUsage (9.23s)
```
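Approximate in-memory size of a cache can be measured the usual Go way, by diffing runtime.MemStats.HeapAlloc around the allocation. This is a simplified sketch of that general technique, not the actual test code; the cachedRoot type is made up for illustration:

```go
package main

import (
	"fmt"
	"runtime"
)

type cachedRoot struct {
	id   [32]byte
	name string
}

// measureHeapGrowth returns how much the live heap grows while fill
// runs. Forcing a GC before each reading keeps dead objects from
// polluting the measurement; results are approximate by nature.
func measureHeapGrowth(fill func()) uint64 {
	var before, after runtime.MemStats

	runtime.GC()
	runtime.ReadMemStats(&before)

	fill()

	runtime.GC()
	runtime.ReadMemStats(&after)

	return after.HeapAlloc - before.HeapAlloc
}

func main() {
	const numRoots = 5000

	var cache map[uint32]*cachedRoot
	grown := measureHeapGrowth(func() {
		cache = make(map[uint32]*cachedRoot, numRoots)
		for i := uint32(0); i < numRoots; i++ {
			cache[i] = &cachedRoot{name: "root"}
		}
	})

	fmt.Printf("~%d bytes total, ~%d bytes per root\n",
		grown, grown/numRoots)

	// Keep the cache alive so it is counted in the second reading.
	runtime.KeepAlive(cache)
}
```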
This commit allows for a much larger (32x) page size than before. This should allow us to fetch the complete set of roots even faster, given that we should have 100% cache hits for the syncer calls now. We also reduce the overall count of requests by 1 in almost all cases by detecting that the returned page isn't full anymore (instead of reading until we get an empty page).
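The early-termination trick reads as follows in sketch form: stop as soon as a page comes back smaller than the requested limit, which saves the final round trip that would only return an empty page. The fetchPage helper and the constants here are illustrative, not the real query code:

```go
package main

import "fmt"

const pageSize = 512 // illustrative; the real limit is universe.MaxPageSize

// fetchPage stands in for one paged roots query against the backend,
// which here pretends to hold 1200 roots in total.
func fetchPage(offset, limit int) []int {
	const total = 1200
	if offset >= total {
		return nil
	}
	end := offset + limit
	if end > total {
		end = total
	}
	page := make([]int, 0, end-offset)
	for i := offset; i < end; i++ {
		page = append(page, i)
	}
	return page
}

func fetchAllRoots() []int {
	var roots []int
	for offset := 0; ; offset += pageSize {
		page := fetchPage(offset, pageSize)
		roots = append(roots, page...)

		// A short page means we just read the tail: no need for one
		// more query that would only come back empty.
		if len(page) < pageSize {
			break
		}
	}
	return roots
}

func main() {
	// 1200 roots fetched in 3 queries instead of 4.
	fmt.Println("fetched", len(fetchAllRoots()), "roots")
}
```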
Force-pushed from b04ddf1 to e8065e4
Fixes #880.
Opening as draft, mainly looking for concept ACK.
@Roasbeef the idea here is the following:
- The syncer always queries with sort=ascending and WithAmountsById=false, which allows us to optimize the new cache and keep all root nodes in memory for those queries.
- Other callers (e.g. a UI) use WithAmountsById=true, which allows us to keep a much smaller cache for those queries.

TODOs: