feat(store): Parallel write in the CacheMultiStore
#20817
base: main
Conversation
📝 Walkthrough
The recent changes introduce parallel writes in the CacheMultiStore's Write method.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant User
    participant TestCacheMultiStoreWrite
    participant CacheMultiStore
    participant Store
    participant goroutine
    User->>+TestCacheMultiStoreWrite: Run Test
    TestCacheMultiStoreWrite->>+CacheMultiStore: Setup Stores
    CacheMultiStore->>+Store: Write
    Store->>+goroutine: Write
    goroutine->>Store: Complete Write
    goroutine->>+CacheMultiStore: Return result
    CacheMultiStore->>+TestCacheMultiStoreWrite: Verify results
    TestCacheMultiStoreWrite->>User: Test Result
```
Assessment against linked issues
This assessment confirms that the code changes meet the objectives outlined in the linked issues, particularly the need for parallel execution to improve write performance.
store/cachemulti/store.go
Outdated
```go
for _, store := range cms.stores {
	store.Write()
	go func(s types.CacheWrap) {
```
most of the time, the dirty writes are small or even empty, does it justify the overhead of goroutine?
IMO, it would be helpful if there were even only two meaningful writes (two stores), since WorkingHash requires rebuilding the iavl tree and computing the root hash (one of the bottlenecks)
we should get some benchmarks.
Actionable comments posted: 0
Outside diff range and nitpick comments (1)
store/CHANGELOG.md (1)
Line range hint
65-65
: Consider adding a missing comma for clarity. There seems to be a missing comma after "go.mod file", which would improve readability.

```diff
- which allows it be a standalone module
+ which allows it to be a standalone module,
```
Review details
Configuration used: .coderabbit.yml
Review profile: CHILL
Files selected for processing (3)
- store/CHANGELOG.md (1 hunks)
- store/cachemulti/store.go (2 hunks)
- store/rootmulti/store_test.go (1 hunks)
Additional context used
Path-based instructions (3)
store/CHANGELOG.md (1)
Pattern `**/*.md`: "Assess the documentation for misspellings, grammatical errors, missing documentation and correctness"
store/cachemulti/store.go (1)
Pattern `**/*.go`: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
store/rootmulti/store_test.go (2)
Pattern `**/*.go`: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
Pattern `**/*_test.go`: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"
LanguageTool
store/CHANGELOG.md
[uncategorized] ~62-~62: You might be missing the article “a” here.
Context: ...b.com//pull/14645) Add limit to the length of key and value. * [#156...(AI_EN_LECTOR_MISSING_DETERMINER_A)
[uncategorized] ~65-~65: Possible missing comma found.
Context: ... is extracted to have a separate go.mod file which allows it be a standalone module....(AI_HYDRA_LEO_MISSING_COMMA)
[grammar] ~65-~65: The preposition ‘to’ may be missing (allow someone to do something).
Context: ... a separate go.mod file which allows it be a standalone module. * [#14410](https:/...(ALLOW_TO_DO)
[uncategorized] ~66-~66: You might be missing the article “an” here.
Context: ..., it will error if any module store has incorrect height. ### Improvements * [#17158](h...(AI_EN_LECTOR_MISSING_DETERMINER_AN)
Markdownlint
store/CHANGELOG.md
58-58: Expected: 2; Actual: 4
Unordered list indentation(MD007, ul-indent)
59-59: Expected: 2; Actual: 4
Unordered list indentation(MD007, ul-indent)
60-60: Expected: 2; Actual: 4
Unordered list indentation(MD007, ul-indent)
44-44: Expected: 0 or 2; Actual: 1
Trailing spaces(MD009, no-trailing-spaces)
Additional comments not posted (3)
store/CHANGELOG.md (1)
37-37: Changelog entry is clear and well-formatted. The entry for PR #20817 is correctly placed under the "Improvements" section and provides a direct link to the PR, which helps with traceability.
store/cachemulti/store.go (1)
126-134: Ensure proper handling of goroutines for parallel writes. The implementation uses goroutines correctly with a `sync.WaitGroup` to ensure all writes complete before proceeding. However, ensure that the `store.Write()` method is safe to call concurrently on different `store` instances.
store/rootmulti/store_test.go (1)
1036-1071: Comprehensive test for parallel writes in `CacheMultiStore`. The test effectively sets up a scenario to validate the parallel writing capability by setting keys in multiple stores and committing them. It checks data consistency after writes, which is crucial for ensuring that the parallel write feature does not introduce errors.
store/rootmulti/store_test.go
Outdated
```diff
@@ -116,7 +116,7 @@ func TestCacheMultiStoreWithVersion(t *testing.T) {
 	// require we cannot commit (write) to a cache-versioned multi-store
 	require.Panics(t, func() {
 		kvStore.Set(k, []byte("newValue"))
-		cms.Write()
+		kvStore.(types.CacheWrap).Write()
```
hmm, I realized recovering panics from multi goroutines is tricky, @alpe have you any idea?
the above update is the same check as the original one, but I'm curious
I assume you want to test for `cms.Write()` to panic. Your workaround with `kvStore.(types.CacheWrap).Write()` is pragmatic, but with `cms.Write()` you give away control from the main thread, so the caller cannot handle panics, which is the bigger problem.
Another issue would be deterministic behaviour when more than one store panics.
You could recover in the goroutines, then collect and sort the errors. Maybe it is worth refactoring to error return values instead of panics.
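A minimal sketch of that suggestion (recover inside each goroutine, collect the errors, then sort them for determinism). The `writer` interface and the two store types here are made-up stand-ins for `types.CacheWrap`, not the SDK's actual types:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// writer stands in for types.CacheWrap; only Write matters here.
type writer interface{ Write() }

type okStore struct{}

func (okStore) Write() {}

type panickyStore struct{ name string }

func (s panickyStore) Write() { panic("boom in " + s.name) }

// writeAll runs every store's Write in its own goroutine, recovers any
// panic into an error, and returns the errors sorted for determinism.
func writeAll(stores map[string]writer) []error {
	var (
		mu   sync.Mutex
		errs []error
		wg   sync.WaitGroup
	)
	for name, s := range stores {
		wg.Add(1)
		go func(name string, s writer) {
			defer wg.Done()
			defer func() {
				if r := recover(); r != nil {
					mu.Lock()
					errs = append(errs, fmt.Errorf("store %s: %v", name, r))
					mu.Unlock()
				}
			}()
			s.Write()
		}(name, s)
	}
	wg.Wait()
	sort.Slice(errs, func(i, j int) bool { return errs[i].Error() < errs[j].Error() })
	return errs
}

func main() {
	stores := map[string]writer{
		"acc":     okStore{},
		"bank":    panickyStore{"bank"},
		"staking": panickyStore{"staking"},
	}
	for _, err := range writeAll(stores) {
		fmt.Println(err) // sorted, so the order is stable across runs
	}
}
```

Sorting before reporting is what makes multi-store failures deterministic regardless of goroutine scheduling, which addresses the concern raised above.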
good point!
Actionable comments posted: 0
Review details
Configuration used: .coderabbit.yml
Review profile: CHILL
Files selected for processing (1)
- store/rootmulti/store_test.go (2 hunks)
Files skipped from review as they are similar to previous changes (1)
- store/rootmulti/store_test.go
benchmark results
Actionable comments posted: 2
Review details
Configuration used: .coderabbit.yml
Review profile: CHILL
Files selected for processing (1)
- store/cachemulti/benchmark_test.go (1 hunks)
Additional comments not posted (1)
store/cachemulti/benchmark_test.go (1)
51-61: LGTM! The benchmark function follows best practices and is well-structured.
Actionable comments posted: 2
Outside diff range, codebase verification and nitpick comments (2)
store/cachemulti/store.go (1)
126-140: Review the parallel execution logic in the `Write` method. The implementation uses `errgroup.Group` for managing parallel writes, which is a robust choice. However, ensure that the error handling and panic recovery mechanisms are correctly implemented to prevent data corruption and ensure stability. Additionally, consider logging the error before panicking to aid debugging in production environments.

```diff
- panic(err)
+ log.Errorf("error during parallel write: %v", err)
+ panic(err)
```

store/rootmulti/store_test.go (1)
1036-1071: Assess the new test function for parallel writes. The test function `TestCacheMultiStoreWrite` is well-structured and covers the functionality of parallel writes extensively. It checks for data consistency after writes, which is crucial for validating the correctness of the implementation. Consider adding more edge cases, such as concurrent writes to the same keys, to further ensure robustness.
Review details
Configuration used: .coderabbit.yml
Review profile: CHILL
Files selected for processing (4)
- store/cachemulti/benchmark_test.go (1 hunks)
- store/cachemulti/store.go (2 hunks)
- store/go.mod (1 hunks)
- store/rootmulti/store_test.go (1 hunks)
Additional comments not posted (1)
store/go.mod (1)
25-25: Validate the addition of the `golang.org/x/sync` module. The addition of `golang.org/x/sync` at version `v0.7.0` is crucial for the parallel write feature introduced in this PR. Ensure that this version is compatible with other dependencies and does not introduce any conflicts.
store/cachemulti/benchmark_test.go
Outdated
```go
func setupStore(b *testing.B, storeCount uint) (Store, map[string]types.StoreKey) {
	b.Helper()

	db := dbm.NewMemDB()
	storeKeys := make(map[string]types.StoreKey)
	stores := make(map[types.StoreKey]types.CacheWrapper)
	for i := uint(0); i < storeCount; i++ {
		key := types.NewKVStoreKey(fmt.Sprintf("store%d", i))
		storeKeys[key.Name()] = key
		sdb := dbm.NewPrefixDB(db, []byte(key.Name()))
		istore, err := iavl.LoadStore(sdb, log.NewNopLogger(), key, types.CommitID{}, 1000, false, nil)
		require.NoError(b, err)
		stores[key] = types.KVStore(istore)
	}

	return NewStore(db, stores, storeKeys, nil, types.TraceContext{}), storeKeys
}
```
Consider preallocating maps for performance optimization. Given the known size of `storeCount`, preallocating the `storeKeys` and `stores` maps can improve performance by reducing the need for dynamic memory allocation.
```diff
- storeKeys := make(map[string]types.StoreKey)
- stores := make(map[types.StoreKey]types.CacheWrapper)
+ storeKeys := make(map[string]types.StoreKey, storeCount)
+ stores := make(map[types.StoreKey]types.CacheWrapper, storeCount)
```
store/cachemulti/benchmark_test.go
Outdated
```go
func setupStore(b *testing.B, storeCount uint) (Store, map[string]types.StoreKey) {
	b.Helper()

	db := dbm.NewMemDB()
```
Testing against an LSM db would be more illustrative of the performance improvements; let's try leveldb and pebbledb?
it is not for I/O; it's mostly for tree updates in memory, in the WorkingHash lifecycle
Nice updates.
store/cachemulti/store.go
Outdated
```go
			return nil
		})
	}
	if err := eg.Wait(); err != nil {
```
Not sure if the panic is handled somewhere else; in that case, this is not deterministic for multiple errors.
From the godoc for Wait: "returns the first non-nil error (if any) from them."
I think we don't need to care about multiple errors; the Write is a memory action and it won't affect the finalized state
store/cachemulti/benchmark_test.go
Outdated
```go
for _, storeCount := range storeCounts {
	for _, keyCount := range keyCounts {
		b.Run(fmt.Sprintf("storeCount=%d/keyCount=%d", storeCount, keyCount), func(sub *testing.B) {
```
One argument was that there will be a number of untouched stores where the goroutine may add significant overhead. It would be good to add that to your test for a fair comparison.
I think it can be covered by `storeCount`; for example, if there are 2 significant writes among 10 stores, then it will save about 37% per the benchmark result, right?
Actionable comments posted: 2
Outside diff range, codebase verification and nitpick comments (3)
store/rootmulti/store_test.go (3)

1036-1039: Ensure proper cleanup of resources. It's good practice to close the database connection after the test to avoid resource leaks.

```diff
 db, err := dbm.NewDB("test", dbm.GoLevelDBBackend, t.TempDir())
 require.NoError(t, err)
+defer db.Close()
```

1041-1042: Add a comment to explain the purpose of `pruningtypes.PruningNothing`. Adding a comment will help future maintainers understand why this specific pruning option is used.

```diff
 ms := newMultiStoreWithMounts(db, pruningtypes.NewPruningOptions(pruningtypes.PruningNothing))
+// Using PruningNothing to ensure all versions are kept for the test
 require.NoError(t, ms.LoadLatestVersion())
```

1046-1048: Consider making `toVersion` and `keyCount` constants. Defining these values as constants improves readability and maintainability.

```diff
+const (
+	toVersion = 100
+	keyCount  = 100
+)
-toVersion := int64(100)
-keyCount := 100
 storeKeys := []types.StoreKey{testStoreKey1, testStoreKey2, testStoreKey3}
```
Review details
Configuration used: .coderabbit.yml
Review profile: CHILL
Files selected for processing (2)
- store/go.mod (1 hunks)
- store/rootmulti/store_test.go (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- store/go.mod
```go
// check the data
for _, storeKey := range storeKeys {
	store := cacheMulti.GetKVStore(storeKey)
	for i := int64(1); i <= toVersion; i++ {
		for j := 0; j < keyCount; j++ {
			key := []byte(fmt.Sprintf("key-%d-%d", i, j))
			value := store.Get(key)
			require.Equal(t, []byte(fmt.Sprintf("value-%d-%d", i, j)), value)
		}
	}
}
```
Enhance readability by extracting the verification loop into a helper function.
This will make the test function more concise and easier to understand.
```go
func verifyStoreData(t *testing.T, cacheMulti types.CacheMultiStore, storeKeys []types.StoreKey, toVersion int64, keyCount int) {
	for _, storeKey := range storeKeys {
		store := cacheMulti.GetKVStore(storeKey)
		for i := int64(1); i <= toVersion; i++ {
			for j := 0; j < keyCount; j++ {
				key := []byte(fmt.Sprintf("key-%d-%d", i, j))
				value := store.Get(key)
				require.Equal(t, []byte(fmt.Sprintf("value-%d-%d", i, j)), value)
			}
		}
	}
}
```
```go
for i := int64(1); i <= toVersion; i++ {
	for _, storeKey := range storeKeys {
		store := cacheMulti.GetKVStore(storeKey)
		for j := 0; j < keyCount; j++ {
			store.Set([]byte(fmt.Sprintf("key-%d-%d", i, j)), []byte(fmt.Sprintf("value-%d-%d", i, j)))
		}
	}
	cacheMulti.Write()
	ms.Commit()
}
```
Enhance readability by extracting the nested loops into a helper function.
This will make the test function more concise and easier to understand.
```go
// Note: ms is passed in explicitly so the helper is self-contained rather
// than capturing it from the enclosing test's scope.
func populateStore(t *testing.T, ms types.CommitMultiStore, cacheMulti types.CacheMultiStore, storeKeys []types.StoreKey, toVersion int64, keyCount int) {
	for i := int64(1); i <= toVersion; i++ {
		for _, storeKey := range storeKeys {
			store := cacheMulti.GetKVStore(storeKey)
			for j := 0; j < keyCount; j++ {
				store.Set([]byte(fmt.Sprintf("key-%d-%d", i, j)), []byte(fmt.Sprintf("value-%d-%d", i, j)))
			}
		}
		cacheMulti.Write()
		ms.Commit()
	}
}
```
Actionable comments posted: 2
Review details
Configuration used: .coderabbit.yml
Review profile: CHILL
Files selected for processing (1)
- store/cachemulti/benchmark_test.go (1 hunks)
Additional comments not posted (1)
store/cachemulti/benchmark_test.go (1)
54-65: LGTM! The `BenchmarkCacheMultiStore` function is well-structured and effectively tests different configurations.
```go
storeKeys := make(map[string]types.StoreKey)
stores := make(map[types.StoreKey]types.CacheWrapper)
```
Preallocate maps to improve performance. Preallocating the `storeKeys` and `stores` maps with the known size `storeCount` can improve performance.
```diff
- storeKeys := make(map[string]types.StoreKey)
- stores := make(map[types.StoreKey]types.CacheWrapper)
+ storeKeys := make(map[string]types.StoreKey, storeCount)
+ stores := make(map[types.StoreKey]types.CacheWrapper, storeCount)
```
store/cachemulti/benchmark_test.go
Outdated
```go
for i := 0; i < b.N; i++ {
	b.StopTimer()
	for _, key := range storeKeys {
		cstore := store.GetKVStore(key)
		for j := uint(0); j < keyCount; j++ {
			dataKey := fmt.Sprintf("key%s-%d", key.Name(), j)
			dataValue := fmt.Sprintf("value%s-%d", key.Name(), j)
			cstore.Set([]byte(dataKey), []byte(dataValue))
		}
	}
	b.StartTimer()
	store.Write()
}
```
Improve readability and performance of timer management in benchmarks. Moving the `StopTimer` and `StartTimer` calls outside the inner loop can help accurately measure the time taken for `store.Write()` without the overhead of setup operations.
```diff
-for i := 0; i < b.N; i++ {
-	b.StopTimer()
-	for _, key := range storeKeys {
-		cstore := store.GetKVStore(key)
-		for j := uint(0); j < keyCount; j++ {
-			dataKey := fmt.Sprintf("key%s-%d", key.Name(), j)
-			dataValue := fmt.Sprintf("value%s-%d", key.Name(), j)
-			cstore.Set([]byte(dataKey), []byte(dataValue))
-		}
-	}
-	b.StartTimer()
-	store.Write()
-}
+for i := 0; i < b.N; i++ {
+	for _, key := range storeKeys {
+		cstore := store.GetKVStore(key)
+		b.StopTimer()
+		for j := uint(0); j < keyCount; j++ {
+			dataKey := fmt.Sprintf("key%s-%d", key.Name(), j)
+			dataValue := fmt.Sprintf("value%s-%d", key.Name(), j)
+			cstore.Set([]byte(dataKey), []byte(dataValue))
+		}
+		b.StartTimer()
+	}
+	store.Write()
+}
```
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@kocubinski are you able to review this?
store/cachemulti/store.go
Outdated
```diff
@@ -122,8 +123,21 @@ func (cms Store) GetStoreType() types.StoreType {
 // Write calls Write on each underlying store.
 func (cms Store) Write() {
 	cms.db.Write()
+	eg := new(errgroup.Group)
 	for _, store := range cms.stores {
```
should there be a concurrency limit? or is it good enough to delegate this work completely to the go scheduler?
we should do some benchmarks with databases and come up with an optimal number of runners
as we can see in #20817 (comment), from 8 stores, there is not much improvement, let me benchmark with limited number of runners
There is not much difference between 4 and 8, 4 is preferred considering the other processes.
```go
for storeKey, store := range cms.stores {
	wg.Add(1)
	sem <- struct{}{}

	go func() {
		defer func() {
			wg.Done()
			<-sem // Release the slot

			if r := recover(); r != nil {
				errChan <- fmt.Errorf("panic in Write for store %s: %v", storeKey.Name(), r)
			}
		}()
		store.Write()
	}()
}
```
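The fragment above omits its setup (`wg`, `sem`, `errChan`). Here is a self-contained sketch of the same bounded-concurrency pattern with a limit of 4, as discussed; `stubStore` and `writeBounded` are hypothetical stand-ins, not the SDK's types. One deliberate design choice in this sketch: the error-sending defer is registered after `wg.Done`'s defer, so it runs first and the send to `errChan` always happens before `wg.Wait` can return and the channel is closed.

```go
package main

import (
	"fmt"
	"sync"
)

// stubStore stands in for the SDK's CacheWrap; name is only for error messages.
type stubStore struct{ name string }

func (s stubStore) Write() {
	if s.name == "bad" {
		panic("write failed")
	}
}

// writeBounded runs each store's Write on at most `limit` concurrent
// goroutines, converting panics into errors.
func writeBounded(stores []stubStore, limit int) []error {
	var wg sync.WaitGroup
	sem := make(chan struct{}, limit)         // bounded concurrency
	errChan := make(chan error, len(stores))  // buffered: sends never block

	for _, store := range stores {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(store stubStore) {
			defer wg.Done()
			defer func() {
				<-sem // release the slot
				if r := recover(); r != nil {
					errChan <- fmt.Errorf("panic in Write for store %s: %v", store.name, r)
				}
			}()
			store.Write()
		}(store)
	}

	wg.Wait()
	close(errChan)

	var errs []error
	for err := range errChan {
		errs = append(errs, err)
	}
	return errs
}

func main() {
	stores := []stubStore{{"acc"}, {"bank"}, {"bad"}, {"staking"}}
	fmt.Println(writeBounded(stores, 4))
}
```

Because `errChan` is buffered to `len(stores)` and drained only after `close`, no goroutine can block on the send, and the caller gets every recovered panic rather than just the first.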
Check warning (Code scanning / CodeQL): Iteration over map
Check notice (Code scanning / CodeQL): Spawning a Go routine
```go
go func() {
	wg.Wait()
	close(errChan)
}()
```
Check notice (Code scanning / CodeQL): Spawning a Go routine
Description
Closes: #20787
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow up issues.
I have...
- included `!` in the type prefix if API or client breaking change
- added a changelog entry to `CHANGELOG.md`
Reviewers Checklist
All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.
Please see Pull Request Reviewer section in the contributing guide for more information on how to review a pull request.
I have...
Summary by CodeRabbit
New Features
Improvements
Documentation
- Updated `CHANGELOG.md` to reflect recent improvements and new features.
Tests