
feat(store): Parallel write in the CacheMultiStore #20817

Open
wants to merge 13 commits into base: main

Conversation

cool-develope
Contributor

@cool-develope cool-develope commented Jun 28, 2024

Description

Closes: #20787


Author Checklist

All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow up issues.

I have...

  • included the correct type prefix in the PR title, you can find examples of the prefixes below:
  • confirmed ! in the type prefix if API or client breaking change
  • targeted the correct branch (see PR Targeting)
  • provided a link to the relevant issue or specification
  • reviewed "Files changed" and left comments if necessary
  • included the necessary unit and integration tests
  • added a changelog entry to CHANGELOG.md
  • updated the relevant documentation or specification, including comments for documenting Go code
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.

Please see Pull Request Reviewer section in the contributing guide for more information on how to review a pull request.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic, API design and naming, documentation is accurate, tests and test coverage

Summary by CodeRabbit

  • New Features

    • Introduced benchmarking tests for cache store performance.
  • Improvements

    • Parallelized the cache store write operations for enhanced performance.
  • Documentation

    • Updated CHANGELOG.md to reflect recent improvements and new features.
  • Tests

    • Added tests for parallel writes in the cache store to ensure data consistency and integrity.

@cool-develope cool-develope requested a review from a team as a code owner June 28, 2024 16:39
Contributor

coderabbitai bot commented Jun 28, 2024

📝 Walkthrough

The recent changes introduce parallel writes in the CacheMultiStore.Write method by utilizing goroutines and a synchronization mechanism with errgroup. This enhancement significantly improves performance during block execution and syncing. Additionally, new test and benchmark functions were added to validate both the correctness and performance of the parallel write operations.

Changes

File Change Summary
store/rootmulti/store_test.go Added TestCacheMultiStoreWrite to validate parallel writes in cacheMultiStore.
store/cachemulti/store.go Modified Write method to concurrently execute writes using errgroup.Group.
store/cachemulti/benchmark_test.go Introduced benchmark tests to evaluate performance of parallel write operations in cacheMultiStore.
store/go.mod Included golang.org/x/sync module to support parallelism in writes.
store/CHANGELOG.md Updated to reflect the parallelization of CacheMultiStore.Write and the IAVL upgrade to version 1.1.1.
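For readers unfamiliar with the pattern, the parallelized Write has roughly this shape. This is a minimal, dependency-free sketch: the real PR uses golang.org/x/sync/errgroup and the SDK's types.CacheWrap stores, while this example substitutes a plain sync.WaitGroup and a toy in-memory store.

```go
package main

import (
	"fmt"
	"sync"
)

// CacheWrap is a stand-in for the SDK's types.CacheWrap; only Write matters here.
type CacheWrap interface {
	Write()
}

// memStore is a toy cache store: Set buffers into dirty, Write flushes to data.
type memStore struct {
	data  map[string]string
	dirty map[string]string
}

func (s *memStore) Set(k, v string) { s.dirty[k] = v }

// Write flushes the dirty set. Each store is flushed by exactly one
// goroutine, so no locking is needed inside the store itself.
func (s *memStore) Write() {
	for k, v := range s.dirty {
		s.data[k] = v
	}
	s.dirty = map[string]string{}
}

// writeParallel flushes every store in its own goroutine and waits for all
// of them, mirroring the shape of the PR's CacheMultiStore.Write.
func writeParallel(stores map[string]CacheWrap) {
	var wg sync.WaitGroup
	for _, s := range stores {
		wg.Add(1)
		go func(s CacheWrap) {
			defer wg.Done()
			s.Write()
		}(s)
	}
	wg.Wait()
}

func main() {
	stores := map[string]CacheWrap{}
	backing := map[string]*memStore{}
	for _, name := range []string{"bank", "staking", "gov"} {
		ms := &memStore{data: map[string]string{}, dirty: map[string]string{}}
		ms.Set("key", "value-"+name)
		stores[name] = ms
		backing[name] = ms
	}
	writeParallel(stores)
	fmt.Println(backing["bank"].data["key"]) // value-bank
}
```

The real implementation swaps the WaitGroup for an errgroup.Group so that an error (or recovered panic) from any store's write can be surfaced after Wait.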

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant User
    participant TestCacheMultiStoreWrite
    participant CacheMultiStore
    participant Store
    participant goroutine
    User->>+TestCacheMultiStoreWrite: Run Test
    TestCacheMultiStoreWrite->>+CacheMultiStore: Setup Stores
    CacheMultiStore->>+Store: Write
    Store->>+goroutine: Write
    goroutine->>Store: Complete Write
    goroutine->>+CacheMultiStore: Return result
    CacheMultiStore->>+TestCacheMultiStoreWrite: Verify results
    TestCacheMultiStoreWrite->>User: Test Result

Assessment against linked issues

Objective (Issue Number) Addressed Explanation
Parallelize CacheMultiStore.Write method (#20787)

This assessment confirms that the code changes effectively meet the objectives outlined in the linked issues, particularly addressing the need for parallel execution to enhance performance during writes.




 for _, store := range cms.stores {
-	store.Write()
+	go func(s types.CacheWrap) {
Collaborator


Most of the time, the dirty writes are small or even empty; does that justify the overhead of a goroutine?

Contributor Author


IMO it would be helpful even if there were only two meaningful writes (two stores), since WorkingHash requires rebuilding the IAVL tree and computing the root hash (one of the bottlenecks).

Member


We should get some benchmarks.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Outside diff range and nitpick comments (1)
store/CHANGELOG.md (1)

Line range hint 65-65: Consider adding a missing comma for clarity.

There seems to be a missing comma after "go.mod file" which can improve readability.

- which allows it be a standalone module
+ which allows it to be a standalone module,
Review details

Configuration used: .coderabbit.yml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between d426a5d and b8d3211.

Files selected for processing (3)
  • store/CHANGELOG.md (1 hunks)
  • store/cachemulti/store.go (2 hunks)
  • store/rootmulti/store_test.go (1 hunks)
Additional context used
Path-based instructions (3)
store/CHANGELOG.md (1)

Pattern **/*.md: "Assess the documentation for misspellings, grammatical errors, missing documentation and correctness"

store/cachemulti/store.go (1)

Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

store/rootmulti/store_test.go (2)

Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.


Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

LanguageTool
store/CHANGELOG.md

[uncategorized] ~62-~62: You might be missing the article “a” here.
Context: ...b.com//pull/14645) Add limit to the length of key and value. * [#156...

(AI_EN_LECTOR_MISSING_DETERMINER_A)


[uncategorized] ~65-~65: Possible missing comma found.
Context: ... is extracted to have a separate go.mod file which allows it be a standalone module....

(AI_HYDRA_LEO_MISSING_COMMA)


[grammar] ~65-~65: The preposition ‘to’ may be missing (allow someone to do something).
Context: ... a separate go.mod file which allows it be a standalone module. * [#14410](https:/...

(ALLOW_TO_DO)


[uncategorized] ~66-~66: You might be missing the article “an” here.
Context: ..., it will error if any module store has incorrect height. ### Improvements * [#17158](h...

(AI_EN_LECTOR_MISSING_DETERMINER_AN)

Markdownlint
store/CHANGELOG.md

58-58: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)


59-59: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)


60-60: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)


44-44: Expected: 0 or 2; Actual: 1
Trailing spaces

(MD009, no-trailing-spaces)

Additional comments not posted (3)
store/CHANGELOG.md (1)

37-37: Changelog entry is clear and well-formatted.

The entry for PR #20817 is correctly placed under the "Improvements" section and provides a direct link to the PR, which helps in traceability.

store/cachemulti/store.go (1)

126-134: Ensure proper handling of goroutines for parallel writes.

The implementation uses goroutines correctly with a sync.WaitGroup to ensure all writes complete before proceeding. However, ensure that the store.Write() method is safe to call concurrently on different store instances.

store/rootmulti/store_test.go (1)

1036-1071: Comprehensive test for parallel writes in CacheMultiStore.

The test effectively sets up a scenario to validate the parallel writing capability by setting keys in multiple stores and committing them. It checks data consistency after writes, which is crucial for ensuring that the parallel write feature does not introduce errors.

store/cachemulti/store.go Fixed
@@ -116,7 +116,7 @@ func TestCacheMultiStoreWithVersion(t *testing.T) {
 	// require we cannot commit (write) to a cache-versioned multi-store
 	require.Panics(t, func() {
 		kvStore.Set(k, []byte("newValue"))
-		cms.Write()
+		kvStore.(types.CacheWrap).Write()
Contributor Author


Hmm, I realized recovering panics from multiple goroutines is tricky. @alpe, do you have any idea?
The update above is the same check as the original one, but I'm curious.

Contributor


I assume you want to test that cms.Write() panics. Your workaround with kvStore.(types.CacheWrap).Write() is pragmatic, but with cms.Write() you hand control away from the main thread, so the caller cannot handle panics, which is the bigger problem.
Another issue is deterministic behaviour when more than one store panics.
You could recover in the goroutines, then collect and sort the errors. Maybe it is worth refactoring to error return values instead of panics.
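The recover-collect-sort idea can be sketched as follows. This is a hypothetical standalone example of the suggested approach, not code from the PR; the function name writeAllRecovering and the map-of-closures signature are invented for illustration.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// writeAllRecovering runs each store's write in its own goroutine, converts
// panics into errors, and returns them sorted by store name so the caller
// sees a deterministic result even when several stores fail concurrently.
func writeAllRecovering(writes map[string]func()) []error {
	type failure struct {
		name string
		err  error
	}
	var (
		mu       sync.Mutex
		wg       sync.WaitGroup
		failures []failure
	)
	for name, write := range writes {
		wg.Add(1)
		go func(name string, write func()) {
			defer wg.Done()
			defer func() {
				if r := recover(); r != nil {
					mu.Lock()
					failures = append(failures, failure{name, fmt.Errorf("store %s: write panicked: %v", name, r)})
					mu.Unlock()
				}
			}()
			write()
		}(name, write)
	}
	wg.Wait()
	// Sort by store name so multiple failures are reported deterministically.
	sort.Slice(failures, func(i, j int) bool { return failures[i].name < failures[j].name })
	errs := make([]error, 0, len(failures))
	for _, f := range failures {
		errs = append(errs, f.err)
	}
	return errs
}

func main() {
	errs := writeAllRecovering(map[string]func(){
		"bank":    func() { panic("read-only store") },
		"staking": func() {},
		"gov":     func() { panic("read-only store") },
	})
	// The bank error always precedes the gov error, regardless of scheduling.
	for _, err := range errs {
		fmt.Println(err)
	}
}
```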

Contributor Author


good point!

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Review details

Configuration used: .coderabbit.yml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between b8d3211 and 0e5be9f.

Files selected for processing (1)
  • store/rootmulti/store_test.go (2 hunks)
Files skipped from review as they are similar to previous changes (1)
  • store/rootmulti/store_test.go

ValarDragon added a commit to osmosis-labs/cosmos-sdk that referenced this pull request Jul 3, 2024
@cool-develope
Contributor Author

benchmark results


goos: linux
goarch: amd64
pkg: cosmossdk.io/store/cachemulti
cpu: Intel(R) Core(TM) i5-9400 CPU @ 2.90GHz
                                               │   old.txt    │               new.txt                │
                                               │    sec/op    │    sec/op     vs base                │
CacheMultiStore/storeCount=2/keyCount=100-6       246.9µ ± 7%   152.8µ ±  1%  -38.12% (p=0.000 n=10)
CacheMultiStore/storeCount=2/keyCount=1000-6      2.669m ± 1%   1.655m ±  1%  -37.99% (p=0.000 n=10)
CacheMultiStore/storeCount=2/keyCount=10000-6     36.37m ± 1%   22.02m ±  2%  -39.45% (p=0.000 n=10)
CacheMultiStore/storeCount=4/keyCount=100-6       419.2µ ± 2%   244.7µ ±  1%  -41.61% (p=0.000 n=10)
CacheMultiStore/storeCount=4/keyCount=1000-6      5.582m ± 1%   2.928m ±  1%  -47.55% (p=0.000 n=10)
CacheMultiStore/storeCount=4/keyCount=10000-6     75.53m ± 2%   33.72m ±  1%  -55.35% (p=0.000 n=10)
CacheMultiStore/storeCount=8/keyCount=100-6       849.9µ ± 2%   393.8µ ±  3%  -53.67% (p=0.000 n=10)
CacheMultiStore/storeCount=8/keyCount=1000-6     11.605m ± 1%   5.332m ±  4%  -54.05% (p=0.000 n=10)
CacheMultiStore/storeCount=8/keyCount=10000-6    159.24m ± 2%   72.41m ±  2%  -54.52% (p=0.000 n=10)
CacheMultiStore/storeCount=16/keyCount=100-6     1766.7µ ± 6%   818.7µ ±  5%  -53.66% (p=0.000 n=10)
CacheMultiStore/storeCount=16/keyCount=1000-6     23.45m ± 1%   10.91m ±  4%  -53.45% (p=0.000 n=10)
CacheMultiStore/storeCount=16/keyCount=10000-6    334.7m ± 2%   149.4m ±  8%  -55.35% (p=0.000 n=10)
CacheMultiStore/storeCount=32/keyCount=100-6      3.606m ± 2%   1.562m ±  7%  -56.68% (p=0.000 n=10)
CacheMultiStore/storeCount=32/keyCount=1000-6     47.39m ± 0%   20.29m ±  6%  -57.18% (p=0.000 n=10)
CacheMultiStore/storeCount=32/keyCount=10000-6    695.4m ± 1%   314.9m ± 16%  -54.72% (p=0.000 n=10)
geomean                                           11.73m        5.786m        -50.68%

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Review details

Configuration used: .coderabbit.yml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 0e5be9f and d1970ba.

Files selected for processing (1)
  • store/cachemulti/benchmark_test.go (1 hunks)
Additional context used
Path-based instructions (1)
store/cachemulti/benchmark_test.go (2)

Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.


Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

Additional comments not posted (1)
store/cachemulti/benchmark_test.go (1)

51-61: LGTM!

The benchmark function follows best practices and is well-structured.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Outside diff range, codebase verification and nitpick comments (2)
store/cachemulti/store.go (1)

126-140: Review the parallel execution logic in the Write method.

The implementation uses errgroup.Group for managing parallel writes, which is a robust choice. However, ensure that the error handling and panic recovery mechanisms are correctly implemented to prevent data corruption and ensure stability.

Additionally, consider logging the error before panicking to aid in debugging in production environments.

- panic(err)
+ log.Errorf("error during parallel write: %v", err)
+ panic(err)
store/rootmulti/store_test.go (1)

1036-1071: Assess the new test function for parallel writes.

The test function TestCacheMultiStoreWrite is well-structured and covers the functionality of parallel writes extensively. It checks for data consistency after writes, which is crucial for validating the correctness of the implementation.

Consider adding more edge cases, such as concurrent writes to the same keys, to further ensure robustness.

Review details

Configuration used: .coderabbit.yml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between d1970ba and f9baf60.

Files selected for processing (4)
  • store/cachemulti/benchmark_test.go (1 hunks)
  • store/cachemulti/store.go (2 hunks)
  • store/go.mod (1 hunks)
  • store/rootmulti/store_test.go (1 hunks)
Additional context used
Path-based instructions (3)
store/cachemulti/benchmark_test.go (2)

Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.


Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

store/cachemulti/store.go (1)

Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

store/rootmulti/store_test.go (2)

Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.


Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

Additional comments not posted (1)
store/go.mod (1)

25-25: Validate the addition of the golang.org/x/sync module.

The addition of golang.org/x/sync at version v0.7.0 is crucial for the parallel write feature introduced in this PR. Ensure that this version is compatible with other dependencies and does not introduce any conflicts.

Comment on lines 15 to 30
func setupStore(b *testing.B, storeCount uint) (Store, map[string]types.StoreKey) {
	b.Helper()

	db := dbm.NewMemDB()
	storeKeys := make(map[string]types.StoreKey)
	stores := make(map[types.StoreKey]types.CacheWrapper)
	for i := uint(0); i < storeCount; i++ {
		key := types.NewKVStoreKey(fmt.Sprintf("store%d", i))
		storeKeys[key.Name()] = key
		sdb := dbm.NewPrefixDB(db, []byte(key.Name()))
		istore, err := iavl.LoadStore(sdb, log.NewNopLogger(), key, types.CommitID{}, 1000, false, nil)
		require.NoError(b, err)
		stores[key] = types.KVStore(istore)
	}

	return NewStore(db, stores, storeKeys, nil, types.TraceContext{}), storeKeys
Contributor


Consider preallocating maps for performance optimization.

Given the known size of storeCount, preallocating the storeKeys and stores maps can improve performance by reducing the need for dynamic memory allocation.

- storeKeys := make(map[string]types.StoreKey)
- stores := make(map[types.StoreKey]types.CacheWrapper)
+ storeKeys := make(map[string]types.StoreKey, storeCount)
+ stores := make(map[types.StoreKey]types.CacheWrapper, storeCount)
func setupStore(b *testing.B, storeCount uint) (Store, map[string]types.StoreKey) {
	b.Helper()

	db := dbm.NewMemDB()
Member


Testing against an LSM DB would be more illustrative of the performance improvements; let's try leveldb and pebbledb?

Contributor Author

@cool-develope cool-develope Jul 16, 2024


It is not about I/O; it is mostly for tree updates in memory, in the WorkingHash lifecycle.

Contributor

@alpe alpe left a comment


Nice updates.

		return nil
	})
}
if err := eg.Wait(); err != nil {
Contributor


Not sure if the panic is handled somewhere else. In any case, this is not deterministic for multiple errors.

In the godoc for Wait:

returns the first non-nil error (if any) from them.

Contributor Author


I think we don't need to care about multiple errors; the Write is an in-memory action, and it won't affect the finalized state.


for _, storeCount := range storeCounts {
	for _, keyCount := range keyCounts {
		b.Run(fmt.Sprintf("storeCount=%d/keyCount=%d", storeCount, keyCount), func(sub *testing.B) {
Contributor


One argument was that there will be a number of untouched stores, where the goroutines may add significant overhead. It would be good to add that to your test for a fair comparison.

Contributor Author


I think that is covered by storeCount: for example, if there are 2 significant writes among 10 stores, the results would still show about a 37% saving, right?

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Outside diff range, codebase verification and nitpick comments (3)
store/rootmulti/store_test.go (3)

1036-1039: Ensure proper cleanup of resources.

It's a good practice to close the database connection after the test to avoid resource leaks.

db, err := dbm.NewDB("test", dbm.GoLevelDBBackend, t.TempDir())
require.NoError(t, err)
+ defer db.Close()

1041-1042: Add a comment to explain the purpose of pruningtypes.PruningNothing.

Adding a comment will help future maintainers understand why this specific pruning option is used.

ms := newMultiStoreWithMounts(db, pruningtypes.NewPruningOptions(pruningtypes.PruningNothing))
+ // Using PruningNothing to ensure all versions are kept for the test
require.NoError(t, ms.LoadLatestVersion())

1046-1048: Consider making toVersion and keyCount constants.

Defining these values as constants improves readability and maintainability.

+ const (
+     toVersion = 100
+     keyCount = 100
+ )
toVersion := int64(100)
keyCount := 100
storeKeys := []types.StoreKey{testStoreKey1, testStoreKey2, testStoreKey3}
Review details

Configuration used: .coderabbit.yml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between f9baf60 and f19c252.

Files selected for processing (2)
  • store/go.mod (1 hunks)
  • store/rootmulti/store_test.go (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • store/go.mod
Additional context used
Path-based instructions (1)
store/rootmulti/store_test.go (2)

Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.


Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

Comment on lines +1060 to +1070
// check the data
for _, storeKey := range storeKeys {
	store := cacheMulti.GetKVStore(storeKey)
	for i := int64(1); i <= toVersion; i++ {
		for j := 0; j < keyCount; j++ {
			key := []byte(fmt.Sprintf("key-%d-%d", i, j))
			value := store.Get(key)
			require.Equal(t, []byte(fmt.Sprintf("value-%d-%d", i, j)), value)
		}
	}
}
Contributor


Enhance readability by extracting the verification loop into a helper function.

This will make the test function more concise and easier to understand.

+ func verifyStoreData(t *testing.T, cacheMulti types.CacheMultiStore, storeKeys []types.StoreKey, toVersion int64, keyCount int) {
+     for _, storeKey := range storeKeys {
+         store := cacheMulti.GetKVStore(storeKey)
+         for i := int64(1); i <= toVersion; i++ {
+             for j := 0; j < keyCount; j++ {
+                 key := []byte(fmt.Sprintf("key-%d-%d", i, j))
+                 value := store.Get(key)
+                 require.Equal(t, []byte(fmt.Sprintf("value-%d-%d", i, j)), value)
+             }
+         }
+     }
+ }

for _, storeKey := range storeKeys {
    store := cacheMulti.GetKVStore(storeKey)
    for i := int64(1); i <= toVersion; i++ {
        for j := 0; j < keyCount; j++ {
            key := []byte(fmt.Sprintf("key-%d-%d", i, j))
            value := store.Get(key)
            require.Equal(t, []byte(fmt.Sprintf("value-%d-%d", i, j)), value)
        }
    }
}

Comment on lines +1049 to +1058
for i := int64(1); i <= toVersion; i++ {
	for _, storeKey := range storeKeys {
		store := cacheMulti.GetKVStore(storeKey)
		for j := 0; j < keyCount; j++ {
			store.Set([]byte(fmt.Sprintf("key-%d-%d", i, j)), []byte(fmt.Sprintf("value-%d-%d", i, j)))
		}
	}
	cacheMulti.Write()
	ms.Commit()
}
Contributor


Enhance readability by extracting the nested loops into a helper function.

This will make the test function more concise and easier to understand.

+ func populateStore(t *testing.T, cacheMulti types.CacheMultiStore, storeKeys []types.StoreKey, toVersion int64, keyCount int) {
+     for i := int64(1); i <= toVersion; i++ {
+         for _, storeKey := range storeKeys {
+             store := cacheMulti.GetKVStore(storeKey)
+             for j := 0; j < keyCount; j++ {
+                 store.Set([]byte(fmt.Sprintf("key-%d-%d", i, j)), []byte(fmt.Sprintf("value-%d-%d", i, j)))
+             }
+         }
+         cacheMulti.Write()
+         ms.Commit()
+     }
+ }

for i := int64(1); i <= toVersion; i++ {
    for _, storeKey := range storeKeys {
        store := cacheMulti.GetKVStore(storeKey)
        for j := 0; j < keyCount; j++ {
            store.Set([]byte(fmt.Sprintf("key-%d-%d", i, j)), []byte(fmt.Sprintf("value-%d-%d", i, j)))
        }
    }
    cacheMulti.Write()
    ms.Commit()
}

Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

Review details

Configuration used: .coderabbit.yml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between f19c252 and a8908fe.

Files selected for processing (1)
  • store/cachemulti/benchmark_test.go (1 hunks)
Additional context used
Path-based instructions (1)
store/cachemulti/benchmark_test.go (2)

Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.


Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

Additional comments not posted (1)
store/cachemulti/benchmark_test.go (1)

54-65: LGTM!

The BenchmarkCacheMultiStore function is well-structured and effectively tests different configurations.

Comment on lines +19 to +20
storeKeys := make(map[string]types.StoreKey)
stores := make(map[types.StoreKey]types.CacheWrapper)
Preallocate maps to improve performance.

Preallocating the storeKeys and stores maps with the known size storeCount can improve performance.

-  storeKeys := make(map[string]types.StoreKey)
-  stores := make(map[types.StoreKey]types.CacheWrapper)
+  storeKeys := make(map[string]types.StoreKey, storeCount)
+  stores := make(map[types.StoreKey]types.CacheWrapper, storeCount)

Comment on lines 39 to 50
for i := 0; i < b.N; i++ {
b.StopTimer()
for _, key := range storeKeys {
cstore := store.GetKVStore(key)
for j := uint(0); j < keyCount; j++ {
dataKey := fmt.Sprintf("key%s-%d", key.Name(), j)
dataValue := fmt.Sprintf("value%s-%d", key.Name(), j)
cstore.Set([]byte(dataKey), []byte(dataValue))
}
}
b.StartTimer()
store.Write()
Improve readability and performance of timer management in benchmarks.

Moving the StopTimer and StartTimer calls outside the inner loop can help in accurately measuring the time taken for store.Write() without the overhead of setup operations.

- for i := 0; i < b.N; i++ {
-   b.StopTimer()
-   for _, key := range storeKeys {
-     cstore := store.GetKVStore(key)
-     for j := uint(0); j < keyCount; j++ {
-       dataKey := fmt.Sprintf("key%s-%d", key.Name(), j)
-       dataValue := fmt.Sprintf("value%s-%d", key.Name(), j)
-       cstore.Set([]byte(dataKey), []byte(dataValue))
-     }
-   }
-   b.StartTimer()
-   store.Write()
- }
+ for i := 0; i < b.N; i++ {
+   for _, key := range storeKeys {
+     cstore := store.GetKVStore(key)
+     b.StopTimer()
+     for j := uint(0); j < keyCount; j++ {
+       dataKey := fmt.Sprintf("key%s-%d", key.Name(), j)
+       dataValue := fmt.Sprintf("value%s-%d", key.Name(), j)
+       cstore.Set([]byte(dataKey), []byte(dataValue))
+     }
+     b.StartTimer()
+   }
+   store.Write()
+ }


github-actions bot commented Sep 8, 2024

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the Stale label Sep 8, 2024
@julienrbrt julienrbrt removed the Stale label Sep 9, 2024
@julienrbrt

@kocubinski are you able to review this?

@@ -122,8 +123,21 @@ func (cms Store) GetStoreType() types.StoreType {
// Write calls Write on each underlying store.
func (cms Store) Write() {
cms.db.Write()
eg := new(errgroup.Group)
for _, store := range cms.stores {
Member
should there be a concurrency limit? or is it good enough to delegate this work completely to the go scheduler?

Member
we should do some benchmarks with databases and come up with a optimal number of runners

Contributor Author
as we can see in #20817 (comment), beyond 8 stores there is not much improvement; let me benchmark with a limited number of runners

@cool-develope
Copy link
Contributor Author

There is not much difference between 4 and 8; 4 is preferred considering the other processes.

goos: linux
goarch: amd64
pkg: cosmossdk.io/store/cachemulti
cpu: Intel(R) Core(TM) i5-9400 CPU @ 2.90GHz
BenchmarkCacheMultiStore/storeCount=2/runnerCount=1/keyCount=100/-6                 3907            260294 ns/op          275734 B/op           2363 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=2/keyCount=100/-6                 6368            200892 ns/op          275725 B/op           2363 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=4/keyCount=100/-6                 5971            211997 ns/op          275722 B/op           2363 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=8/keyCount=100/-6                 5683            214146 ns/op          275721 B/op           2363 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=16/keyCount=100/-6                5072            208375 ns/op          275723 B/op           2363 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=1/keyCount=1000/-6                 319           3619214 ns/op         3789254 B/op          29996 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=2/keyCount=1000/-6                 518           2343257 ns/op         3788152 B/op          29986 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=4/keyCount=1000/-6                 435           2639528 ns/op         3788504 B/op          29989 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=8/keyCount=1000/-6                 339           2956784 ns/op         3789113 B/op          29994 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=16/keyCount=1000/-6                421           3412921 ns/op         3788566 B/op          29990 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=1/keyCount=10000/-6                 24          49932923 ns/op        49056755 B/op         370604 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=2/keyCount=10000/-6                 32          34134558 ns/op        48965104 B/op         369766 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=4/keyCount=10000/-6                 31          35170693 ns/op        48974155 B/op         369848 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=8/keyCount=10000/-6                 32          37891220 ns/op        48965171 B/op         369767 allocs/op
BenchmarkCacheMultiStore/storeCount=2/runnerCount=16/keyCount=10000/-6                28          41114230 ns/op        49004366 B/op         370125 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=1/keyCount=100/-6                 1785            706657 ns/op          551236 B/op           4721 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=2/keyCount=100/-6                 2250            600603 ns/op          551229 B/op           4721 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=4/keyCount=100/-6                 3740            408612 ns/op          551193 B/op           4721 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=8/keyCount=100/-6                 3774            361486 ns/op          551196 B/op           4721 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=16/keyCount=100/-6                3253            402494 ns/op          551195 B/op           4721 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=1/keyCount=1000/-6                 150           7818434 ns/op         7584749 B/op          60043 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=2/keyCount=1000/-6                 210           5981785 ns/op         7581280 B/op          60013 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=4/keyCount=1000/-6                 213           5487895 ns/op         7581188 B/op          60012 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=8/keyCount=1000/-6                 205           7046538 ns/op         7581481 B/op          60015 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=16/keyCount=1000/-6                165           6902530 ns/op         7583643 B/op          60034 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=1/keyCount=10000/-6                 10         111889885 ns/op        99141616 B/op         750589 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=2/keyCount=10000/-6                 16          86675022 ns/op        98481313 B/op         744557 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=4/keyCount=10000/-6                 14          77748328 ns/op        98638515 B/op         745995 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=8/keyCount=10000/-6                 13          80813647 ns/op        98736060 B/op         746882 allocs/op
BenchmarkCacheMultiStore/storeCount=4/runnerCount=16/keyCount=10000/-6                16          88198167 ns/op        98480820 B/op         744557 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=1/keyCount=100/-6                  672           1613443 ns/op         1102485 B/op           9441 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=2/keyCount=100/-6                 1702           1028892 ns/op         1102237 B/op           9439 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=4/keyCount=100/-6                 2311            498774 ns/op         1102175 B/op           9438 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=8/keyCount=100/-6                 2618            466472 ns/op         1102167 B/op           9438 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=16/keyCount=100/-6                2548            464280 ns/op         1102159 B/op           9438 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=1/keyCount=1000/-6                 102          11649624 ns/op        15180825 B/op         120183 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=2/keyCount=1000/-6                 174           6810251 ns/op        15166188 B/op         120054 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=4/keyCount=1000/-6                 235           5245776 ns/op        15160433 B/op         120006 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=8/keyCount=1000/-6                 231           5238806 ns/op        15160700 B/op         120008 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=16/keyCount=1000/-6                238           5171895 ns/op        15160258 B/op         120004 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=1/keyCount=10000/-6                  7         156892638 ns/op        199794545 B/op       1514976 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=2/keyCount=10000/-6                 12         100983178 ns/op        197695898 B/op       1495816 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=4/keyCount=10000/-6                 15          75007304 ns/op        197108264 B/op       1490452 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=8/keyCount=10000/-6                 15          73913889 ns/op        197108512 B/op       1490451 allocs/op
BenchmarkCacheMultiStore/storeCount=8/runnerCount=16/keyCount=10000/-6                14          74020468 ns/op        197275265 B/op       1491979 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=1/keyCount=100/-6                 612           1917963 ns/op         2204809 B/op          18879 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=2/keyCount=100/-6                 956           1238215 ns/op         2204478 B/op          18875 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=4/keyCount=100/-6                1413            860827 ns/op         2204265 B/op          18873 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=8/keyCount=100/-6                1508            811201 ns/op         2204245 B/op          18873 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=16/keyCount=100/-6               1488            791038 ns/op         2204251 B/op          18873 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=1/keyCount=1000/-6                 49          25373938 ns/op        30439657 B/op         241042 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=2/keyCount=1000/-6                 70          15260570 ns/op        30394551 B/op         240650 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=4/keyCount=1000/-6                109          11161744 ns/op        30356934 B/op         240322 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=8/keyCount=1000/-6                111          10605896 ns/op        30355592 B/op         240311 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=16/keyCount=1000/-6               111          10624984 ns/op        30355860 B/op         240313 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=1/keyCount=10000/-6                 3         354666802 ns/op        413028280 B/op       3152588 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=2/keyCount=10000/-6                 5         207810651 ns/op        403618193 B/op       3066736 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=4/keyCount=10000/-6                 7         157350286 ns/op        399591182 B/op       3029943 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=8/keyCount=10000/-6                 7         150396138 ns/op        399592122 B/op       3029962 allocs/op
BenchmarkCacheMultiStore/storeCount=16/runnerCount=16/keyCount=10000/-6                7         153334404 ns/op        399594602 B/op       3029972 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=1/keyCount=100/-6                 298           3788698 ns/op         4411517 B/op          37774 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=2/keyCount=100/-6                 508           2344264 ns/op         4409827 B/op          37757 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=4/keyCount=100/-6                 706           1640998 ns/op         4409155 B/op          37750 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=8/keyCount=100/-6                 804           1448047 ns/op         4408892 B/op          37748 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=16/keyCount=100/-6                787           1502624 ns/op         4408933 B/op          37749 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=1/keyCount=1000/-6                 24          48500490 ns/op        61192576 B/op         484807 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=2/keyCount=1000/-6                 39          29487427 ns/op        60956405 B/op         482752 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=4/keyCount=1000/-6                 49          21325659 ns/op        60880274 B/op         482082 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=8/keyCount=1000/-6                 58          21538274 ns/op        60832772 B/op         481676 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=16/keyCount=1000/-6                56          21291439 ns/op        60842563 B/op         481755 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=1/keyCount=10000/-6                 2         746240124 ns/op        849548928 B/op       6519762 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=2/keyCount=10000/-6                 3         423927748 ns/op        826055832 B/op       6305156 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=4/keyCount=10000/-6                 3         334825468 ns/op        826041549 B/op       6305151 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=8/keyCount=10000/-6                 4         313121734 ns/op        814298464 B/op       6197895 allocs/op
BenchmarkCacheMultiStore/storeCount=32/runnerCount=16/keyCount=10000/-6                3         348477385 ns/op        826048968 B/op       6305179 allocs/op

Comment on lines +142 to +157
for storeKey, store := range cms.stores {
wg.Add(1)
sem <- struct{}{}

go func() {
defer func() {
wg.Done()
<-sem // Release the slot

if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in Write for store %s: %v", storeKey.Name(), r)
}
}()
store.Write()
}()
}

Check warning (Code scanning / CodeQL): Iteration over map may be a possible source of non-determinism
Comment on lines +146 to +156
go func() {
defer func() {
wg.Done()
<-sem // Release the slot

if r := recover(); r != nil {
errChan <- fmt.Errorf("panic in Write for store %s: %v", storeKey.Name(), r)
}
}()
store.Write()
}()

Check notice (Code scanning / CodeQL): Spawning a Go routine may be a possible source of non-determinism
Comment on lines +159 to +162
go func() {
wg.Wait()
close(errChan)
}()

Check notice (Code scanning / CodeQL): Spawning a Go routine may be a possible source of non-determinism
Successfully merging this pull request may close these issues.

[Feature]: Run cachemulti.store.write in parallel
8 participants