fix: reduce SetAddrs shards lock contention #2292


Conversation

AlexanderYastrebov (Contributor)

Introduces a new lock to make `SetAddrs` calls exclusive.
This allows releasing the shards lock for the duration of the potentially long `newRingShards` call.

`TestRingSetAddrsContention` observes the number of pings increase from under 1,000 to roughly 40,000.

See #2190 (comment)

Updates #2077
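
In outline, the pattern looks roughly like this (a sketch based on the PR description and the diff below; the helper names `newRingShards` and `cleanup` and the exact field layout are assumptions, not necessarily the merged code):

```go
// Sketch of the locking structure (types simplified).
type ringSharding struct {
	mu     sync.RWMutex // guards shards and closed
	shards map[string]*redis.Client
	closed bool

	// setAddrsMu makes SetAddrs calls mutually exclusive, so that
	// mu can be released while new shards are being created.
	setAddrsMu sync.Mutex
}

func (c *ringSharding) SetAddrs(addrs map[string]string) {
	c.setAddrsMu.Lock()
	defer c.setAddrsMu.Unlock()

	// Potentially long: creates clients for the new address set.
	// Note that mu is NOT held here, so concurrent commands can
	// still read the current shards.
	shards, created, unused := c.newRingShards(addrs)

	c.mu.Lock()
	if c.closed {
		cleanup(created) // ring closed meanwhile; drop what we created
		c.mu.Unlock()
		return
	}
	c.shards = shards
	c.mu.Unlock()

	cleanup(unused) // close shards that are no longer referenced
}
```

Making `SetAddrs` callers mutually exclusive is what makes dropping `mu` during shard creation safe: no concurrent `SetAddrs` can swap `c.shards` in between.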

AlexanderYastrebov (Contributor, Author)

FYI @szuecs

ring.go (outdated)

```diff
@@ -213,6 +213,7 @@ func (shard *ringShard) Vote(up bool) bool {
 type ringSharding struct {
 	opt *RingOptions
 
+	addrsMu sync.Mutex
```
Collaborator

Please try to make it clear what this mutex protects; e.g. I would expect something like:

```go
addrsMu sync.Mutex
addrs   someType

// another group
```

Collaborator

After reading the code and the PR comment, I suggest renaming this to `setAddrsMu` and adding an explanation similar to the one in the PR description.
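
Combining both suggestions, the field and its comment might read something like this (wording illustrative, not the merged code):

```go
// setAddrsMu ensures SetAddrs calls are exclusive, so the shards
// lock does not have to be held for the duration of the potentially
// long shard creation in newRingShards.
setAddrsMu sync.Mutex
```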

vmihailenco (Collaborator)

> Introduces a new lock to make `SetAddrs` calls exclusive.
> This allows releasing the shards lock for the duration of the potentially long `newRingShards` call.

Neat trick 👍 but please try to add some comments to the code. Without your explanation it is hard to understand what is happening.

Introduces a new lock to make `SetAddrs` calls exclusive.
This allows releasing the shards lock for the duration of the potentially long `newRingShards` call.

`TestRingSetAddrsContention` observes the number of pings increase from under 1,000 to roughly 40,000.

See redis#2190 (comment)

Updates redis#2077
AlexanderYastrebov force-pushed the ring/reduce-set-addrs-shards-locking-2 branch from c117bb2 to 7c6f677 on November 22, 2022 at 22:58.
AlexanderYastrebov (Contributor, Author)

I've addressed the comments and added shard cleanup tests.
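
For context, here is a rough external sketch of the kind of contention `TestRingSetAddrsContention` measures (illustrative only, not the actual test code; assumes the v9 import path and a Redis server on `:6379`):

```go
package main

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	ring := redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{"shard1": ":6379"},
	})
	defer ring.Close()

	var pings int64
	done := make(chan struct{})
	finished := make(chan struct{})

	// Count how many pings complete while SetAddrs churns in the
	// background; a shards lock held across shard creation would
	// starve this goroutine.
	go func() {
		defer close(finished)
		for {
			select {
			case <-done:
				return
			default:
				if err := ring.Ping(ctx).Err(); err == nil {
					atomic.AddInt64(&pings, 1)
				}
			}
		}
	}()

	// Repeatedly grow and shrink the shard set.
	for i := 0; i < 50; i++ {
		ring.SetAddrs(map[string]string{"shard1": ":6379", "shard2": ":6380"})
		ring.SetAddrs(map[string]string{"shard1": ":6379"})
	}
	time.Sleep(100 * time.Millisecond)
	close(done)
	<-finished
	fmt.Println("pings completed:", atomic.LoadInt64(&pings))
}
```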

```go
c.mu.Lock()
if c.closed {
	cleanup(created)
	c.mu.Unlock()
	return
}
```
Collaborator

Just wondering: could we call `cleanup(created)` after the mutex is released?

AlexanderYastrebov (Contributor, Author)

I thought about it but decided to keep it like this: there would not be much difference, as the ring is closed anyway at this point. I can move the unlock for aesthetics, though.
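
For illustration, the variant with the unlock moved would look roughly like this (a sketch, not the merged code):

```go
c.mu.Lock()
if c.closed {
	c.mu.Unlock()
	// The ring is already closed, so running cleanup outside the
	// lock changes little in practice.
	cleanup(created)
	return
}
```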

vmihailenco (Collaborator)

Thanks

vmihailenco merged commit 4c46468 into redis:master on November 23, 2022.
AlexanderYastrebov deleted the ring/reduce-set-addrs-shards-locking-2 branch on November 23, 2022 at 10:07.