storage/concurrency: support idempotent lock acquisition #45277
Conversation
Reviewed 3 of 3 files at r1.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten)
pkg/storage/concurrency/lock_table.go, line 904 at r1 (raw file):
// the lock acquisition as long as it corresponds to an existing
// sequence number. The validity of such a lock re-acquisition
// should have already been determined at the MVCC level.
My understanding (mainly based on reading the comments for TxnMeta.Sequence and Transaction.InFlightWrites) is that requests for multiple sequence numbers can be concurrently issued. Is that correct, and if yes, is there something ensuring that these concurrent requests don't write or lock the same key? What I am trying to understand is whether seeing an older sequence number that is not in seqs should be handled by adding that sequence number, or by returning an error (as is done here).
pkg/storage/concurrency/lock_table.go, line 1743 at r1 (raw file):
// UUIDs using a counter and makes this output more readable.
var seqStr string
if txn.Sequence != 0 {
is this just to avoid changing all the existing tests, or is there something special about sequence 0?
This commit updates lockTableImpl to support idempotent acquisition of locks at previous sequence numbers. The storage layer expects this to work, as is tested in TestReplicaTxnIdempotency. The commit also fixes a bug where we were forgetting to add to the sequence slice when updating an existing lock in acquireLock.
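The idempotency rule under discussion can be sketched as follows. This is an illustrative model, not the real lockTableImpl API: a lock holder keeps an ascending slice of sequence numbers at which the lock was acquired; re-acquisition at a known sequence number is a no-op, acquisition at an older unknown sequence number is rejected, and a newer sequence number is appended (the append being the bug fix mentioned above).

```go
package main

import (
	"fmt"
	"sort"
)

// lockHolder is a hypothetical, simplified stand-in for the per-lock
// state in lockTableImpl.
type lockHolder struct {
	seqs []int // ascending sequence numbers at which the lock was acquired
}

// acquire records an acquisition at seq. Re-acquisition at a sequence
// number already in seqs is an idempotent no-op; an older, unseen
// sequence number is an error; a newer one is appended.
func (h *lockHolder) acquire(seq int) error {
	n := len(h.seqs)
	if n > 0 && seq < h.seqs[n-1] {
		// Older sequence number: only valid if it was seen before.
		i := sort.SearchInts(h.seqs, seq)
		if i < n && h.seqs[i] == seq {
			return nil // idempotent re-acquisition
		}
		return fmt.Errorf("cannot acquire at seq %d, lock already held at seq %d", seq, h.seqs[n-1])
	}
	if n > 0 && seq == h.seqs[n-1] {
		return nil // idempotent re-acquisition at the latest seq
	}
	h.seqs = append(h.seqs, seq) // remember every new acquisition seq
	return nil
}

func main() {
	var h lockHolder
	fmt.Println(h.acquire(1)) // first acquisition
	fmt.Println(h.acquire(3)) // newer seq, appended
	fmt.Println(h.acquire(1)) // idempotent re-acquisition
	fmt.Println(h.acquire(2)) // older, never seen: error
	fmt.Println(h.seqs)
}
```

Forgetting the append on re-acquisition (the fixed bug) would make a later replay of that sequence number look like an invalid older acquisition instead of an idempotent one.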
Force-pushed from d7d5602 to ade5011
Thanks for the review!
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @sumeerbhola)
pkg/storage/concurrency/lock_table.go, line 904 at r1 (raw file):
Requests for multiple sequence numbers can be concurrently issued, but they can't overlap without either a) being in the same batch, which implies increasing sequence number ordering during evaluation, or b) being sequenced in increasing sequence number order using QueryIntent requests. This second approach is powered by the txnPipeliner.
What I am trying to understand is whether seeing an older sequence number that is not in seqs should be handled by adding that sequence number, or returning an error (as is done here).
The QueryIntent request attached to the req with the higher sequence number will throw an error in this case, so we shouldn't even make it here. Additionally, if the higher sequence number was written then the lower sequence number would be rejected by MVCC.
pkg/storage/concurrency/lock_table.go, line 1743 at r1 (raw file):
Previously, sumeerbhola wrote…
is this just to avoid changing all the existing tests, or is there something special about sequence 0?
Yes, this is just to avoid changing tests that don't use sequence numbers. There's nothing special about sequence 0.
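The formatting choice can be sketched like this. The function name and output format are illustrative, not the real lock_table.go code: the sequence suffix is only emitted when nonzero, so older tests that never set a sequence number keep producing identical output.

```go
package main

import "fmt"

// fmtTxnSeq renders a transaction ID, appending the sequence number
// only when it is nonzero (hypothetical helper for illustration).
func fmtTxnSeq(txnID string, seq int) string {
	var seqStr string
	if seq != 0 {
		seqStr = fmt.Sprintf(" seq: %d", seq)
	}
	return fmt.Sprintf("txn: %s%s", txnID, seqStr)
}

func main() {
	fmt.Println(fmtTxnSeq("00000001", 0)) // txn: 00000001
	fmt.Println(fmtTxnSeq("00000001", 3)) // txn: 00000001 seq: 3
}
```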
Reviewed 2 of 2 files at r2.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @sumeerbhola)
bors r+
Build succeeded