Replayed pessimistic lock requests using `WakeUpModeForceLock` may lead to correctness issue #14311

Comments
A possible solution: We assume that for a transaction from TiDB, the number of keys locked in fair locking mode won't be too large (so that our memory usage is acceptable). Then, in client-go:

```protobuf
message ForUpdateTSConstraint {
    uint32 index = 1;
    uint64 expected_for_update_ts = 2;
}

message PrewriteRequest {
    // ...
    repeated ForUpdateTSConstraint for_update_ts_constraints = ...;
}
```

For each item in `for_update_ts_constraints`, the constraint is that the pessimistic lock on the key at position `index` in the request's mutations must have a `for_update_ts` equal to `expected_for_update_ts`; otherwise the prewrite of that key fails.
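As a rough sketch of how client-go might populate this field, assuming a hypothetical per-key record of the `for_update_ts` each key was locked at (none of these type or function names are the actual client-go API):

```go
package main

import "fmt"

// Illustrative mirror of the proposed protobuf message.
type ForUpdateTSConstraint struct {
	Index               uint32 // index of the key in the prewrite mutations
	ExpectedForUpdateTS uint64 // for_update_ts the pessimistic lock must carry
}

// buildConstraints emits one constraint per key that was locked in fair
// locking mode, so TiKV can verify during prewrite that the surviving lock
// is the one this transaction actually observed. `locked` is a hypothetical
// per-key record of the for_update_ts each key was locked at.
func buildConstraints(mutations [][]byte, locked map[string]uint64) []ForUpdateTSConstraint {
	var constraints []ForUpdateTSConstraint
	for i, key := range mutations {
		if ts, ok := locked[string(key)]; ok {
			constraints = append(constraints, ForUpdateTSConstraint{
				Index:               uint32(i),
				ExpectedForUpdateTS: ts,
			})
		}
	}
	return constraints
}

func main() {
	mutations := [][]byte{[]byte("k1"), []byte("k2")}
	locked := map[string]uint64{"k1": 10} // k1 was locked at for_update_ts = 10
	fmt.Println(buildConstraints(mutations, locked))
}
```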
> When could …

I think this can be allowed, since we can only say there is an unexpected write conflict (and thus a possible data consistency issue) when …
@MyonKeminta
Sorry, I didn't get it. It seems to me that if the second request performs the existence check or returns the value, and then the lock is lost and a stale request from step 1 succeeds, the value returned or checked by the second pessimistic lock request is still valid, since no write conflict occurs.
Consider this case: …

The pessimistic lock request in step 5 should fail on the normal path, as the …

If a pessimistic lock request has read the value or checked existence with …
Referenced commits and PRs:

- …sts of pessimistic transactions (#42843), ref tikv/tikv#14311, ref #42923
- ref #14311: Supports checking `for_update_ts` for specific keys during prewrite, to avoid potential lost update that might be caused by allowing locking with conflict. (Signed-off-by: MyonKeminta <MyonKeminta@users.noreply.github.com>; Co-authored-by: Ti Chi Robot <ti-community-prow-bot@tidb.io>)
Bug Report
When `WakeUpModeForceLock` is used, if a replayed pessimistic lock request (maybe caused by a network issue) arrives after a lock is missing (maybe caused by pipelined locking or in-memory pessimistic lock), it's possible to cause a correctness issue. The complete procedure to cause the problem is as follows (a toy walkthrough with example timestamps is sketched after the list):

1. A pessimistic lock request of `T1` succeeded with `WakeUpModeForceLock` enabled, then it returns to TiDB and TiDB continues its execution.
2. The lock is lost (e.g., due to pipelined locking or in-memory pessimistic lock).
3. `T2` writes the key and commits.
4. TiKV receives a replayed copy of the pessimistic lock request of `T1` that was already handled in step 1 (maybe because of retrying due to a network issue in step 1). Since it allows locking with conflict, even though there's a newer version later than the request's `for_update_ts`, the request can still acquire the lock. However, no one will check the result of this request.
5. `T1` commits. When it prewrites, it checks whether each key is pessimistic-locked as expected. It won't notice anything wrong, since the key does have a pessimistic lock of the same transaction. Therefore `T1` commits successfully. However, one of the keys is locked on a different version: the conflict between `T1` and `T2` is missed.
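A toy Go walkthrough of the five steps, assuming illustrative timestamps and assuming that a lock acquired with conflict records the bumped `for_update_ts` (a hypothetical model, not TiKV's actual data structures):

```go
package main

import "fmt"

// keyState is a toy model of one key's lock and latest committed version.
type keyState struct {
	lockOwner       string
	lockForUpdateTS uint64
	latestCommitTS  uint64
}

func main() {
	k := keyState{}

	// Step 1: T1's pessimistic lock request succeeds with for_update_ts = 10.
	k.lockOwner, k.lockForUpdateTS = "T1", 10

	// Step 2: the lock is lost (pipelined / in-memory pessimistic lock).
	k.lockOwner, k.lockForUpdateTS = "", 0

	// Step 3: T2 writes the key and commits at ts = 15.
	k.latestCommitTS = 15

	// Step 4: a replayed copy of T1's request from step 1 arrives. With
	// WakeUpModeForceLock it locks despite the newer version at ts = 15
	// (assumed here to bump the lock's for_update_ts past the conflict),
	// but nobody inspects this response.
	k.lockOwner, k.lockForUpdateTS = "T1", 15

	// Step 5: T1 prewrites. It only checks "is the key locked by T1?",
	// so the conflict with T2's commit at ts = 15 goes unnoticed.
	fmt.Printf("lock owner = %s, for_update_ts = %d, latest commit = %d\n",
		k.lockOwner, k.lockForUpdateTS, k.latestCommitTS)
	fmt.Println("T1 expected for_update_ts 10; lost-update risk:",
		k.lockForUpdateTS != 10)
}
```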
A possible fix would be: record the `for_update_ts` of each locked key in the membuffer in client-go, then carry them in prewrite requests. When TiKV handles a pessimistic prewrite request, it checks whether the `for_update_ts` of the lock in TiKV matches the one in the prewrite request. This requires changing the membuffer to store the `for_update_ts` per key, which consumes an additional 8 bytes of memory for each key. It also requires adding a new field to prewrite requests.
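Under the same assumptions, a minimal sketch of the check this fix would add on the prewrite path. It is written in Go to match the sketches above, although TiKV itself is implemented in Rust, and all names here are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// pessimisticLock is a simplified stand-in for the lock record TiKV keeps
// for a pessimistically locked key.
type pessimisticLock struct {
	txnStartTS  uint64
	forUpdateTS uint64 // the version the lock was actually acquired at
}

var errForUpdateTSMismatch = errors.New("pessimistic lock for_update_ts mismatch")

// checkForUpdateTSConstraint rejects the prewrite of a key if the surviving
// pessimistic lock was acquired at a different for_update_ts than the client
// observed. In the scenario above, the replayed request re-locks the key over
// T2's newer committed version, so the lock's for_update_ts no longer matches
// what T1 recorded in step 1, and the prewrite fails instead of losing the
// conflict.
func checkForUpdateTSConstraint(lock *pessimisticLock, expectedForUpdateTS uint64) error {
	if lock.forUpdateTS != expectedForUpdateTS {
		return errForUpdateTSMismatch
	}
	return nil
}

func main() {
	// The replayed request in step 4 re-locked the key at for_update_ts = 15,
	// but the original request in step 1 observed for_update_ts = 10.
	lock := &pessimisticLock{txnStartTS: 5, forUpdateTS: 15}
	fmt.Println(checkForUpdateTSConstraint(lock, 10)) // mismatch -> error
}
```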