ddl: fix ExistsTableRow and add tests for skip reorg checks #57778
Conversation
Hi @tangenta. Thanks for your PR. PRs from untrusted users cannot be marked as trusted with I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
pkg/ddl/reorg.go
Outdated
err = result.Next(ctx.ddlJobCtx, chk)

func existsTableRow(ctx *ReorgContext, store kv.Storage, tbl table.PhysicalTable) (bool, error) {
	found := false
	err := iterateSnapshotKeys(ctx, store, kv.PriorityLow, tbl.RecordPrefix(), math.MaxUint64, nil, nil,
A safer way is to fetch and use a ts from PD, like pdCli.GetTS(ctx); math.MaxUint64 cannot provide snapshot semantics.
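For illustration, here is a minimal sketch of that suggestion, assuming the github.com/tikv/pd/client and github.com/tikv/client-go/v2/oracle packages; the helper name readTSForCheck is hypothetical and not part of this PR:

```go
// Minimal sketch: allocate a real timestamp from PD and compose it, instead
// of reading at math.MaxUint64. The helper name readTSForCheck is hypothetical.
package sketch

import (
	"context"

	"github.com/tikv/client-go/v2/oracle"
	pd "github.com/tikv/pd/client"
)

// readTSForCheck fetches a PD-allocated TSO and packs it into the uint64
// form that can be passed where math.MaxUint64 was used before.
func readTSForCheck(ctx context.Context, pdCli pd.Client) (uint64, error) {
	physical, logical, err := pdCli.GetTS(ctx)
	if err != nil {
		return 0, err
	}
	return oracle.ComposeTS(physical, logical), nil
}
```

The composed ts is PD-allocated and monotonic, so the existence check would read a well-defined snapshot instead of "latest".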
I think we don't need snapshot semantics here. This function determines whether we can use the fast path for data reorganization:
- if it returns false but the real answer is true, we simply won't enter the fast path;
- if it returns true but the real answer is false, we enter the fast path, and the following insertions will be handled by the dual-write mechanism (txn mode) or the temp-index-merge mechanism (ingest mode). This is covered by the test https://github.com/pingcap/tidb/pull/57778/files#diff-2998ac87ef4adb20bfc6bf8ce58ace7015800e57f330bacad08cba4b90350300R908
Another consideration for safety is that, in the transaction layer and TiKV, the use of math.MaxUint64 is only expected for primary-key point-get autocommit reads. Other scenarios should not use math.MaxUint64 for reads; it's better to stick to PD-allocated timestamps instead.
> I think we don't need snapshot semantics here.

Did you mean that you don't need a consistent snapshot, so you are trying to use a weaker, looser way of reading? I think reading with MaxUint64 is much more special than you, as well as many other developers, might think; at least it has been since async commit was implemented. Maybe it's better to just consider whether the overhead of getting a new ts from PD is really unacceptable to you.
> reading with MaxUint64 seems to be much more special

Can you give more detail, or some doc link?
> the use of math.MaxUint64 is only expected for primary key point-get autocommit reads. Other scenarios should not use math.MaxUint64 for reads.

In my understanding, snapshot reading is not a pure "reading" at TiKV: it updates the max_ts, which has a side effect on async commit. #57769 https://cn.pingcap.com/blog/async-commit-principle/
> In my understanding, snapshot reading is not a pure "reading" at TiKV: it has a side effect on async commit. #57769

That's too bad; a read operation corrupts the entire cluster.
> can you give more detail? or some doc link?

There's no document dedicated to this as far as I know. There is a lot of special handling for reads with MaxUint64: for example, TiKV skips updating max_ts for them, and there is special lock-ignoring logic when checking read-write conflicts, etc. It has also caused difficulty for us in development and optimization work.

> snapshot reading is not a pure "reading" at TiKV.

The max_ts in async commit / 1PC is a way to let writing transactions avoid affecting other read operations. It tracks the latest timestamp allocated by PD that has been used on this TiKV node. It's quite doubtful to me whether this is a reason to say that the read is not "pure".

> that's too bad

The design is based on the fact that reads and writes are strongly ordered by PD's globally unique and monotonic TSO allocation, and a write cannot change a snapshot that has already been read; otherwise the snapshot would not be repeatable-readable and would no longer be a snapshot. You may think that this makes async commit transactions commit at an incorrect future version so that their results are not readable, and that things would all be better if async commit did not exist. But that's the choice async commit transactions make to avoid breaking your snapshot. If the transaction were committed in 2PC mode, which is not related to the max_ts mechanism, there would still be another form of corruption: your snapshot would change and would no longer be repeatable-readable. If you believe something is bad here, I think the bad part is that the strong ordering that's supposed to be provided by TSO is broken.
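To make the max_ts argument concrete, the following is a deliberately simplified model, not TiKV's actual code, of how a normal read at a given read_ts interacts with async-commit prewrites and why a MaxUint64 read has to be special-cased:

```go
// Simplified conceptual model of the max_ts bookkeeping discussed above.
// This is not TiKV's implementation; it only illustrates the ordering argument.
package sketch

import "math"

// node models a single TiKV node's max_ts bookkeeping.
type node struct {
	maxTS uint64 // highest PD-allocated ts observed by reads on this node
}

// read records a normal reader's ts so that later async-commit transactions
// must pick a commit_ts above it, keeping the reader's snapshot repeatable.
func (n *node) read(readTS uint64) {
	if readTS == math.MaxUint64 {
		// A "latest" read at MaxUint64 cannot be folded into max_ts: doing so
		// would push every future min_commit_ts to the maximum value. This is
		// one of the special cases mentioned in the discussion.
		return
	}
	if readTS > n.maxTS {
		n.maxTS = readTS
	}
}

// asyncCommitPrewrite chooses a min_commit_ts strictly greater than both the
// transaction's start_ts and every ts that has already been read on this node.
func (n *node) asyncCommitPrewrite(startTS uint64) uint64 {
	minCommitTS := startTS + 1
	if n.maxTS+1 > minCommitTS {
		minCommitTS = n.maxTS + 1
	}
	return minCommitTS
}
```

Under this model, a read at ts t forces any later async-commit transaction on the node to pick commit_ts > t, so the snapshot at t stays repeatable; a MaxUint64 read opts out of that bookkeeping, which is why it needs the extra special handling mentioned above.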
For "bad", I mean that a read-only operation has side effects and corrupts the entire cluster; a read-only operation shouldn't have such effects, as a common design practice AFAIK.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #57778 +/- ##
================================================
+ Coverage 72.8370% 74.8316% +1.9945%
================================================
Files 1677 1722 +45
Lines 464191 474115 +9924
================================================
+ Hits 338103 354788 +16685
+ Misses 105210 97131 -8079
- Partials 20878 22196 +1318
Flags with carried forward coverage won't be shown. Click here to find out more.
rest lgtm except existing comments
[LGTM Timeline notifier] Timeline:
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: crazycs520, D3Hunter, MyonKeminta. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest
@tangenta: Cannot trigger testing until a trusted user reviews the PR and leaves an
In response to this:
In response to a cherrypick label: new pull request created to branch
What problem does this PR solve?
Issue Number: close #57769
Problem Summary:
What changed and how does it work?
Use the transaction key-value interface instead to check if a table is empty.
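For reference, here is a minimal sketch of what the emptiness check can look like when it goes through the transactional key-value interface, assuming TiDB's kv.Storage/kv.Transaction and table.PhysicalTable interfaces; the function name existsTableRowSketch is illustrative and the PR's actual diff may differ:

```go
// Sketch only: check whether a physical table has at least one row by
// scanning its record prefix inside a transaction, instead of building a
// snapshot at math.MaxUint64. Assumes TiDB's kv and table packages.
package sketch

import (
	"github.com/pingcap/tidb/pkg/kv"
	"github.com/pingcap/tidb/pkg/table"
)

func existsTableRowSketch(store kv.Storage, tbl table.PhysicalTable) (bool, error) {
	// Begin a transaction so the read uses a properly allocated start ts.
	txn, err := store.Begin()
	if err != nil {
		return false, err
	}
	// The check is read-only, so just roll back when done.
	defer func() { _ = txn.Rollback() }()

	prefix := tbl.RecordPrefix()
	it, err := txn.Iter(prefix, prefix.PrefixNext())
	if err != nil {
		return false, err
	}
	defer it.Close()

	// Any key under the record prefix means the table is non-empty.
	return it.Valid() && it.Key().HasPrefix(prefix), nil
}
```

Compared with an iterateSnapshotKeys call at math.MaxUint64, the transaction's start ts here is allocated by PD, so the read participates in the normal TSO ordering discussed in the review thread.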
Check List
Tests
Side effects
Documentation
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.