kv/client: fix region loss in single region handler #3281
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
/run-all-tests
/run-integration-tests
/run-leak-tests
Codecov Report
@@ Coverage Diff @@
## master #3281 +/- ##
================================================
+ Coverage 56.7226% 56.7345% +0.0119%
================================================
Files 214 214
Lines 22915 22919 +4
================================================
+ Hits 12998 13003 +5
+ Misses 8604 8603 -1
Partials 1313 1313
@@ -273,7 +273,7 @@ func (w *regionWorker) handleSingleRegionError(ctx context.Context, err error, s
 	}
 
 	revokeToken := !state.initialized
-	err2 := w.session.onRegionFail(ctx, regionErrorInfo{
+	err2 := w.session.onRegionFail(w.parentCtx, regionErrorInfo{
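The review thread below discusses adding an explanatory comment at this call site. As an illustration, here is a minimal, self-contained Go sketch of the changed call with such a comment; the types and the onRegionFail signature are simplified stand-ins, not the actual code in cdc/kv/region_worker.go.

```go
package main

import "context"

// Simplified stand-ins for the real types; only the names that appear in the
// diff above (parentCtx, session, onRegionFail, regionErrorInfo, revokeToken)
// come from the PR, everything else is illustrative.
type regionErrorInfo struct{ revokeToken bool }

type eventFeedSession struct{}

func (s *eventFeedSession) onRegionFail(ctx context.Context, info regionErrorInfo) error {
	// The real implementation re-schedules the failed region for reconnection.
	return nil
}

type regionWorker struct {
	session   *eventFeedSession
	parentCtx context.Context
}

func (w *regionWorker) handleSingleRegionError(ctx context.Context, initialized bool) error {
	revokeToken := !initialized
	// Use the root context of the kv client (w.parentCtx) rather than ctx:
	// ctx may already be canceled while the worker is exiting, and a failed
	// region must always be recycled, otherwise it is silently lost.
	return w.session.onRegionFail(w.parentCtx, regionErrorInfo{revokeToken: revokeToken})
}

func main() {
	w := &regionWorker{session: &eventFeedSession{}, parentCtx: context.Background()}
	_ = w.handleSingleRegionError(context.Background(), false)
}
```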
Do we need to add a comment here? I see that another place where parentCtx is used has a comment explaining why we use parentCtx instead of ctx.
See: https://github.com/pingcap/ticdc/blob/37bac66f0673103892c71e585ab3d9d4658c1f74/cdc/kv/region_worker.go#L792
It looks like the changes here serve the same purpose; sorry, I'm not very familiar with this piece of code.
yes, the same reason
/merge
This pull request has been accepted and is ready to merge. Commit hash: dadd865
@amyangfei: Your PR was out of date, so I have automatically updated it for you. At the same time I will also trigger all tests for you: /run-all-tests. If a CI test fails, just re-trigger the failed test and the bot will merge the PR for you after CI passes. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
In response to a cherrypick label: new pull request created: #3289.
In response to a cherrypick label: new pull request created: #3290.
In response to a cherrypick label: new pull request created: #3291.
In response to a cherrypick label: new pull request created: #3292.
In response to a cherrypick label: new pull request created: #3293.
What problem does this PR solve?
Close #3288
Fix a region loss case. It can be reproduced with the following steps:
1. Restart all TiKV nodes, for example with this command:
   tiup cluster restart <cluster-name> -R tikv
2. Compare the regions initialized in TiCDC with all regions returned from TiDB by the query
   select region_id from information_schema.tikv_region_status where db_name = 'xx' and table_name = 'yy'
   We can observe that some regions are lost.
3. Search for the lost region IDs in the TiCDC log; the regions disconnected and never reconnected.
The root cause is that the kv client must recycle all failed regions, so we should use the root context of the kv client to call onRegionFail.
This bug tends to happen when multiple TiKV nodes crash or are force-restarted; based on existing tests, a single TiKV crash or restart does not trigger it. The more regions there are, the higher the probability. The sketch below illustrates the failure mode.
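To illustrate the failure mode described above, here is a small, self-contained Go sketch (not TiCDC code; the function and channel are hypothetical stand-ins for the real retry path): rescheduling a failed region under a context that is already canceled silently drops the region, while rescheduling it under the longer-lived root context keeps it alive.

```go
package main

import (
	"context"
	"fmt"
)

// reschedule mimics an onRegionFail-style handler: it hands a failed region
// back for retry, but gives up if the supplied context is already done.
func reschedule(ctx context.Context, regionID uint64, retryCh chan<- uint64) error {
	// If the context is already canceled, the failure is dropped here and
	// the region never reconnects; this is the bug described above.
	if ctx.Err() != nil {
		return ctx.Err()
	}
	select {
	case retryCh <- regionID:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	rootCtx := context.Background() // lives as long as the kv client
	workerCtx, cancel := context.WithCancel(rootCtx)
	cancel() // simulate a worker whose local context is already canceled

	retryCh := make(chan uint64, 1)

	// With the worker's canceled context the region is lost.
	fmt.Println("worker ctx:", reschedule(workerCtx, 42, retryCh))

	// With the root context the region is queued for reconnection.
	fmt.Println("root ctx:", reschedule(rootCtx, 42, retryCh), "queued:", len(retryCh))
}
```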
What is changed and how it works?
Use the parent context of the region worker to call onRegionFail when processing a region failure.
Check List
Tests
Force restart of all TiKV nodes
Release note