ddl: implement backoff pool for TiFlash Cluster Management #32317
Conversation
[REVIEW NOTIFICATION] This pull request has been approved. Reviewers can indicate their review by submitting an approval review.
Code Coverage Details: https://codecov.io/github/pingcap/tidb/commit/cd623275d4a6360a63bbdad2e7d36270cbcd2d99
Signed-off-by: CalvinNeo <calvinneo1995@gmail.com>
/run_unit-test
Signed-off-by: CalvinNeo <calvinneo1995@gmail.com>
}

// NeedGrow returns if we need to grow
func (e *PollTiFlashBackoffElement) NeedGrow() bool {
Could we remove this function? It is only used in one place, and it is only one line.
I need to export this for testing.
Got it.
Signed-off-by: CalvinNeo <calvinneo1995@gmail.com>
ddl/ddl_tiflash_api.go
Outdated
@@ -258,6 +400,11 @@ func (d *ddl) pollTiFlashReplicaStatus(ctx sessionctx.Context, pollTiFlashContex
// We only check unavailable tables here, so doesn't include blocked add partition case.
if !available {
    allReplicaReady = false
    grown, inqueue, _ := pollTiFlashContext.Backoff.Tick(tb.ID)
    if inqueue && !grown {
Do we need to sleep here, or outside the for statement?
The sleeps are in PollTiFlashRoutine; this is a plug-in backoffer.
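To illustrate the split described here, a minimal sketch of one polling round, assuming a hypothetical `Tick` signature modeled on the diff (`grown, inQueue, err`); the sleep lives in the outer routine, while the backoffer only decides which tables get checked:

```go
package main

import (
	"fmt"
	"time"
)

// tickFunc stands in for the backoff pool's Tick method
// (hypothetical signature: grown, inQueue, err).
type tickFunc func(id int64) (bool, bool, error)

// pollOnce runs one polling round. It never sleeps itself: the
// backoffer only decides whether each table is checked this round.
func pollOnce(tableIDs []int64, tick tickFunc, check func(int64)) {
	for _, id := range tableIDs {
		grown, inQueue, _ := tick(id)
		if inQueue && !grown {
			continue // still backing off; skip this table this round
		}
		check(id)
	}
}

func main() {
	// Toy backoffer: table 1 is in the pool and backing off,
	// table 2 is not in the pool at all.
	tick := func(id int64) (bool, bool, error) {
		return false, id == 1, nil
	}
	var checked []int64
	for round := 0; round < 2; round++ {
		pollOnce([]int64{1, 2}, tick, func(id int64) { checked = append(checked, id) })
		time.Sleep(10 * time.Millisecond) // the sleep lives out here, not in Tick
	}
	fmt.Println(checked) // table 2 is checked every round; table 1 is skipped
}
```

This keeps the backoffer side-effect free with respect to timing, so the polling interval can be tuned (or mocked in tests) without touching the backoff logic.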
LGTM
/merge
This pull request has been accepted and is ready to merge. Commit hash: 9791471
/run-unit-test
What problem does this PR solve?
Issue Number: ref #32254
Problem Summary:
This PR is part of batch-setting TiFlash replicas for one database. It sorts large tables that take a long time to become available into a backoff pool, to avoid checking them on every tick.
What is changed and how it works?
When a table is to be checked for availability, we first look it up in the backoff pool; if it is still backing off, it is skipped in this round.
When a table is found unavailable for the first time, it is added to the backoff pool.
When a table becomes available, it is removed from the backoff pool.
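The three rules above can be sketched as a small backoff pool. This is a minimal illustration, not the actual TiDB implementation: the field layout, capacity handling, and doubling growth policy are assumptions, and only the `Tick(id)` shape is modeled on the diff:

```go
package main

import "fmt"

// backoffElement tracks how many polling rounds a table must wait
// before it is checked again; the wait grows exponentially up to a cap.
type backoffElement struct {
	threshold int // rounds to wait before the next check
	counter   int // rounds elapsed since the last check
}

// backoffPool maps table IDs to their backoff state.
type backoffPool struct {
	elements map[int64]*backoffElement
	capacity int // maximum number of tables in the pool
	maxTick  int // cap on the per-table threshold
}

func newBackoffPool(capacity, maxTick int) *backoffPool {
	return &backoffPool{elements: map[int64]*backoffElement{}, capacity: capacity, maxTick: maxTick}
}

// Put adds a table when it is first found unavailable.
func (p *backoffPool) Put(id int64) bool {
	if _, ok := p.elements[id]; ok {
		return true
	}
	if len(p.elements) >= p.capacity {
		return false // pool is full; the table keeps being checked every round
	}
	p.elements[id] = &backoffElement{threshold: 1}
	return true
}

// Tick advances a table's counter. It returns (grown, inQueue):
// grown is true when the threshold is reached and the table should be
// checked this round; inQueue reports whether the table is in the pool.
func (p *backoffPool) Tick(id int64) (bool, bool) {
	e, ok := p.elements[id]
	if !ok {
		return false, false
	}
	e.counter++
	if e.counter < e.threshold {
		return false, true // still backing off; skip this round
	}
	// Check now, then double the wait before the next check.
	e.counter = 0
	if e.threshold*2 <= p.maxTick {
		e.threshold *= 2
	}
	return true, true
}

// Remove drops a table once its replica becomes available.
func (p *backoffPool) Remove(id int64) {
	delete(p.elements, id)
}

func main() {
	pool := newBackoffPool(8, 16)
	pool.Put(42)
	for round := 1; round <= 4; round++ {
		grown, inQueue := pool.Tick(42)
		fmt.Printf("round %d: check=%v inQueue=%v\n", round, grown, inQueue)
	}
	pool.Remove(42)
}
```

With this policy a freshly added table is checked on its next tick, then every 2, 4, 8, ... rounds until the cap, so a few slow tables no longer dominate each polling cycle.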
Check List
Tests
Side effects
Documentation
Release note