feat(scaler): scale out by group and wait until pods are scheduled #4907
base: master
Conversation
[REVIEW NOTIFICATION] This pull request has not been approved. To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
Codecov Report
Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #4907      +/-   ##
==========================================
- Coverage   59.44%   59.21%   -0.24%
==========================================
  Files         227      231       +4
  Lines       25835    28974    +3139
==========================================
+ Hits        15358    17157    +1799
- Misses       9019    10282    +1263
- Partials     1458     1535      +77
/test pull-e2e-kind
if updateReplicasAndDeleteSlots {
	setReplicasAndDeleteSlotsByFinished(scalingOutFlag, newSet, oldSet, ordinals, finishedOrdinals)
} else {
	resetReplicas(newSet, oldSet)
}
I'm not sure whether resetReplicas(newSet, oldSet) is needed for the return controller.RequeueErrorf("tikv.ScaleOut, cluster %s/%s ready to scale out, wait for next round", tc.GetNamespace(), tc.GetName()) at L122.
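For context, here is a minimal, self-contained sketch of the requeue pattern this comment refers to; the error type, helper names, and simplified signature are assumptions for illustration, not the PR's actual code:

```go
package main

import "fmt"

// requeueError stands in for a requeue-style error such as tidb-operator's
// controller.RequeueErrorf: returning it does not fail the sync permanently,
// it only asks the controller to run the same reconcile in the next round.
type requeueError struct{ msg string }

func (e *requeueError) Error() string { return e.msg }

// scaleOut is a simplified, hypothetical stand-in for tikv.ScaleOut. When the
// cluster is ready to scale out but the new pods are not scheduled yet, it
// returns a requeue error and waits for the next round. The review question
// above is whether newSet's replicas should also be reset (resetReplicas)
// before this early return, given that the whole update is retried anyway.
func scaleOut(podsScheduled, updateReplicasAndDeleteSlots bool) error {
	if !podsScheduled {
		return &requeueError{"tikv.ScaleOut: ready to scale out, wait for next round"}
	}
	if updateReplicasAndDeleteSlots {
		// setReplicasAndDeleteSlotsByFinished(...) in the real code.
	} else {
		// resetReplicas(newSet, oldSet) in the real code.
	}
	return nil
}

func main() {
	if err := scaleOut(false, true); err != nil {
		fmt.Println("requeue:", err)
	}
}
```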
Is this feature controlled by …?
@liubog2008: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What problem does this PR solve?
If there are 3 zones and 9 TiKV replicas with pod topology spread constraints (PodSpreadConstraints), the pods may not be scheduled as expected: all TiKV pods are created at the same time, so they are not scheduled in ordinal order.
As a result, the pods can end up with an unexpected placement across the zones, and TiKV will then fail to scale down by more than 1 replica.
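The PR title describes scaling out by group and waiting until pods are scheduled. The sketch below is not the PR's implementation; it only illustrates that idea under assumed names (scaleOutStep, allScheduled, nextReplicas, groupSize): grow the replica count one group at a time, and requeue while the current group is still unscheduled.

```go
// A minimal sketch, assuming hypothetical helpers; tidb-operator's real
// scaler logic differs.
package scaler

import corev1 "k8s.io/api/core/v1"

// scaleOutStep returns the next replica count, growing by at most groupSize
// per sync round so the scheduler can place one group of pods before the
// next group is created.
func scaleOutStep(current, desired, groupSize int32) int32 {
	next := current + groupSize
	if next > desired {
		next = desired
	}
	return next
}

// allScheduled reports whether every pod created so far has been bound to a
// node; until this holds, the controller should requeue instead of creating
// the next group.
func allScheduled(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Spec.NodeName == "" {
			return false
		}
	}
	return true
}

// nextReplicas decides the replica count for this sync round: keep the
// current count (and requeue) while pods are still pending, otherwise grow
// by one more group.
func nextReplicas(current, desired, groupSize int32, pods []corev1.Pod) (int32, bool) {
	if !allScheduled(pods) {
		return current, true // true = requeue and wait for the scheduler
	}
	return scaleOutStep(current, desired, groupSize), false
}
```

Requeueing rather than blocking keeps the reconcile loop non-blocking, which matches the RequeueErrorf pattern discussed in the review comments above.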
What is changed and how does it work?
Code changes
Tests
Side effects
Related changes
Release Notes
Please refer to Release Notes Language Style Guide before writing the release note.