TopologySpreadConstraint does not balance domains on first run #601
Labels
kind/bug
Categorizes issue or PR as related to a bug.
@seanmalloy: Closing this issue.
What version of descheduler are you using?
descheduler version: v0.21.0
Does this issue reproduce with the latest release?
Yes
Which descheduler CLI options are you using?
--logging-format text --policy-config-file /policy-dir/policy.yaml --v 4
Please provide a copy of your descheduler policy config file
What k8s version are you using (`kubectl version`)?
What did you do?
I have a Deployment with 15 replicas and the following topologySpreadConstraint.
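The constraint itself did not survive this page capture. A typical zone-spread constraint of the shape described (all field values here are hypothetical, not the reporter's actual config) would look like:

```yaml
# Hypothetical reconstruction -- the original constraint was not captured.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone   # spread across AZs
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app                            # hypothetical pod label
```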
I simulate an AZ failure and end up with [8, 7, 0] Pods across the 3 AZs. After the AZ recovers, I would expect the descheduler to balance the domains to [5, 5, 5], but it takes 2 runs to reach the ideal state.
You can simulate this via a TestCase in the unit test for TopologySpreadConstraint:
What did you expect to see?
After AZ recovery, I would expect 5 evictions to get to [5, 5, 5].
What did you see instead?
After AZ recovery, I saw 3 evictions.