RemovePodsViolatingTopologySpreadConstraint includeSoftConstraints: true not evicting #564
Comments
Hi,
Thanks. I will wait for the next release.
I was able to compile it from master, but I am still getting the same behaviour.
Descheduler version {Major:0 Minor:20+ GitVersion:v20210509-v0.20.0-110-g2a3529c54 GitBranch:master GitSha1:2a3529c5431c528c1e727caffb2485145b259f4d BuildDate:2021-05-09T19:09:14-0300 GoVersion:go1.16.4 Compiler:gc Platform:linux/amd64}
Thanks for taking the time to test that on master @jsalatiel |
@jsalatiel, sorry, I had a mistake in the test which was causing it to fail. I updated the test to match your scenario and it is now passing. The test does the following:
When I add logging, I see that the pod evicted is from
If you could also provide the Descheduler logs from when you ran it in master (with
Hi @damemi, after enabling debug logging I noticed that the problem is that all my pods mount /etc/localtime from the host:

I0510 19:07:59.089224 1 evictions.go:260] "Pod lacks an eviction annotation and fails the following checks" pod="default/whoami-65fdc87f6d-c5n2t" checks="pod has local storage and descheduler is not configured with evictLocalStoragePods"

Thanks for the help and sorry for taking your time.
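For reference, allowing eviction of pods that use local storage (such as hostPath mounts like /etc/localtime) is controlled at the policy level; a minimal sketch, assuming the v1alpha1 DeschedulerPolicy format:

```yaml
# Sketch only: top-level policy flag that permits evicting pods with
# local storage (inferred from the check named in the log message above).
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
evictLocalStoragePods: true
```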
No problem! Glad we solved it :) |
What version of descheduler are you using?
descheduler version
Does this issue reproduce with the latest release?
Yes.
Please provide a copy of your descheduler policy config file
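A minimal policy enabling this strategy with soft constraints included would look roughly like the sketch below, assuming the v1alpha1 DeschedulerPolicy format (the exact file used in this report may differ):

```yaml
# Minimal sketch of a policy with soft constraints included.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
    params:
      includeSoftConstraints: true
```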
What k8s version are you using (kubectl version)?
What did you do?
I have created a deployment with the following topologySpreadConstraints.
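A sketch of constraints consistent with the behaviour described below; the label selector and topology keys are assumptions:

```yaml
# Sketch of topologySpreadConstraints matching the description below;
# the labelSelector (app: whoami) and topology keys are assumptions.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway    # soft constraint across zones
    labelSelector:
      matchLabels:
        app: whoami
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule     # hard constraint per node
    labelSelector:
      matchLabels:
        app: whoami
```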
I have 6 nodes in 2 zones, 3 nodes per zone. The deployment has 4 replicas.
At the start, the topologySpreadConstraints split the pods evenly across the zones: 2 pods per zone, on 4 different nodes.
If I drain all the nodes in zone 1, then, since the zone policy is ScheduleAnyway, one of the evicted pods is rescheduled in zone 2 and the other stays Pending (due to the DoNotSchedule policy for the same node).
After uncordoning the nodes in zone 1, the pending pod is immediately scheduled there, as expected.
So now I have:
3 pods running in zone 2
1 pod running in zone 1
thus the actual skew is 3 - 1 = 2, which violates maxSkew: 1.
What did you expect to see?
If I run the descheduler job, I expect one of the pods from zone 2 to be evicted and rescheduled to zone 1.
What did you see instead?
Nothing happens. The descheduler logs do not show anything related to the ScheduleAnyway constraint. Isn't that what includeSoftConstraints should do?