Descheduler | Log Data Evicted Pod Totals - Specific Strategy & Total of all Strategies #501
Comments
/cc @ingvagabund
Yes, that's correct. We might extend the pod evictor to also collect "metrics" for individual strategies and runs and report something like:
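For illustration only, here is a rough sketch of that idea. This is not the actual descheduler PodEvictor API; the types and method names below are hypothetical.

```go
// Hypothetical sketch: a pod evictor that records evictions per strategy,
// so each strategy can report its own count in addition to the running total.
package main

import "fmt"

type PodEvictor struct {
	totalEvicted      int
	evictedByStrategy map[string]int
}

func NewPodEvictor() *PodEvictor {
	return &PodEvictor{evictedByStrategy: map[string]int{}}
}

// EvictPod would wrap the real eviction call; here it only updates counters.
func (e *PodEvictor) EvictPod(strategy, pod string) {
	e.totalEvicted++
	e.evictedByStrategy[strategy]++
}

// Report prints one line per strategy plus the overall total.
func (e *PodEvictor) Report() {
	for s, n := range e.evictedByStrategy {
		fmt.Printf("strategy %q evicted %d pods\n", s, n)
	}
	fmt.Printf("total evicted across all strategies: %d\n", e.totalEvicted)
}

func main() {
	e := NewPodEvictor()
	e.EvictPod("LowNodeUtilization", "pod-a")
	e.EvictPod("LowNodeUtilization", "pod-b")
	e.Report()
}
```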
For now, I opened #502 to move this log message out of the individual strategies and into the main loop.
I think the main issue here was addressed, so this was closed, and I opened #503 to track possibly improving the podEvictor.
Is your feature request related to a problem? Please describe.
Evicted pods - While testing the descheduler in an EKS cluster running stateful pods with a node affinity of "preferredDuringSchedulingIgnoredDuringExecution",
we saw the following evictions early in the log data for LowNodeUtilization, which is fine, so 2 pods were evicted in this case:
But later in the log data an eviction count is also reported for the RemovePodsViolatingNodeAffinity strategy, again stating that there were 2 evictions:
However, from looking at the log above there does NOT appear to be any eviction performed by RemovePodsViolatingNodeAffinity (node_affinity): the pods are reported as "fits on node",
and we are NOT using the affinity "requiredDuringSchedulingIgnoredDuringExecution".
So why is the last line
node_affinity.go:107] Evicted 2 pods
logged? Is it adding the number already evicted by the previous strategy, LowNodeUtilization, to give a final total?
Describe the solution you'd like
If we could have something along the lines of an eviction total for each specific strategy, as well as a total across all strategies.
As mentioned by @damemi on slack:
"Node affinity does only check requiredDuringSchedulingIgnoredDuringExecution, but I believe all strategies share a common podEvictor, which may be logging the total number of pods evicted across all strategies."
Describe alternatives you've considered
N/A
What version of descheduler are you using?
descheduler version: 0.19.0
Additional context
As discussed on slack - https://kubernetes.slack.com/archives/C09TP78DV/p1613593842089500