[BUG] - AWS EKS user and worker node groups fail to scale to 0 nodes #1811
Comments
I think the cluster autoscaler won't scale a node group down to zero if a pod from a Deployment is scheduled on one of its nodes. AWS EKS installs a few add-ons by default, and as far as I can tell you can't disable them. You could patch their Deployments after they are deployed, which may be a solution. As a workaround, adding the following to the problematic Deployment manifests can resolve the problem after deployment.
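(The exact snippet was not preserved above. A plausible reconstruction, assuming it was Cluster Autoscaler's documented `safe-to-evict` pod annotation, is sketched here; the Deployment name, labels, and image are placeholders, not the actual add-on.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-addon          # placeholder; substitute the problematic add-on
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-addon
  template:
    metadata:
      labels:
        app: example-addon
      annotations:
        # Tells Cluster Autoscaler these pods may be evicted, so the node
        # they land on can still be drained and scaled down.
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    spec:
      containers:
        - name: example
          image: registry.k8s.io/pause:3.9
```

Note the annotation goes on the pod template's metadata, not on the Deployment itself, since Cluster Autoscaler inspects the pods running on a node when deciding whether it can be removed.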
We could patch the problematic add-on deployments in the nebari/template/stages/03-kubernetes-initialize module.
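(A minimal sketch of such a patch as a standalone file, assuming the same `safe-to-evict` annotation as above; `coredns` is given only as an example target, since it is the common EKS default add-on shipped as a Deployment.)

```yaml
# safe-to-evict-patch.yaml — strategic merge patch that adds the annotation
# to an add-on Deployment's pod template. Apply with, e.g.:
#   kubectl -n kube-system patch deployment coredns --patch-file safe-to-evict-patch.yaml
spec:
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
```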
@Adam-D-Lewis is this a priority for you? Should we assign someone to work on this?
Not a priority because of the workaround.
Hi @Adam-D-Lewis, I assume #2353 fixed this issue, right? Can we mark this as completed?
Description
AWS EKS user and worker node groups fail to scale to 0 nodes