What happened:
I'm running kubernetes 1.17 and use an autoscaling group that has a mix of OnDemand and spot instances that both use the amazon-eks-node-1.14-v20201007 AMI. Some of the spot instances have a SchedulingDisabled status, which apparently indicates a node has been cordoned off, but I am certain that nobody has done this.
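For what it's worth, SchedulingDisabled in kubectl get nodes just reflects spec.unschedulable being set to true on the node object, so it can be verified (and cleared manually) against one of the affected nodes, for example:
kubectl get node ip-10-2-34-223.ap-southeast-2.compute.internal -o jsonpath='{.spec.unschedulable}'
kubectl uncordon ip-10-2-34-223.ap-southeast-2.compute.internal
The first command prints "true" for a cordoned node; the second clears the flag by hand.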
What you expected to happen:
I expect the nodes to have a Ready status.
How to reproduce it (as minimally and precisely as possible):
On a kubernetes 1.18 EKS cluster, launch a worker node using the amazon-eks-node-1.14-v20201007 AMI, with the following user-data:
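The exact user-data isn't reproduced here. As a rough sketch, a standard EKS worker user-data simply calls the bootstrap script shipped in the AMI, something like the following, where the cluster name and node label are placeholders rather than my real values:
#!/bin/bash
set -o xtrace
# placeholder cluster name and kubelet label, not the actual values from this setup
/etc/eks/bootstrap.sh my-cluster --kubelet-extra-args '--node-labels=lifecycle=spot'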
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeNotSchedulable 58m kubelet Node ip-10-2-34-223.ap-southeast-2.compute.internal status is now: NodeNotSchedulable
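For reference, this table is the Events section from describing the affected node; NodeNotSchedulable is the event kubelet records when the node's unschedulable flag is set:
kubectl describe node ip-10-2-34-223.ap-southeast-2.compute.internal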
Anything else we need to know?:
I use cluster-autoscaler, but there's nothing about it in the logs:
...
I1112 07:43:48.203376 1 scale_down.go:421] Node ip-10-2-34-223.ap-southeast-2.compute.internal - cpu utilization 0.924870
I1112 07:43:48.203389 1 scale_down.go:424] Node ip-10-2-34-223.ap-southeast-2.compute.internal is not suitable for removal - cpu utilization too big (0.924870)
...
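One related check, since cluster-autoscaler taints nodes it plans to remove with ToBeDeletedByClusterAutoscaler before draining them: the node's taints can be inspected to see whether that marker is present, e.g.:
kubectl get node ip-10-2-34-223.ap-southeast-2.compute.internal -o jsonpath='{.spec.taints}'
A ToBeDeletedByClusterAutoscaler entry in that output would point at cluster-autoscaler rather than a manual cordon.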
Environment:
EKS platform version (aws eks describe-cluster --name <name> --query cluster.platformVersion): eks.2
Kubernetes version (aws eks describe-cluster --name <name> --query cluster.version): 1.17
Kernel (uname -a): Linux ip-10-2-34-223.ap-southeast-2.compute.internal 4.14.198-152.320.amzn2.x86_64 #1 SMP Wed Sep 23 23:57:28 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Release information (cat /etc/eks/release on a node):

Hi @rtripat. Thanks for your reply. I've isolated this problem to cluster-autoscaler. So I will close this issue and open it up in the cluster-autoscaler repository. Cheers.