Expected Behavior
When using a launch template with only the minimum values set (AMI, subnets, and an instance profile), an instance should launch.
Actual Behavior
Karpenter attempts to launch instances, which never progress to running. In the EC2 console these instances go from pending > shutting-down > terminated.
The console logs this error:
2022-03-16T16:31:41.054Z ERROR controller.provisioning Could not launch node, launching instances, with fleet error(s), InvalidParameterValue: 'karpenter.sh/provisioner-name' is not a valid tag key. Tag keys must match pattern ([0-9a-zA-Z\\-_+=,.@:]{1,255}), and must not be a reserved name ('.', '..', '_index') {"commit": "82ea63b", "provisioner": "default"}
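The tag key is rejected because the restricted pattern quoted in the error does not include the `/` character, which appears in `karpenter.sh/provisioner-name`. A minimal sketch of that check, with the pattern copied from the error message (the character class is reordered so `-` needs no escaping):

```shell
# Validate a tag key against the restricted pattern from the fleet error.
# Note that '/' is absent from the allowed character class, which is why
# 'karpenter.sh/provisioner-name' is rejected.
check_tag_key() {
  case "$1" in
    .|..|_index) echo invalid; return ;;  # reserved names from the error
  esac
  if printf '%s' "$1" | grep -Eq '^[0-9a-zA-Z_+=,.@:-]{1,255}$'; then
    echo valid
  else
    echo invalid
  fi
}

check_tag_key 'karpenter.sh/provisioner-name'  # prints "invalid" ('/' not allowed)
check_tag_key 'karpenter.sh-provisioner-name'  # prints "valid"
```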
Steps to Reproduce the Problem
It appears all I need to do is use a launch template and this error appears.
We're using Helm to install Karpenter in its own namespace. We set debug logging, pass in a service account role ARN, and specify the clusterName and clusterEndpoint. Everything else is left at defaults. We are running Karpenter on Fargate.
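For reference, the install described above looks roughly like the following. This is a sketch: the chart repo URL, release and namespace names, role ARN, and cluster values are placeholders/assumptions, not taken from this report.

```shell
# Sketch of a Helm install of Karpenter in its own namespace with debug
# logging, an IRSA role ARN, and cluster name/endpoint set. All values
# below are placeholders; adjust them to your environment.
helm repo add karpenter https://charts.karpenter.sh
helm upgrade --install karpenter karpenter/karpenter \
  --namespace karpenter --create-namespace \
  --version v0.7.1 \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::123456789012:role/KarpenterController" \
  --set clusterName=my-cluster \
  --set clusterEndpoint=https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com \
  --set logLevel=debug
```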
2022-03-16T17:22:06.653Z INFO controller.provisioning Batched 2 pods in 1.000503605s {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.661Z DEBUG controller.provisioning Excluding instance type t4g.micro because there are not enough resources for daemons {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.662Z DEBUG controller.provisioning Excluding instance type t3a.nano because there are not enough resources for kubelet and system overhead {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.664Z DEBUG controller.provisioning Excluding instance type t4g.nano because there are not enough resources for kubelet and system overhead {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.664Z DEBUG controller.provisioning Excluding instance type t3.micro because there are not enough resources for daemons {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.665Z DEBUG controller.provisioning Excluding instance type t3.nano because there are not enough resources for kubelet and system overhead {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.668Z DEBUG controller.provisioning Excluding instance type t3a.micro because there are not enough resources for daemons {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.675Z INFO controller.provisioning Computed packing of 1 node(s) for 2 pod(s) with instance type option(s) [c3.xlarge c4.xlarge g5g.xlarge c5d.xlarge c6gn.xlarge c6a.xlarge c5ad.xlarge c5.xlarge c6g.xlarge c5a.xlarge c6gd.xlarge a1.xlarge c6i.xlarge c5n.xlarge m1.xlarge m3.xlarge g5.xlarge m5ad.xlarge m5d.xlarge] {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:40.991Z DEBUG controller.provisioning Discovered 373 EC2 instance types {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:41.051Z DEBUG controller.provisioning Discovered subnets: [subnet-x (us-east-1d) subnet-xx (us-east-1b) subnet-xxx (us-east-1a) subnet-x (us-east-1e) subnet-x (us-east-1f) subnet-x (us-east-1c)] {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:41.093Z DEBUG controller.provisioning Discovered 373 EC2 instance types {"commit": "4b14787", "provisioner": "on-demand"}
2022-03-16T17:22:41.215Z DEBUG controller.provisioning Discovered 373 EC2 instance types {"commit": "4b14787", "provisioner": "high-network"}
2022-03-16T17:22:41.329Z DEBUG controller.provisioning Discovered EC2 instance types zonal offerings {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:41.372Z DEBUG controller.provisioning Discovered EC2 instance types zonal offerings {"commit": "4b14787", "provisioner": "on-demand"}
2022-03-16T17:22:41.441Z DEBUG controller.provisioning Discovered EC2 instance types zonal offerings {"commit": "4b14787", "provisioner": "high-network"}
2022-03-16T17:22:46.354Z ERROR controller.provisioning Could not launch node, launching instances, with fleet error(s), InvalidParameterValue: 'karpenter.sh/provisioner-name' is not a valid tag key. Tag keys must match pattern ([0-9a-zA-Z\\-_+=,.@:]{1,255}), and must not be a reserved name ('.', '..', '_index'); UnfulfillableCapacity: Unable to fulfill capacity due to your request configuration. Please adjust your request and try again. {"commit": "4b14787", "provisioner": "default"}
@jhughes-mc can you please provide the launch template? I'm not able to reproduce the problem without it.
Also, you may want to look into a similar issue where adding `instance_metadata_tags = "disabled"` (corresponding to `InstanceMetadataTags`) fixed the problem for some people.
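As a sketch, that suggestion maps onto the launch template's metadata options. A new template version with instance metadata tags disabled could be created like this (the template name is a placeholder):

```shell
# Create a new launch template version that disables instance metadata tags
# (InstanceMetadataTags in the EC2 API). With them disabled, EC2 no longer
# enforces the restricted tag-key character set at launch time.
aws ec2 create-launch-template-version \
  --launch-template-name my-karpenter-template \
  --source-version '$Latest' \
  --launch-template-data '{"MetadataOptions":{"InstanceMetadataTags":"disabled"}}'

# Optionally point the template's default version at the new one:
# aws ec2 modify-launch-template --launch-template-name my-karpenter-template \
#   --default-version <new-version-number>
```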
Version
Karpenter: v0.7.1 and v0.6.4
Kubernetes: v1.21.5
Here is the provisioner configuration:
Resource Specs and Logs