Invalid tag key 'karpenter.sh/provisioner-name' #1527

Closed

jhughes-mc opened this issue Mar 16, 2022 · 2 comments

jhughes-mc commented Mar 16, 2022

Version

Karpenter: v0.7.1 and v0.6.4
Kubernetes: v1.21.5

Expected Behavior

I'm attempting to use a launch template with only minimal values set (AMI, subnets, and instance profile) and expecting an instance to launch.

Actual Behavior

Karpenter attempts to launch instances, which never progress to running. The EC2 console shows these instances going from pending > shutting-down > terminated. The controller logs this error:

2022-03-16T16:31:41.054Z ERROR controller.provisioning Could not launch node, launching instances, with fleet error(s), InvalidParameterValue: 'karpenter.sh/provisioner-name' is not a valid tag key. Tag keys must match pattern ([0-9a-zA-Z\\-_+=,.@:]{1,255}), and must not be a reserved name ('.', '..', '_index') {"commit": "82ea63b", "provisioner": "default"}

Steps to Reproduce the Problem

It appears that all I need to do is specify a launch template for this error to appear; a sketch of such a minimal template follows.
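
For reference, a minimal launch template along these lines could be sketched in CloudFormation as below. The AMI ID, security group, and instance profile name are placeholders for illustration, not the actual values from our account; only the template name eks121 matches the provisioner configuration further down.

# Hypothetical CloudFormation sketch of a minimal launch template.
# All IDs and names below are placeholders, not real values.
Resources:
  Eks121LaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateName: eks121
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0        # EKS-optimized AMI (placeholder)
        IamInstanceProfile:
          Name: eks-node-instance-profile     # placeholder instance profile
        SecurityGroupIds:
          - sg-0123456789abcdef0              # placeholder security group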

We're using Helm to install Karpenter in its own namespace. We set debug logging, pass in a service account role ARN, and specify the clusterName and clusterEndpoint; everything else is left at defaults. We are running Karpenter on Fargate.
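
Roughly, the Helm values we pass look like the sketch below. The role ARN, cluster name, and endpoint are placeholders, and the key names follow the chart documentation for this era, so treat them as illustrative rather than exact.

# Hypothetical values.yaml sketch for the Karpenter Helm chart (placeholders only).
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/karpenter-controller  # placeholder role ARN
clusterName: foobar                                                                  # placeholder cluster name
clusterEndpoint: https://EXAMPLE1234567890.gr7.us-east-1.eks.amazonaws.com           # placeholder endpoint
# Debug logging is also enabled through the chart's logging settings (key omitted here).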

Here is the provisioner configuration:

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  kubeletConfiguration: {}
  limits: {}
  provider:
    launchTemplate: eks121
    apiVersion: extensions.karpenter.sh/v1alpha1
    kind: AWS
    subnetSelector:
      Name: "*app*"
    tags:
      holler:role: eks
      holler:function: app
      holler:datacenter: foo00
      holler:environment: dev
      holler:product: holler
      Name: foobar
      kubernetes.io/cluster/foobar: owned
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values:
        - spot
        - on-demand
    - key: kubernetes.io/arch
      operator: In
      values:
        - amd64
  ttlSecondsAfterEmpty: 600

Resource Specs and Logs

2022-03-16T17:22:06.653Z INFO controller.provisioning Batched 2 pods in 1.000503605s {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.661Z DEBUG controller.provisioning Excluding instance type t4g.micro because there are not enough resources for daemons {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.662Z DEBUG controller.provisioning Excluding instance type t3a.nano because there are not enough resources for kubelet and system overhead {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.664Z DEBUG controller.provisioning Excluding instance type t4g.nano because there are not enough resources for kubelet and system overhead {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.664Z DEBUG controller.provisioning Excluding instance type t3.micro because there are not enough resources for daemons {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.665Z DEBUG controller.provisioning Excluding instance type t3.nano because there are not enough resources for kubelet and system overhead {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.668Z DEBUG controller.provisioning Excluding instance type t3a.micro because there are not enough resources for daemons {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:06.675Z INFO controller.provisioning Computed packing of 1 node(s) for 2 pod(s) with instance type option(s) [c3.xlarge c4.xlarge g5g.xlarge c5d.xlarge c6gn.xlarge c6a.xlarge c5ad.xlarge c5.xlarge c6g.xlarge c5a.xlarge c6gd.xlarge a1.xlarge c6i.xlarge c5n.xlarge m1.xlarge m3.xlarge g5.xlarge m5ad.xlarge m5d.xlarge] {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:40.991Z DEBUG controller.provisioning Discovered 373 EC2 instance types {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:41.051Z DEBUG controller.provisioning Discovered subnets: [subnet-x (us-east-1d) subnet-xx (us-east-1b) subnet-xxx (us-east-1a) subnet-x (us-east-1e) subnet-x (us-east-1f) subnet-x (us-east-1c)] {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:41.093Z DEBUG controller.provisioning Discovered 373 EC2 instance types {"commit": "4b14787", "provisioner": "on-demand"}
2022-03-16T17:22:41.215Z DEBUG controller.provisioning Discovered 373 EC2 instance types {"commit": "4b14787", "provisioner": "high-network"}
2022-03-16T17:22:41.329Z DEBUG controller.provisioning Discovered EC2 instance types zonal offerings {"commit": "4b14787", "provisioner": "default"}
2022-03-16T17:22:41.372Z DEBUG controller.provisioning Discovered EC2 instance types zonal offerings {"commit": "4b14787", "provisioner": "on-demand"}
2022-03-16T17:22:41.441Z DEBUG controller.provisioning Discovered EC2 instance types zonal offerings {"commit": "4b14787", "provisioner": "high-network"}
2022-03-16T17:22:46.354Z ERROR controller.provisioning Could not launch node, launching instances, with fleet error(s), InvalidParameterValue: 'karpenter.sh/provisioner-name' is not a valid tag key. Tag keys must match pattern ([0-9a-zA-Z\\-_+=,.@:]{1,255}), and must not be a reserved name ('.', '..', '_index'); UnfulfillableCapacity: Unable to fulfill capacity due to your request configuration. Please adjust your request and try again. {"commit": "4b14787", "provisioner": "default"}
jhughes-mc added the bug label on Mar 16, 2022

spring1843 (Contributor) commented Mar 17, 2022

@jhughes-mc can you please provide the launch template? I'm not able to reproduce the problem without it.

Also, you may want to look into a similar issue here, where adding instance_metadata_tags = "disabled" (corresponding to InstanceMetadataTags) fixed the problem for some people.
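
For illustration, the equivalent setting on a launch template sits under its metadata options; a rough CloudFormation fragment is below, with all other launch template fields omitted and the resource name purely hypothetical.

# Hypothetical fragment showing only the instance-metadata-tags setting.
Resources:
  ExampleLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        MetadataOptions:
          InstanceMetadataTags: disabled   # same knob as Terraform's instance_metadata_tags = "disabled"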

github-actions commented

Labeled for closure due to inactivity in 10 days.
