Karpenter ignores extra labels coming from kubelet-extra-args in launchtemplate #1681
Comments
@yerenkow Do you use this label to force pods to schedule to particular zones?
What we want is for Karpenter to pick whichever availability zone is best suited at the moment, rather than specifying zones in the pods' requirements.
Does the label show up on your node eventually? When the kubelet finally comes up, it should report the label.
By the time the node is Ready, aws-node has already completed its setup (and failed to properly initialize networking), so the payload pod is assigned and fails because there is no network.
@yerenkow We've looked into this, but for now it seems the only solution is to use one provisioner per custom label, as you suggested earlier. Since we create the Node object initially, the kubelet will only merge in a fixed set of label names when it starts up.
Labeled for closure due to inactivity in 10 days.
We see the same issue when using Karpenter with ENIConfig. What we'd like is for Karpenter to append the labels in the provisioner to the existing labels (created from the LT user data), instead of replacing them.
Labeled for closure due to inactivity in 10 days.
Maybe one option would be to allow the availability zone to be looked up by Karpenter and templated into label values set in a provisioner. For example, something like this:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  labels:
    eniconfig: my-custom-prefix-{{ availability-zone }}
```

This way the label could be set using the already-functional labels element in the provisioner API, while allowing the dynamic availability zone to be included in the label value without needing one provisioner per AZ.
I'm missing the issue here. What's wrong with setting these special custom labels in the user data? They do eventually propagate to the node, right? I'm wary of special-casing this due to the complexity involved and the fact that we haven't seen this issue beyond this special case.
Hello @ellistarn |
The challenge is that we can't know about the dynamic labels before the node comes online. We have well-known labels to support this type of use case: https://karpenter.sh/v0.18.1/aws/instance-types/. Perhaps we need to expose storage type (ebs, instance-store, etc). I'm not sure what the alternative is from an implementation perspective.
Nice @jonathan-innis, I'll test v0.28.0, and I'll let you know.
Hello @jonathan-innis, I confirm that the latest release, v0.28.0, fixes this issue.
Version
Karpenter: v0.8.2
Kubernetes: v1.21
Expected Behavior
The node should set labels coming from extra kubelet args (in the launch template user data) early, so that daemonsets can see them.
Actual Behavior
It doesn't work: the aws-node pod doesn't see the label.
Steps to Reproduce the Problem
We use a launch template with certain customization done in the user-data script. This results in every node getting an extra label; in our case, this label is based on the current AZ. We need certain network customization in every AZ (more precisely, for every subnet in every AZ), so we have several different ENI configs. Here's the sample user data:

```
--kubelet-extra-args "--node-labels=k8s.amazonaws.com/eniConfig=eni-config-$SUFFIX"
```
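For context, a minimal user-data sketch that produces such a per-AZ label might look like the following. The IMDSv2 lookup for the availability zone, the cluster name, and the `eni-config-*` naming are assumptions for illustration, not taken verbatim from this report:

```shell
#!/bin/bash
# Hypothetical sketch: derive the label suffix from the instance's AZ via IMDSv2.
# "my-cluster" and the eni-config-* prefix are placeholder values.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
SUFFIX=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/availability-zone)
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args "--node-labels=k8s.amazonaws.com/eniConfig=eni-config-$SUFFIX"
```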
Then, during provisioning of a new node, check the logs of the aws-node pod. They should contain the proper config name coming from the node labels.
Resource Specs and Logs
Here are sample logs showing the wrong behavior:

```
{"level":"info","ts":"2022-03-23T00:27:37.668Z","caller":"ipamd/ipamd.go:795","msg":"Found ENI Config Name: default"}
{"level":"error","ts":"2022-03-23T00:27:37.769Z","caller":"ipamd/ipamd.go:795","msg":"error while retrieving eniconfig: ENIConfig.crd.k8s.amazonaws.com \"default\" not found"}
{"level":"error","ts":"2022-03-23T00:27:37.769Z","caller":"ipamd/ipamd.go:769","msg":"Failed to get pod ENI config"}
```

Sample expected logs:

```
{"level":"info","ts":"2022-04-13T17:51:01.444Z","caller":"ipamd/ipamd.go:795","msg":"Found ENI Config Name: eni-config-customized-1"}
```
Note that if I put that label into the provisioner, then it can be discovered just fine. But, unfortunately, I'd then have to create as many provisioners as I have subnets. Is that the intended behavior?
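For reference, the one-provisioner-per-subnet workaround mentioned above would look roughly like this (a sketch against the v1alpha5 Provisioner API; the names, zone, and label value are illustrative, not from the original report):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: az-a                # one such provisioner per AZ/subnet
spec:
  labels:
    k8s.amazonaws.com/eniConfig: eni-config-us-east-1a   # illustrative value
  requirements:
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-east-1a"]
```

Because the label is set by Karpenter on the Node object at creation time, it is visible to the aws-node daemonset before networking setup, which avoids the race described above, at the cost of duplicating provisioners.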