[Bug] "vpc-cni" addon doesn't install correctly #8035

Open
myspotontheweb opened this issue Nov 16, 2024 · 2 comments

myspotontheweb commented Nov 16, 2024

What were you trying to accomplish?

Create an EKS cluster that uses a Fargate profile

I ran the following command and expected it to complete without errors or warning messages:

eksctl create cluster --name test --version 1.31 --region eu-west-1 --fargate

What happened?

Looking at the output, we can see a warning message about problems installing the "vpc-cni" addon:

2024-11-16 19:41:34 [ℹ]  eksctl version 0.194.0
2024-11-16 19:41:34 [ℹ]  using region eu-west-1
2024-11-16 19:41:34 [ℹ]  setting availability zones to [eu-west-1b eu-west-1a eu-west-1c]
2024-11-16 19:41:34 [ℹ]  subnets for eu-west-1b - public:192.168.0.0/19 private:192.168.96.0/19
2024-11-16 19:41:34 [ℹ]  subnets for eu-west-1a - public:192.168.32.0/19 private:192.168.128.0/19
2024-11-16 19:41:34 [ℹ]  subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
2024-11-16 19:41:34 [ℹ]  using Kubernetes version 1.31
2024-11-16 19:41:34 [ℹ]  creating EKS cluster "test" in "eu-west-1" region with Fargate profile
2024-11-16 19:41:34 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=test'
2024-11-16 19:41:34 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test" in "eu-west-1"
2024-11-16 19:41:34 [ℹ]  CloudWatch logging will not be enabled for cluster "test" in "eu-west-1"
2024-11-16 19:41:34 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-west-1 --cluster=test'
2024-11-16 19:41:34 [ℹ]  default addons vpc-cni, kube-proxy, coredns were not specified, will install them as EKS addons
2024-11-16 19:41:34 [ℹ]  
2 sequential tasks: { create cluster control plane "test", 
    3 sequential sub-tasks: { 
        1 task: { create addons },
        wait for control plane to become ready,
        create fargate profiles,
    } 
}
2024-11-16 19:41:34 [ℹ]  building cluster stack "eksctl-test-cluster"
2024-11-16 19:41:35 [ℹ]  deploying stack "eksctl-test-cluster"
2024-11-16 19:42:05 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2024-11-16 19:42:35 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2024-11-16 19:43:35 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2024-11-16 19:44:35 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2024-11-16 19:45:35 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2024-11-16 19:46:35 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2024-11-16 19:47:36 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2024-11-16 19:48:36 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2024-11-16 19:48:37 [!]  recommended policies were found for "vpc-cni" addon, but since OIDC is disabled on the cluster, eksctl cannot configure the requested permissions; the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; after addon creation is completed, add all recommended policies to the config file, under `addon.PodIdentityAssociations`, and run `eksctl update addon`
2024-11-16 19:48:37 [ℹ]  creating addon
2024-11-16 19:48:37 [ℹ]  successfully created addon
2024-11-16 19:48:38 [ℹ]  creating addon
2024-11-16 19:48:38 [ℹ]  successfully created addon
2024-11-16 19:48:38 [ℹ]  creating addon
2024-11-16 19:48:39 [ℹ]  successfully created addon
2024-11-16 19:50:40 [ℹ]  creating Fargate profile "fp-default" on EKS cluster "test"
2024-11-16 19:54:58 [ℹ]  created Fargate profile "fp-default" on EKS cluster "test"
2024-11-16 19:55:28 [ℹ]  "coredns" is now schedulable onto Fargate
2024-11-16 19:56:31 [ℹ]  "coredns" is now scheduled onto Fargate
2024-11-16 19:56:31 [ℹ]  "coredns" pods are now scheduled onto Fargate
2024-11-16 19:56:31 [ℹ]  waiting for the control plane to become ready
2024-11-16 19:56:32 [✔]  saved kubeconfig as "/home/mark/.kube/config"
2024-11-16 19:56:32 [ℹ]  no tasks
2024-11-16 19:56:32 [✔]  all EKS cluster resources for "test" have been created
2024-11-16 19:56:32 [✔]  created 0 nodegroup(s) in cluster "test"
2024-11-16 19:56:32 [✔]  created 0 managed nodegroup(s) in cluster "test"
2024-11-16 19:56:33 [ℹ]  kubectl command should work with "/home/mark/.kube/config", try 'kubectl get nodes'
2024-11-16 19:56:33 [✔]  EKS cluster "test" in "eu-west-1" region is ready

The snippet of concern:

2024-11-16 19:48:37 [!]  recommended policies were found for "vpc-cni" addon, 
but since OIDC is disabled on the cluster, 
eksctl cannot configure the requested permissions; 
the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; 
after addon creation is completed, add all recommended policies to the config file, 
under `addon.PodIdentityAssociations`, 
and run `eksctl update addon`
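
In effect, the warning asks for a follow-up step roughly like the following (a sketch only; the `podIdentityAssociations` field names and the managed policy ARN are taken from the eksctl ClusterConfig schema and EKS docs rather than from this output, so treat them as assumptions):

# Hypothetical follow-up config (cluster.yaml): grant the vpc-cni service account
# the CNI managed policy via a pod identity association
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: eu-west-1

addons:
- name: vpc-cni
  podIdentityAssociations:
  - namespace: kube-system
    serviceAccountName: aws-node
    permissionPolicyARNs:
    - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

$ eksctl update addon -f cluster.yaml

That is, a plain `eksctl create cluster --fargate` leaves this extra IAM step to the user.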

How to reproduce it?

Method 1 - Using the CLI command shown above

Method 2 - Using a config file

The issue can alternatively be reproduced using a config file:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test1
  region: eu-west-1
  version: "1.31"

fargateProfiles:
- name: fp-default
  selectors:
  - namespace: default
  - namespace: kube-system

With the following output:

$ eksctl create cluster -f test1.yaml
2024-11-16 19:21:10 [ℹ]  eksctl version 0.194.0
2024-11-16 19:21:10 [ℹ]  using region eu-west-1
2024-11-16 19:21:10 [ℹ]  setting availability zones to [eu-west-1b eu-west-1a eu-west-1c]
2024-11-16 19:21:10 [ℹ]  subnets for eu-west-1b - public:192.168.0.0/19 private:192.168.96.0/19
2024-11-16 19:21:10 [ℹ]  subnets for eu-west-1a - public:192.168.32.0/19 private:192.168.128.0/19
2024-11-16 19:21:10 [ℹ]  subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
2024-11-16 19:21:10 [ℹ]  using Kubernetes version 1.31
2024-11-16 19:21:10 [ℹ]  creating EKS cluster "test1" in "eu-west-1" region with Fargate profile
2024-11-16 19:21:10 [ℹ]  will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-11-16 19:21:10 [ℹ]  will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
2024-11-16 19:21:10 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=test1'
2024-11-16 19:21:10 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test1" in "eu-west-1"
2024-11-16 19:21:10 [ℹ]  CloudWatch logging will not be enabled for cluster "test1" in "eu-west-1"
2024-11-16 19:21:10 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-west-1 --cluster=test1'
2024-11-16 19:21:10 [ℹ]  default addons kube-proxy, coredns, vpc-cni were not specified, will install them as EKS addons
2024-11-16 19:21:10 [ℹ]  
2 sequential tasks: { create cluster control plane "test1", 
    3 sequential sub-tasks: { 
        1 task: { create addons },
        wait for control plane to become ready,
        create fargate profiles,
    } 
}
2024-11-16 19:21:10 [ℹ]  building cluster stack "eksctl-test1-cluster"
2024-11-16 19:21:11 [ℹ]  deploying stack "eksctl-test1-cluster"
2024-11-16 19:21:41 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:22:11 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:23:11 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:24:11 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:25:11 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:26:12 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:27:12 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:28:12 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:29:12 [ℹ]  waiting for CloudFormation stack "eksctl-test1-cluster"
2024-11-16 19:29:14 [ℹ]  creating addon
2024-11-16 19:29:14 [ℹ]  successfully created addon
2024-11-16 19:29:14 [ℹ]  creating addon
2024-11-16 19:29:15 [ℹ]  successfully created addon
2024-11-16 19:29:15 [!]  recommended policies were found for "vpc-cni" addon, but since OIDC is disabled on the cluster, eksctl cannot configure the requested permissions; the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; after addon creation is completed, add all recommended policies to the config file, under `addon.PodIdentityAssociations`, and run `eksctl update addon`
2024-11-16 19:29:15 [ℹ]  creating addon
2024-11-16 19:29:15 [ℹ]  successfully created addon
2024-11-16 19:31:18 [ℹ]  creating Fargate profile "fp-default" on EKS cluster "test1"
2024-11-16 19:33:28 [ℹ]  created Fargate profile "fp-default" on EKS cluster "test1"
2024-11-16 19:33:58 [ℹ]  "coredns" is now schedulable onto Fargate
2024-11-16 19:35:01 [ℹ]  "coredns" is now scheduled onto Fargate
2024-11-16 19:35:01 [ℹ]  "coredns" pods are now scheduled onto Fargate
2024-11-16 19:35:01 [ℹ]  waiting for the control plane to become ready
2024-11-16 19:35:02 [✔]  saved kubeconfig as "/home/mark/.kube/config"
2024-11-16 19:35:02 [ℹ]  no tasks
2024-11-16 19:35:02 [✔]  all EKS cluster resources for "test1" have been created
2024-11-16 19:35:02 [✔]  created 0 nodegroup(s) in cluster "test1"
2024-11-16 19:35:02 [✔]  created 0 managed nodegroup(s) in cluster "test1"
2024-11-16 19:35:03 [ℹ]  kubectl command should work with "/home/mark/.kube/config", try 'kubectl get nodes'
2024-11-16 19:35:03 [✔]  EKS cluster "test1" in "eu-west-1" region is ready

Logs

Anything else we need to know?

1/
What OS are you using?

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.1 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo

2/
Are you using a downloaded binary or did you compile eksctl?

Downloaded binary

3/
What type of AWS credentials are you using (i.e. default/named profile, MFA)? - please don't include actual credentials though!

Setting the following environment variables:

  • AWS_SECRET_ACCESS_KEY
  • AWS_ACCESS_KEY_ID
  • AWS_SESSION_TOKEN

Versions

$ eksctl info
eksctl version: 0.194.0
kubectl version: v1.31.2
OS: linux
$ aws --version
aws-cli/2.21.3 Python/3.12.6 Linux/6.8.0-48-generic exe/x86_64.ubuntu.24

Contributor

Hello myspotontheweb 👋 Thank you for opening an issue in the eksctl project. The team will review the issue and aim to respond within 1-5 business days. Meanwhile, please read about the Contribution and Code of Conduct guidelines here. You can find out more information about eksctl on our website.

myspotontheweb commented Nov 16, 2024

Analysis

I suspect this issue is related to a recent change in eksctl, where 3 addons are now being automatically installed.

When a cluster is created, EKS automatically installs VPC CNI, CoreDNS and kube-proxy as self-managed addons
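
As a quick sanity check of what eksctl actually registered as EKS addons (as opposed to the self-managed defaults), something like the following should list them (cluster `test` from above; output omitted):

$ eksctl get addon --cluster test --region eu-west-1
$ aws eks describe-addon --cluster-name test --addon-name vpc-cni --region eu-west-1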

Related docs:

Work-around: 1

I discovered a work-around, but it requires the use of a configuration file. Note the addition of "autoApplyPodIdentityAssociations" at the end:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test2
  region: eu-west-1
  version: "1.31"

fargateProfiles:
- name: fp-default
  selectors:
  - namespace: default
  - namespace: kube-system

addonsConfig:
  autoApplyPodIdentityAssociations: true

Applied as follows:

$ eksctl create cluster -f test2.yaml
2024-11-16 19:22:23 [ℹ]  eksctl version 0.194.0
2024-11-16 19:22:23 [ℹ]  using region eu-west-1
2024-11-16 19:22:23 [ℹ]  setting availability zones to [eu-west-1b eu-west-1a eu-west-1c]
2024-11-16 19:22:23 [ℹ]  subnets for eu-west-1b - public:192.168.0.0/19 private:192.168.96.0/19
2024-11-16 19:22:23 [ℹ]  subnets for eu-west-1a - public:192.168.32.0/19 private:192.168.128.0/19
2024-11-16 19:22:23 [ℹ]  subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
2024-11-16 19:22:23 [ℹ]  using Kubernetes version 1.31
2024-11-16 19:22:23 [ℹ]  creating EKS cluster "test2" in "eu-west-1" region with Fargate profile
2024-11-16 19:22:23 [ℹ]  will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-11-16 19:22:23 [ℹ]  will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
2024-11-16 19:22:23 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=test2'
2024-11-16 19:22:23 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test2" in "eu-west-1"
2024-11-16 19:22:23 [ℹ]  CloudWatch logging will not be enabled for cluster "test2" in "eu-west-1"
2024-11-16 19:22:23 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-west-1 --cluster=test2'
2024-11-16 19:22:23 [ℹ]  default addons vpc-cni, kube-proxy, coredns were not specified, will install them as EKS addons
2024-11-16 19:22:23 [ℹ]  
2 sequential tasks: { create cluster control plane "test2", 
    3 sequential sub-tasks: { 
        1 task: { create addons },
        wait for control plane to become ready,
        create fargate profiles,
    } 
}
2024-11-16 19:22:23 [ℹ]  building cluster stack "eksctl-test2-cluster"
2024-11-16 19:22:24 [ℹ]  deploying stack "eksctl-test2-cluster"
2024-11-16 19:22:54 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:23:24 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:24:24 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:25:24 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:26:24 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:27:24 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:28:25 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:29:25 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:30:25 [ℹ]  waiting for CloudFormation stack "eksctl-test2-cluster"
2024-11-16 19:30:26 [ℹ]  "addonsConfig.autoApplyPodIdentityAssociations" is set to true; will lookup recommended pod identity configuration for "vpc-cni" addon
2024-11-16 19:30:27 [ℹ]  deploying stack "eksctl-test2-addon-vpc-cni-podidentityrole-aws-node"
2024-11-16 19:30:27 [ℹ]  waiting for CloudFormation stack "eksctl-test2-addon-vpc-cni-podidentityrole-aws-node"
2024-11-16 19:31:02 [ℹ]  waiting for CloudFormation stack "eksctl-test2-addon-vpc-cni-podidentityrole-aws-node"
2024-11-16 19:31:02 [ℹ]  creating addon
2024-11-16 19:31:03 [ℹ]  successfully created addon
2024-11-16 19:31:03 [ℹ]  creating addon
2024-11-16 19:31:04 [ℹ]  successfully created addon
2024-11-16 19:31:04 [ℹ]  creating addon
2024-11-16 19:31:04 [ℹ]  successfully created addon
2024-11-16 19:33:07 [ℹ]  creating Fargate profile "fp-default" on EKS cluster "test2"
2024-11-16 19:35:17 [ℹ]  created Fargate profile "fp-default" on EKS cluster "test2"
2024-11-16 19:35:47 [ℹ]  "coredns" is now schedulable onto Fargate
2024-11-16 19:36:50 [ℹ]  "coredns" is now scheduled onto Fargate
2024-11-16 19:36:50 [ℹ]  "coredns" pods are now scheduled onto Fargate
2024-11-16 19:36:50 [ℹ]  waiting for the control plane to become ready
2024-11-16 19:36:51 [✔]  saved kubeconfig as "/home/mark/.kube/config"
2024-11-16 19:36:51 [ℹ]  no tasks
2024-11-16 19:36:51 [✔]  all EKS cluster resources for "test2" have been created
2024-11-16 19:36:51 [✔]  created 0 nodegroup(s) in cluster "test2"
2024-11-16 19:36:51 [✔]  created 0 managed nodegroup(s) in cluster "test2"
2024-11-16 19:36:52 [ℹ]  kubectl command should work with "/home/mark/.kube/config", try 'kubectl get nodes'
2024-11-16 19:36:52 [✔]  EKS cluster "test2" in "eu-west-1" region is ready

Note the following snippet, which demonstrates that the addon is now being configured correctly:

2024-11-16 19:30:26 [ℹ]  "addonsConfig.autoApplyPodIdentityAssociations" is set to true; will lookup recommended pod identity configuration for "vpc-cni" addon
2024-11-16 19:30:27 [ℹ]  deploying stack "eksctl-test2-addon-vpc-cni-podidentityrole-aws-node"
2024-11-16 19:30:27 [ℹ]  waiting for CloudFormation stack "eksctl-test2-addon-vpc-cni-podidentityrole-aws-node"
2024-11-16 19:31:02 [ℹ]  waiting for CloudFormation stack "eksctl-test2-addon-vpc-cni-podidentityrole-aws-node"
2024-11-16 19:31:02 [ℹ]  creating addon
2024-11-16 19:31:03 [ℹ]  successfully created addon
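
To double-check, the resulting pod identity association can also be listed once the cluster is up (not shown in the log above):

$ eksctl get podidentityassociation --cluster test2 --region eu-west-1
$ aws eks list-pod-identity-associations --cluster-name test2 --region eu-west-1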

Work-around: 2

To preserve a one-liner, the --dry-run output can be patched with yq and piped straight back into eksctl:

eksctl create cluster --name default-fargate --version 1.31 --region eu-west-1 --fargate --dry-run | yq '.addonsConfig.autoApplyPodIdentityAssociations=true' | eksctl create cluster -f -
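
The same thing spelled out in separate steps, for readability (assumes yq v4 syntax for the in-place edit):

eksctl create cluster --name default-fargate --version 1.31 --region eu-west-1 --fargate --dry-run > cluster.yaml
yq -i '.addonsConfig.autoApplyPodIdentityAssociations = true' cluster.yaml
eksctl create cluster -f cluster.yaml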

Question

Since the "vpc--cni" addon is now installed by default, should the "autoApplyPodIdentityAssociations" setting also be set by default?
