Fix ARN build logic to support different AWS partitions #7715

Merged
merged 1 commit into eksctl-io:main on Apr 23, 2024

Conversation

@timandy (Contributor) commented on Apr 18, 2024

Description

Fix #7713
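
A minimal sketch of the idea behind the change (illustrative only, not the actual eksctl code): derive the ARN partition from the cluster's region instead of hard-coding arn:aws, so that ARNs built for Fargate profiles, EKS Connector, and Karpenter are valid in the aws-cn and aws-us-gov partitions as well. The partitionForRegion helper and names below are hypothetical.

package main

import (
	"fmt"
	"strings"
)

// partitionForRegion is a hypothetical helper that maps a region to its AWS partition.
func partitionForRegion(region string) string {
	switch {
	case strings.HasPrefix(region, "cn-"):
		return "aws-cn"
	case strings.HasPrefix(region, "us-gov-"):
		return "aws-us-gov"
	default:
		return "aws"
	}
}

// fargateProfileARN builds the ARN used in the aws:SourceArn condition of the
// Fargate pod execution role's trust policy (see the policy later in this thread).
func fargateProfileARN(region, accountID, clusterName string) string {
	return fmt.Sprintf("arn:%s:eks:%s:%s:fargateprofile/%s/*",
		partitionForRegion(region), region, accountID, clusterName)
}

func main() {
	// Prints: arn:aws-cn:eks:cn-northwest-1:111122223333:fargateprofile/ktest/*
	fmt.Println(fargateProfileARN("cn-northwest-1", "111122223333", "ktest"))
}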

Checklist

  • Added tests that cover your change (if possible)
  • Added/modified documentation as required (such as the README.md, or the userdocs directory)
  • Manually tested
  • Made sure the title of the PR is a good description that can go into the release notes
  • (Core team) Added labels for change area (e.g. area/nodegroup) and kind (e.g. kind/improvement)

BONUS POINTS checklist: complete for good vibes and maybe prizes?! 🤯

  • Backfilled missing tests for code in same general area 🎉
  • Refactored something and made the world a better place 🌟

@github-actions bot left a comment

Hello timandy 👋 Thank you for opening a Pull Request in eksctl project. The team will review the Pull Request and aim to respond within 1-10 business days. Meanwhile, please read about the Contribution and Code of Conduct guidelines here. You can find out more information about eksctl on our website

@cPu1 added the kind/bug label on Apr 18, 2024

@cPu1 (Contributor) left a comment

LGTM but we lack access to cn-northwest-1 and are unable to test this. Can you share some output, redacting any sensitive information, from the create fargateprofile command, and if possible, for EKS connector and Karpenter as well?

@timandy (Contributor, Author) commented on Apr 19, 2024

Hi @cPu1, I have tested create fargate-profile and Karpenter in the cn-northwest-1 region; the logs are below.

2024-04-19 11:12:02 [ℹ]  eksctl version 0.177.0-dev
2024-04-19 11:12:02 [ℹ]  using region cn-northwest-1
2024-04-19 11:12:02 [✔]  using existing VPC (vpc-xx) and subnets (private:map[cn-northwest-1a:{subnet-xx cn-northwest-1a 172.51.128.0/19 0 } cn-northwest-1b:{subnet-xx2 cn-northwest-1b 172.51.160.0/19 0 } cn-northwest-1c:{subnet-xx3 cn-northwest-1c 172.51.192.0/19 0 }] public:map[])
2024-04-19 11:12:02 [!]  custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2024-04-19 11:12:02 [ℹ]  using Kubernetes version 1.29
2024-04-19 11:12:02 [ℹ]  creating EKS cluster "ktest" in "cn-northwest-1" region with Fargate profile
2024-04-19 11:12:02 [ℹ]  will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-04-19 11:12:02 [ℹ]  will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
2024-04-19 11:12:02 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=cn-northwest-1 --cluster=ktest'
2024-04-19 11:12:02 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "ktest" in "cn-northwest-1"
2024-04-19 11:12:02 [ℹ]  configuring CloudWatch logging for cluster "ktest" in "cn-northwest-1" (enabled types: api, audit, authenticator, controllerManager, scheduler & no types disabled)
2024-04-19 11:12:02 [ℹ]  
2 sequential tasks: { create cluster control plane "ktest", 
    6 sequential sub-tasks: { 
        wait for control plane to become ready,
        update CloudWatch log retention,
        create fargate profiles,
        associate IAM OIDC provider,
        2 sequential sub-tasks: { 
            create IAM role for serviceaccount "kube-system/aws-node",
            create serviceaccount "kube-system/aws-node",
        },
        restart daemonset "kube-system/aws-node",
    } 
}
2024-04-19 11:12:02 [ℹ]  building cluster stack "eksctl-ktest-cluster"
2024-04-19 11:12:03 [ℹ]  deploying stack "eksctl-ktest-cluster"
2024-04-19 11:12:33 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-cluster"
2024-04-19 11:13:03 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-cluster"
2024-04-19 11:14:03 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-cluster"
2024-04-19 11:15:03 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-cluster"
2024-04-19 11:16:03 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-cluster"
2024-04-19 11:17:03 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-cluster"
2024-04-19 11:18:03 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-cluster"
2024-04-19 11:19:03 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-cluster"
2024-04-19 11:21:03 [ℹ]  set log retention to 30 days for CloudWatch logging
2024-04-19 11:21:03 [ℹ]  creating Fargate profile "fp-default" on EKS cluster "ktest"
2024-04-19 11:23:13 [ℹ]  created Fargate profile "fp-default" on EKS cluster "ktest"
2024-04-19 11:23:43 [ℹ]  building iamserviceaccount stack "eksctl-ktest-addon-iamserviceaccount-kube-system-aws-node"
2024-04-19 11:23:43 [ℹ]  deploying stack "eksctl-ktest-addon-iamserviceaccount-kube-system-aws-node"
2024-04-19 11:23:43 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-addon-iamserviceaccount-kube-system-aws-node"
2024-04-19 11:24:13 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-addon-iamserviceaccount-kube-system-aws-node"
2024-04-19 11:24:13 [ℹ]  serviceaccount "kube-system/aws-node" already exists
2024-04-19 11:24:13 [ℹ]  updated serviceaccount "kube-system/aws-node"
2024-04-19 11:24:13 [ℹ]  daemonset "kube-system/aws-node" restarted
2024-04-19 11:24:13 [ℹ]  waiting for the control plane to become ready
2024-04-19 11:24:14 [✔]  saved kubeconfig as "/home/eks-admin/.kube/config"
2024-04-19 11:24:14 [ℹ]  no tasks
2024-04-19 11:24:14 [✔]  all EKS cluster resources for "ktest" have been created
2024-04-19 11:24:14 [✔]  created 0 nodegroup(s) in cluster "ktest"
2024-04-19 11:24:14 [✔]  created 0 managed nodegroup(s) in cluster "ktest"
2024-04-19 11:24:15 [ℹ]  1 task: { create karpenter for stack "ktest" }
2024-04-19 11:24:15 [ℹ]  building nodegroup stack "eksctl-ktest-karpenter"
2024-04-19 11:24:15 [ℹ]  deploying stack "eksctl-ktest-karpenter"
2024-04-19 11:24:15 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-karpenter"
2024-04-19 11:24:45 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-karpenter"
2024-04-19 11:25:38 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-karpenter"
2024-04-19 11:27:21 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-karpenter"
2024-04-19 11:27:21 [ℹ]  1 task: { create IAM role for serviceaccount "karpenter/karpenter" }
2024-04-19 11:27:21 [ℹ]  1 task: { create IAM role for serviceaccount "karpenter/karpenter" }
2024-04-19 11:27:21 [ℹ]  building iamserviceaccount stack "eksctl-ktest-addon-iamserviceaccount-karpenter-karpenter"
2024-04-19 11:27:21 [ℹ]  deploying stack "eksctl-ktest-addon-iamserviceaccount-karpenter-karpenter"
2024-04-19 11:27:21 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-addon-iamserviceaccount-karpenter-karpenter"
2024-04-19 11:27:51 [ℹ]  waiting for CloudFormation stack "eksctl-ktest-addon-iamserviceaccount-karpenter-karpenter"
2024-04-19 11:27:52 [ℹ]  adding identity "arn:aws-cn:iam::<myaccount-id>:role/eksctl-KarpenterNodeRole-ktest" to auth ConfigMap
2024-04-19 11:27:52 [ℹ]  adding Karpenter to cluster ktest
2024-04-19 11:28:55 [ℹ]  kubectl command should work with "/home/eks-admin/.kube/config", try 'kubectl get nodes'
2024-04-19 11:28:55 [✔]  EKS cluster "ktest" in "cn-northwest-1" region is ready


And the trust policy of the FargatePodExecutionRole is:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks-fargate-pods.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws-cn:eks:cn-northwest-1:<myaccount-id>:fargateprofile/ktest/*"
                }
            }
        }
    ]
}
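
Note that the aws:SourceArn condition now carries the aws-cn partition for this region, which is the behaviour this PR fixes.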

However, I'm not in a position to test registering an external cluster, because I don't have one.

@cPu1 (Contributor) commented on Apr 22, 2024

@timandy thanks for testing the two use cases. I will try to find out the ARN that should be used for EKS Connector.

@cPu1 (Contributor) left a comment

LGTM. Thanks for the quick fix.

@cPu1 merged commit a6bc072 into eksctl-io:main on Apr 23, 2024
9 of 10 checks passed
Successfully merging this pull request may close these issues:

[Bug] eksctl create fargate cluster in china region error