Don't attach the control plane security group to managed nodegroups #4860
Conversation
When creating Managed Nodegroups from an imported cluster, the control plane security group was getting attached to the worker nodes. Now only the cluster security group will get attached, which is consistent with the behavior of adding Managed Nodegroups to clusters created by eksctl.
@tompiscitell thanks for the nicely written PR. Core team will review it soon 👍🏻
Hi @tompiscitell. That document says earlier than 1.14. What makes you think this is not required for versions above 1.14? Oh, I see, I read the whole doc instead of only that section. ;)
```diff
 func (si *SpecConfigImporter) SecurityGroups() gfnt.Slice {
-	return gfnt.Slice{si.ControlPlaneSecurityGroup(), si.ClusterSecurityGroup()}
+	return gfnt.Slice{si.ClusterSecurityGroup()}
 }
```
This change only affects unowned clusters. For owned clusters we are already doing this. I expect this is actually not doing anything, but I have to further check the code.
I see now. It is used; it's the same as for a managed cluster.
> This change only affects unowned clusters. For owned clusters we are already doing this. I expect this is actually not doing anything, but I have to further check the code.
why would this change only affect unowned? I'm slightly confused :/
Owned clusters use the other implementation of the Importer interface, StackConfigImporter. See the if block here: https://github.com/weaveworks/eksctl/blob/main/pkg/actions/nodegroup/create.go#L169
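For illustration, here is a minimal, self-contained sketch of that split. The interface and implementation names come from the discussion above, but the constructors, fields, and plain-string security group IDs are simplified stand-ins, not the actual eksctl code (which builds gfnt.Slice values for the CloudFormation template):

```go
package main

import "fmt"

// Importer abstracts where nodegroup networking details come from.
// (Simplified: the real eksctl interface returns gfnt.Slice values.)
type Importer interface {
	SecurityGroups() []string
}

// StackConfigImporter is used for owned clusters; values come from the
// outputs of the eksctl-managed CloudFormation cluster stack.
type StackConfigImporter struct {
	clusterSecurityGroup string
}

func (si StackConfigImporter) SecurityGroups() []string {
	return []string{si.clusterSecurityGroup}
}

// SpecConfigImporter is used for unowned (imported) clusters; values come
// from the ClusterConfig spec. After this PR it returns only the cluster
// security group, matching the owned-cluster behaviour.
type SpecConfigImporter struct {
	controlPlaneSecurityGroup string // still known, but no longer attached to worker nodes
	clusterSecurityGroup      string
}

func (si SpecConfigImporter) SecurityGroups() []string {
	return []string{si.clusterSecurityGroup}
}

func main() {
	// Stand-in for the if block in pkg/actions/nodegroup/create.go: when no
	// eksctl-managed cluster stack is found, the spec-based importer is used.
	eksctlClusterStackFound := false

	var importer Importer
	if eksctlClusterStackFound {
		importer = StackConfigImporter{clusterSecurityGroup: "sg-cluster"}
	} else {
		importer = SpecConfigImporter{
			controlPlaneSecurityGroup: "sg-control-plane",
			clusterSecurityGroup:      "sg-cluster",
		}
	}

	// Either way, only the cluster security group is attached to the managed nodegroup.
	fmt.Println(importer.SecurityGroups()) // [sg-cluster]
}
```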
ahh makes sense, thank you!
@tompiscitell Ok, now that I understand the code a bit better, let me summarise. This change is only applicable to unowned clusters, meaning a cluster that was not created by eksctl. So what you need:
Thanks! :)
Requesting manual test steps, then LGTM.
@Skarlso I made a small project for the manual test: https://github.com/tompiscitell/eks-testing in case you want to follow along. It uses the AWS CDK to create an EKS cluster and has some scripts to generate your cluster config and inspect the worker security groups. Here is my cluster config YAML with the cluster created via the AWS CDK (note: these are all throw-away resources that are already deleted, so I'm not bothering to redact):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a
  region: us-west-2
vpc:
  id: vpc-0aee84ee8845f35e0
  securityGroup: sg-0d715dd9c05695143
  subnets:
    public:
      us-west-2a:
        id: subnet-07d14af61188fa94b
      us-west-2b:
        id: subnet-0db85e2c5476fd65c
    private:
      us-west-2b:
        id: subnet-002a374fa1fee97aa
      us-west-2a:
        id: subnet-099811bceaee69927
managedNodeGroups:
  - name: workers
    amiFamily: Bottlerocket
    privateNetworking: true
    desiredCapacity: 1
```

I then used eksctl to create the managed nodegroups specified in the config:

```
$ ../eksctl/eksctl create nodegroup -f cluster.yml
2022-03-04 11:32:19 [ℹ] eksctl version 0.86.0-dev+c65b6911.2022-02-22T16:22:38Z
2022-03-04 11:32:19 [ℹ] using region us-west-2
2022-03-04 11:32:20 [ℹ] will use version 1.21 for new nodegroup(s) based on control plane version
2022-03-04 11:32:21 [!] no eksctl-managed CloudFormation stacks found for "Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a", will attempt to create nodegroup(s) on non eksctl-managed cluster
2022-03-04 11:32:22 [ℹ] nodegroup "workers" will use "" [Bottlerocket/1.21]
2022-03-04 11:32:22 [ℹ] 1 existing nodegroup(s) (ClusterNodegroupDefaultCapa-WAEdCEpBwbBD) will be excluded
2022-03-04 11:32:22 [ℹ] 1 nodegroup (workers) was included (based on the include/exclude rules)
2022-03-04 11:32:22 [ℹ] will create a CloudFormation stack for each of 1 managed nodegroups in cluster "Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a"
2022-03-04 11:32:22 [ℹ] 1 task: { 1 task: { 1 task: { create managed nodegroup "workers" } } }
2022-03-04 11:32:22 [ℹ] building managed nodegroup stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:32:23 [ℹ] deploying stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:32:23 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:32:39 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:32:59 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:33:19 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:33:38 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:33:55 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:34:15 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:34:30 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:34:46 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:35:06 [ℹ] waiting for CloudFormation stack "eksctl-Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a-nodegroup-workers"
2022-03-04 11:35:07 [ℹ] no tasks
2022-03-04 11:35:07 [✔] created 0 nodegroup(s) in cluster "Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a"
2022-03-04 11:35:07 [ℹ] nodegroup "workers" has 1 node(s)
2022-03-04 11:35:07 [ℹ] node "ip-10-0-213-133.us-west-2.compute.internal" is ready
2022-03-04 11:35:07 [ℹ] waiting for at least 1 node(s) to become ready in "workers"
2022-03-04 11:35:07 [ℹ] nodegroup "workers" has 1 node(s)
2022-03-04 11:35:07 [ℹ] node "ip-10-0-213-133.us-west-2.compute.internal" is ready
2022-03-04 11:35:07 [✔] created 1 managed nodegroup(s) in cluster "Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a"
2022-03-04 11:35:08 [ℹ] checking security group configuration for all nodegroups
2022-03-04 11:35:08 [ℹ] all nodegroups have up-to-date cloudformation templates
```

And you can see it only attaches the cluster security group (the one created by EKS):
The node (the second one in the output) still joined the cluster and is ready. The other two are ones that the CDK creates by default:
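For anyone reproducing that check without the helper scripts, here is a rough sketch using the AWS SDK for Go v2 (not the script from the eks-testing repo). It assumes the managed nodegroup instances carry the eks:cluster-name tag that EKS normally applies to managed nodegroup instances, and simply lists the security groups attached to each matching instance:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

func main() {
	clusterName := "Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a"

	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-west-2"))
	if err != nil {
		log.Fatal(err)
	}
	client := ec2.NewFromConfig(cfg)

	// Find the worker instances by the eks:cluster-name tag and print the
	// security groups attached to each one.
	out, err := client.DescribeInstances(context.TODO(), &ec2.DescribeInstancesInput{
		Filters: []types.Filter{
			{Name: aws.String("tag:eks:cluster-name"), Values: []string{clusterName}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, reservation := range out.Reservations {
		for _, instance := range reservation.Instances {
			fmt.Println(aws.ToString(instance.InstanceId))
			for _, sg := range instance.SecurityGroups {
				fmt.Printf("  %s (%s)\n", aws.ToString(sg.GroupId), aws.ToString(sg.GroupName))
			}
		}
	}
}
```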
@tompiscitell fantastic work! Thank you very much!
Description
When creating Managed Nodegroups from an imported cluster, the control plane security group was getting attached to the worker nodes. This security group is supposed to be attached to the control plane ENIs according to the EKS docs. With this PR, only the cluster security group will get attached to Managed Nodegroups. This is consistent with the behavior of adding Managed Nodegroups to clusters created by eksctl.
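For context, the two security groups can be seen side by side in the EKS DescribeCluster response: clusterSecurityGroupId is the EKS-created cluster security group that should go on the nodes, while securityGroupIds are the additional security groups supplied for the control plane ENIs at cluster creation. Below is a small sketch with the AWS SDK for Go v2; the region and cluster name are just placeholders taken from the manual test above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eks"
)

func main() {
	clusterName := "Cluster9EE0221C-17854dabb51d466ebd19651517aaf92a"

	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-west-2"))
	if err != nil {
		log.Fatal(err)
	}
	client := eks.NewFromConfig(cfg)

	out, err := client.DescribeCluster(context.TODO(), &eks.DescribeClusterInput{
		Name: aws.String(clusterName),
	})
	if err != nil {
		log.Fatal(err)
	}

	vpcCfg := out.Cluster.ResourcesVpcConfig
	// The EKS-created cluster security group, shared by the control plane and
	// the nodes; this is the one managed nodegroups should use.
	fmt.Println("cluster security group:", aws.ToString(vpcCfg.ClusterSecurityGroupId))
	// Additional security groups attached to the control plane ENIs only; after
	// this PR they are no longer attached to managed nodegroup instances.
	fmt.Println("control plane (additional) security groups:", vpcCfg.SecurityGroupIds)
}
```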
Checklist
- Added/updated documentation as required for this change (the README.md, or the userdocs directory)
- Added labels for change area (e.g. area/nodegroup) and kind (e.g. kind/improvement)

BONUS POINTS checklist: complete for good vibes and maybe prizes?! 🤯