fix(eks): private endpoint access doesn't work with Vpc.fromLookup
#9544
Conversation
Title does not follow the guidelines of Conventional Commits. Please adjust title before merge.
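For context, the linked issue (#9542) is about a cluster with private-only endpoint access placed in a VPC imported with `Vpc.fromLookup`. A minimal sketch of that scenario, assuming CDK v1 module names (the account, region, VPC ID, and Kubernetes version are all illustrative):

```ts
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as eks from '@aws-cdk/aws-eks';

class ReproStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    // Vpc.fromLookup requires an explicitly resolved environment.
    super(scope, id, { env: { account: '111111111111', region: 'us-east-1' } });

    // Import an existing VPC instead of defining one in this stack.
    const vpc = ec2.Vpc.fromLookup(this, 'Vpc', { vpcId: 'vpc-1234567890abcdef0' });

    new eks.Cluster(this, 'Cluster', {
      vpc,
      version: eks.KubernetesVersion.V1_17,
      // Private-only endpoint access: the kubectl handler Lambda must run
      // inside the VPC to reach the cluster's API server.
      endpointAccess: eks.EndpointAccess.PRIVATE,
    });
  }
}
```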
Putting my 2 cents in the PR as well. Hi @iliapolo, thanks a lot for raising this, really appreciated. Just my 2 cents on the subnet selection for the kubectl handler Lambda, given the enterprise context:

i. In most cases we will configure `vpcSubnets` when calling `eks.Cluster` or `FargateCluster` to create EKS clusters, so we can make sure the ENIs of the EKS control plane are placed into the required subnets. But whether or not `vpcSubnets` is set, I guess by default the kubectl handler Lambda can simply be associated with the same subnets as the EKS control plane.

ii. However, that might not be enough. The kubectl handler Lambda is fundamentally applying kubectl/helm, so arguably it is mostly doing data-plane work. It would therefore be great to have a separate subnet selection option in `eks.Cluster` and `eks.FargateCluster` specifically for the kubectl handler Lambda, something like `kubectlSubnets` alongside `kubectlEnvironment` (see the sketch below).

Having said that, I fully appreciate that you need to address this issue by considering the whole thing holistically, so please feel free to ignore this suggestion if it contradicts other factors you need to consider. Thanks a lot in advance. Jasper
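To make the suggestion concrete, here is roughly what such an option could look like. `vpcSubnets` and `kubectlEnvironment` are real props of `eks.Cluster`; `kubectlSubnets` is hypothetical and does not exist in the API (the subnet group names and proxy URL are also illustrative):

```ts
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as eks from '@aws-cdk/aws-eks';

declare const stack: cdk.Stack;
declare const vpc: ec2.IVpc; // the enterprise VPC from the surrounding discussion

new eks.Cluster(stack, 'Cluster', {
  vpc,
  version: eks.KubernetesVersion.V1_17,
  endpointAccess: eks.EndpointAccess.PRIVATE,
  // Existing prop: the control-plane ENIs follow this selection.
  vpcSubnets: [{ subnetGroupName: 'control-plane-zone' }],
  // Existing prop: extra environment variables for the kubectl handler Lambda.
  kubectlEnvironment: { HTTPS_PROXY: 'http://proxy.internal:3128' },
  // Suggested and hypothetical, not part of the API: a dedicated subnet
  // selection for the kubectl handler Lambda.
  // kubectlSubnets: [{ subnetGroupName: 'data-plane-zone' }],
});
```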
Hi @JasperW01, my concern with providing a separate `kubectlSubnets` option: as I understand, you are talking about an option to specify just a subset of private subnets from the cluster subnets, right? Do you have a specific use case where using all private subnets of the cluster would be somehow insufficient or harmful? Thanks
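In `SubnetSelection` terms, the two alternatives being discussed look roughly like this (a sketch; the subnet group name is illustrative):

```ts
import * as ec2 from '@aws-cdk/aws-ec2';

// "All private subnets of the cluster": every private subnet in the VPC.
const allPrivate: ec2.SubnetSelection = { subnetType: ec2.SubnetType.PRIVATE };

// "Just a subset": only the subnets of one named group / security zone.
const oneZone: ec2.SubnetSelection = { subnetGroupName: 'security-zone-a' };
```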
Hi @iliapolo, second thought after reading your feedback: to keep it simple, I think let's just treat the kubectl handler as part of the control plane, so we can align its subnet association with whatever subnets are selected for the control plane. Please allow me to recap our use case so we are a bit clearer:

i. We have several artificial security zones, each with one subnet per AZ, so together we have many subnets. As a general rule, each workload needs to be positioned in the three subnets corresponding to the security zone the workload is classified into. The EKS cluster (both control plane and data plane) is no different.

ii. So when we create the EKS control plane in CDK code, we need to specify `vpcSubnets` in `eks.Cluster`/`FargateCluster` to make sure the two ENIs of the control plane are positioned within the three subnets of the proper security zone.

iii. Then in CDK code we create EKS worker node groups or Fargate profiles separately and specify the subnet selection to position them in the subnets of their respective security zones.

iv. BTW, to make the above approach work, we have already tagged all the required subnets beforehand with `kubernetes.io/cluster/<cluster-name> = shared` and `kubernetes.io/role/internal-elb = 1` (see the sketch below). As a side point, this is something we hope we didn't need to do, but that's a problem for another day.

So now let's put the kubectl handler Lambda into this picture. To keep it simple, I think we can just treat it as part of the EKS control plane, i.e. associate the Lambda with the subnets selected for the control plane. In other words, using all private subnets of the cluster subnet selection would be good enough.
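A sketch of steps ii–iv above, assuming the prop and method names below (the cluster name, zone names, and Kubernetes version are illustrative):

```ts
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as eks from '@aws-cdk/aws-eks';

declare const stack: cdk.Stack;
declare const vpc: ec2.IVpc; // the enterprise VPC with one subnet group per security zone

// ii. Pin the control-plane ENIs to the control-plane security zone.
const cluster = new eks.Cluster(stack, 'Cluster', {
  vpc,
  version: eks.KubernetesVersion.V1_17,
  vpcSubnets: [{ subnetGroupName: 'control-plane-zone' }],
});

// iv. Pre-tag the workload zone's subnets so EKS and internal load
// balancers can discover them.
for (const subnet of vpc.selectSubnets({ subnetGroupName: 'app-zone' }).subnets) {
  cdk.Tags.of(subnet).add('kubernetes.io/cluster/my-cluster', 'shared');
  cdk.Tags.of(subnet).add('kubernetes.io/role/internal-elb', '1');
}

// iii. Place a managed node group into the workload zone's subnets.
cluster.addNodegroupCapacity('app-zone-nodes', {
  subnets: { subnetGroupName: 'app-zone' },
});
```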
Hi @iliapolo, BTW I double-checked the code changes in this PR and I believe they will work in our context.
Please add the root cause to the PR description.
Thank you for contributing! Your pull request will be updated from master and then merged automatically (do not update manually, and be sure to allow changes to be pushed to your fork).
Fixes #9542
Related #5383
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license