
[EKS/Fargate] [request]: Fargate custom security groups #625

Closed
tsndqst opened this issue Dec 4, 2019 · 12 comments
Labels
EKS Amazon Elastic Kubernetes Service Fargate AWS Fargate

Comments

@tsndqst

tsndqst commented Dec 4, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request
Add option to specify custom security groups for Fargate Profiles in EKS.

Which service(s) is this request for?
EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We control application access to RDS via VPC Security Groups. Without the option to specify security groups in Fargate Profiles, we probably could not restrict RDS access to only the workloads that need it. For example, there currently does not appear to be a way to use option 3 from https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Scenarios.

Are you currently working around this issue?
Currently we are creating our own EC2 worker node groups in ASGs with specific security groups. This gives us the access control we need but means the pod density is very low on these hosts (the only apps that run on these nodes are those that are allowed to access a specific RDS instance).

There are other workarounds but they are all less than ideal:

  1. Open access to an entire EKS cluster. This results in one EKS cluster per app, which means high cost and low pod density.
  2. Open access to an entire subnet. With this method we would have to keep track of IP address allocation across many subnets, and EKS clusters have to be recreated if we add or remove subnets.
  3. CNI custom networking with ENIConfig (a sketch follows this list). EKS already has low pod density due to limitations in the current CNI, and reducing that further with custom networking makes this option very unattractive.
  4. Wait for the next-generation CNI from AWS. While the features of this CNI sound great, there is no indication it will be available any time soon.
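
For reference, "custom networking with ENIConfig" in workaround 3 means creating one ENIConfig resource per availability zone so that pod ENIs land in a dedicated subnet with their own security groups. A minimal sketch, with placeholder subnet and security group IDs, might look like this:

apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a                   # commonly named after the availability zone the nodes run in
spec:
  subnet: subnet-0123456789abcdef0   # placeholder: secondary subnet used for pod ENIs
  securityGroups:
    - sg-0123456789abcdef0           # placeholder: security group allowed to reach the RDS instance

Custom networking also requires setting AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true on the aws-node DaemonSet, and because the node's primary ENI is no longer used for pods, it lowers the pods-per-node limit further, which is the drawback called out above.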

Additional context
In addition to RDS there are other components that utilize security groups for access control such as ElastiCache and Elasticsearch Service.

Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)

@tsndqst tsndqst added the Proposed Community submitted issue label Dec 4, 2019
@tabern tabern added the EKS Amazon Elastic Kubernetes Service label Dec 4, 2019
@ezra-freedman

ezra-freedman commented Dec 17, 2019

I am in the same boat; thanks for submitting. I'm currently considering your workaround 2) (open access to an entire subnet). Can you explain how your workaround 1) works? Is there an option to define a security group rule based on the source EKS cluster?

  1. Open access to an entire EKS cluster. This results in one EKS cluster per app. If we use this method it would have high cost and low pod density.

@tsndqst
Author

tsndqst commented Dec 17, 2019

@ezra-freedman I may not have described that well. I meant that you configure all workers for that cluster to have the same security group when you create them. Doing it this way avoids having to target specific workloads (pods) to specific workers or worker groups, but you would have to deploy specific workloads to specific clusters.
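
To make that concrete, the rule is an ordinary security-group-to-security-group ingress rule: the RDS instance's security group allows traffic from the security group shared by the worker nodes. A sketch with placeholder IDs and a PostgreSQL port:

# Placeholder IDs: allow the workers' security group to reach the RDS instance on 5432
aws ec2 authorize-security-group-ingress \
  --group-id sg-11111111 \
  --protocol tcp \
  --port 5432 \
  --source-group sg-22222222

Here sg-11111111 stands in for the RDS security group and sg-22222222 for the security group attached to the cluster's worker nodes.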

@yann-soubeyrand

Related to #609.

@mikestef9 mikestef9 added Fargate AWS Fargate and removed Proposed Community submitted issue labels Apr 19, 2020
@bsmedberg-xometry

Can somebody from AWS guide me as to whether this is something I could submit a PR for (and if so, which repository), or whether this is managed by AWS-proprietary code?

@mikestef9 mikestef9 changed the title [EKS/Fargate] [request]: Fargate profile custom security groups [EKS/Fargate] [request]: Fargate custom security groups Sep 10, 2020
@tomwidmer

Any chance this will land in Q1 or Q2 2021?

@bsmedberg-xometry Presumably this is Amazon proprietary code for Fargate. Given that it's already supported by both ECS Fargate and now EKS EC2 instances (since https://aws.amazon.com/blogs/containers/introducing-security-groups-for-pods/ ), it seems like most of the code to do it is already written.

@daisuke-yoshimoto

@mikestef9
Any update on this?

@mikestef9
Contributor

mikestef9 commented Jun 1, 2021

Hey all,

You can now assign custom security groups to pods running on AWS Fargate. This is available on v1.18 and above clusters, and you need to be running the latest EKS platform version for the corresponding Kubernetes minor version.

One important note to keep in mind - Previously, every Fargate pod got assigned the EKS cluster security group, which ensured the Fargate pod could communicate with the Kubernetes control plane and join the cluster. With custom security groups, you are responsible for ensuring the correct security group rules are opened to enable this communication. The easiest way to accomplish this is to simply specify the cluster security group ID as one of the custom security groups to assign to Fargate pods.
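
For anyone wiring this up, the custom security groups are selected with the same SecurityGroupPolicy resource used by security groups for pods on EC2 nodes. A minimal sketch with placeholder names and IDs, including the cluster security group as recommended above:

apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: rds-access          # placeholder name
  namespace: my-app         # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: my-app           # placeholder label; selects the pods that should get these groups
  securityGroups:
    groupIds:
      - sg-11111111         # placeholder: custom security group allowed to reach RDS
      - sg-22222222         # placeholder: the EKS cluster security group, so the pod can still reach the control plane

Note that the policy only applies to pods scheduled after it exists; already-running pods need to be recreated to pick it up.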

@quickbooks2018

I upgraded my cluster to v1.20 with the eks.2 platform version, and yet my Fargate pods have been stuck in a Pending state for 30 minutes.

I whitelisted the DNS ports and added the cluster security group to the groupIds of the SecurityGroupPolicy that I'm attaching to the pod.

There are proper security group annotations on the pod too.

Name:               xxxx
Namespace:            xxxxx
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 <none>
Labels:               app=xxxx
                      eks.amazonaws.com/fargate-profile=xxx
                      role=xxx
                      rollouts-pod-template-hash=85846d84b8
Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
                      Logging: LoggingDisabled: LOGGING_CONFIGMAP_NOT_FOUND
                      fargate.amazonaws.com/pod-sg: sg-xxxx,sg-xxxx,sg-xxxx
                      kubernetes.io/psp: eks.privileged
Status:               Pending

Something I may be missing?

It is very difficult to debug this issue, there are no errors anywhere.


@quickbooks2018

Any Solution ???

@quickbooks2018

EKS version 1.19

@foxylion

foxylion commented Oct 3, 2021

@Hunter-Thompson @quickbooks2018 You might be better off opening a support ticket from the AWS console. They will help you quickly.

@Tzrlk

Tzrlk commented Mar 18, 2022

@mikestef9, it looks like this ticket was closed prematurely, as the solution you offered doesn't actually solve the problem as specified. At the moment, Fargate profiles are attached to the security group created during cluster creation, which has open rules both inbound and outbound.

What the others and I are after is the ability to specify which security groups the Fargate profiles themselves are attached to, as part of the profile specification, rather than per pod.
