[EKS] [request]: EKS managed node group support for ASG target group #709
Comments
@chingyi-lin can you help clarify your use case for this configuration vs. creating a Kubernetes service?
@tabern Unless I'm mistaken, one cannot use a single NLB for several K8s services of type LoadBalancer. For example, we want to be able to point ports 80 and 443 to our ingress controller service, but we also want port 22 to go to the SSH service of our GitLab. We also want to be able to share our NLB between classic EC2 instances and an EKS cluster, to enable a zero-downtime migration of a stateless application running on EC2 instances to the same application running on an EKS cluster. And the last use case we have is sharing an NLB between two EKS clusters (blue and green) to be able to seamlessly switch from one to the other (when we have big changes to make to our cluster, we prefer spawning a new cluster and switching to it after testing that it works as intended).
I have a workaround in Terraform (a bit tricky, but it works):
@dawidmalina your workaround works for adding the autoscaling instances to the load balancer target group; however, the ALB can't reach the node group.
Another workaround I plan to test is to add postStart and preStop lifecycle hooks on the pod (https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) with a little command that registers/deregisters the node from the target group using the AWS CLI. You can easily get the instance ID from within the container (wget -q -O - http://169.254.169.254/latest/meta-data/instance-id) and use it with aws elbv2 register-targets.
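A rough sketch of that idea, with a hypothetical image and target group ARN (the node's instance role would also need elasticloadbalancing:RegisterTargets/DeregisterTargets permissions, and wget plus the AWS CLI must be present in the image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gateway
spec:
  containers:
    - name: gateway
      image: example/gateway:latest   # hypothetical image containing wget and the AWS CLI
      env:
        - name: TARGET_GROUP_ARN      # hypothetical: injected from configuration
          value: arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example/abc123
      lifecycle:
        postStart:
          exec:
            command:
              - /bin/sh
              - -c
              - |
                ID=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
                aws elbv2 register-targets --target-group-arn "$TARGET_GROUP_ARN" --targets Id=$ID
        preStop:
          exec:
            command:
              - /bin/sh
              - -c
              - |
                ID=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
                aws elbv2 deregister-targets --target-group-arn "$TARGET_GROUP_ARN" --targets Id=$ID
```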
Hey all, please take a look at the TargetGroupBinding CRD included in the v2 release candidate of the ALB ingress controller: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/releases/tag/v2.0.0-rc0. We believe this will address the feature request described in this issue, and we are looking for feedback.
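For reference, a TargetGroupBinding is a small object that binds an existing Service to an existing target group. A minimal sketch with hypothetical names, using the controller's v1beta1 API:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: ingress-gateway
  namespace: default
spec:
  serviceRef:
    name: ingress-gateway        # hypothetical existing Service
    port: 80                     # Service port to register
  targetGroupARN: arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example/abc123
  targetType: instance           # or "ip"
```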
Hi @mikestef9, thanks for the update. Unfortunately, this does not address our use cases outlined in this comment: #709 (comment).
We also need this to support services of
@yann-soubeyrand are you trying to use multiple ASGs in a single target group? Otherwise, TargetGroupBinding should solve it.
@M00nF1sh isn't TargetGroupBinding meant for use with the ALB ingress controller only? We use an NLB with the Istio ingress gateway. And yes, we need to put two ASGs in a single target group for certain operations requiring zero downtime.
@yann-soubeyrand
@M00nF1sh sorry for the late reply. We need to be able to put two ASGs from different clusters in a single target group. This is how we do certain migrations that require rebuilding a whole cluster.
A null_resource is working for me; I have validated that
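A sketch of what that null_resource approach can look like (variable names are assumptions; the AWS CLI must be available wherever Terraform runs):

```hcl
variable "node_group_asg_name" {
  description = "Name of the ASG created by the EKS managed node group (assumed to be looked up elsewhere)"
  type        = string
}

variable "target_group_arn" {
  description = "ARN of the existing target group to attach"
  type        = string
}

resource "null_resource" "asg_target_group_attachment" {
  triggers = {
    asg_name         = var.node_group_asg_name
    target_group_arn = var.target_group_arn
  }

  # Attach the node group's ASG to the target group at creation time.
  provisioner "local-exec" {
    command = "aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name ${self.triggers.asg_name} --target-group-arns ${self.triggers.target_group_arn}"
  }

  # Best-effort detach on destroy; if this step is skipped or fails, the
  # "orphan" resources mentioned later in the thread are left behind.
  provisioner "local-exec" {
    when    = destroy
    command = "aws autoscaling detach-load-balancer-target-groups --auto-scaling-group-name ${self.triggers.asg_name} --target-group-arns ${self.triggers.target_group_arn}"
  }
}
```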
Thank you so much for this workaround. Totally made my week by helping me solve a very annoying problem we've been having for so long!
Thank you for this workaround, but it seems to be leaving behind ENIs and SGs that are preventing VPC destruction, since it creates resources outside of Terraform's knowledge. Is there any way to achieve this with an NLB without using a null provisioner? Or some way to have an on_delete provisioner that does the cleanup?
Wouldn't https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/nlb/ solve your challenges?
I used @dawidmalina's answer, and also opened up the NodePort to the ALB's SG using
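A sketch of the security-group side of this, with assumed IDs and the default NodePort range:

```hcl
# Hypothetical names: allow the ALB's security group to reach the NodePort
# range on the worker-node security group.
resource "aws_security_group_rule" "alb_to_nodeport" {
  type                     = "ingress"
  from_port                = 30000
  to_port                  = 32767
  protocol                 = "tcp"
  security_group_id        = var.node_security_group_id # assumed: node security group
  source_security_group_id = var.alb_security_group_id  # assumed: ALB security group
  description              = "Allow the ALB to reach NodePort services on the nodes"
}
```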
Attaching Load Balancers to Auto Scaling Group instances, as opposed to instance IP addresses and ports, was a design pattern that made a lot of sense back when the instances in ASGs were configured exactly alike -- typically there was an application stack running on each instance that had identical software, listened on the same ports, served the same traffic, etc.

But with containers, that pattern generally no longer holds true: each instance could be (and usually is) running completely different applications, listening on different ports or even different interfaces. In the latter design, instances are now heterogeneous. The Auto Scaling Group no longer implies homogeneity; it's now merely a scalable capacity provider for memory, CPUs, GPUs, network interfaces, etc. As a consequence, we no longer think of an instance as a backend (i.e., a load balancer target); today, we consider an IP:port tuple to be a backend instead.

I've heard a few justifications for hanging on to the historical functionality, despite the evolution. So I'm curious: for those of you dealing with this issue, is there a particular reason you're not using DNS to handle migrations of applications between clusters (each with their own ingress LBs) for north-south traffic, and/or using some sort of service mesh (App Mesh, Istio, Linkerd, etc.) to handle migrations for east-west traffic? These are what we prescribe as best practices today.
@otterley Yea, because we are migrating an app off bare metal and onto k8s. We have all those fancy things on the roadmap (service mesh, ingress controllers, DNS, etc.), but we're in the middle of moving a decades-old application and trying the best we can to make it cloud-native, and there's a lot of uncoupling to do. In the meantime we need to leverage the "old ways" to allow us to transition. It's rare to be able to start with a fresh new project and do everything right from the beginning. We rely on ASGs to allow us to continue using k8s with our old vm-in-a-container images.
@ddvdozuki Thanks for the insight. Since you're still in transition, might I recommend you use unmanaged node groups instead? That will allow you to retain the functionality you need during your migration. Then, after you have migrated to the next generation of load balancers using the Load Balancer Controller's built-in Ingress support (and cut over DNS), you can attach a new Managed Node Group to your cluster, migrate your pods, and the load balancer will continue to send them traffic. The controller will ensure that the target IP and port follow the pod as it moves. Once all your pods have migrated to Managed Node Groups, you can tear down the unmanaged node groups.
We have a single DNS entry point (i.e. api.example.com) that points to a single ALB, with a Target Group that points to our Traefik entrypoint. Traefik runs as a DaemonSet on each Node and is then used to route requests to the appropriate service/pod. There may well be a better approach to this, which I'd be curious to hear, but this is working well for us.
@mwalsher It sounds like you might have a redundant layer there. The k8s Service can do most of what Traefik can do as far as routing and pod selection. We use the same setup you have, but without any additional layer in between: just an LB pointing at the node port for the service, and the service has selectors for the proper pods.
@ddvdozuki interesting, thanks for the info. Can we route e.g. api.example.com/contacts to our Contacts microservice and api.example.com/accounts to the Accounts microservice using the k8s Service routing? I took a quick look at the k8s Service docs and don't see anything on path-based routing, but it is probable that my ☕ hasn't kicked in yet. We are also using some Traefik middleware (StripPrefix and ForwardAuth). I suppose we could use the ALB for routing to the appropriate TG/Service port. Perhaps that's what you meant? But we'd still need the aforementioned middleware...
Yes, you need middleware, but the general practice is to use an ingress controller exposed through a LoadBalancer service. Running such middleware as a DaemonSet is just impractical when you have more nodes, because you waste resources.
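For example, path-based routing of the kind asked about above is normally expressed as an Ingress that the controller then serves; a minimal sketch with hypothetical Service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: nginx            # assumes an Nginx ingress controller
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /contacts
            pathType: Prefix
            backend:
              service:
                name: contacts       # hypothetical Contacts Service
                port:
                  number: 80
          - path: /accounts
            pathType: Prefix
            backend:
              service:
                name: accounts       # hypothetical Accounts Service
                port:
                  number: 80
```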
There's also our use case, which is more akin to @mwalsher's. We create and destroy namespaces nearly constantly: every CI branch that people make creates a new namespace with a full (scaled-down) copy of our software stack. That lets our engineers connect their IDE to that running stack and develop against it in isolation from each other. So we have an Nginx ingress controller that can handle that kind of churn, meaning we create and destroy up to dozens of namespaces per day, each one with a unique URL and certificate. This is all behind an NLB currently so Cert Manager can provision certs for these namespaces on the fly. Provisioning a load balancer per namespace in that use case is really expensive, both monetarily and in the delay in wiring up our system. Not to mention it makes the domains pretty hard to deal with.
Another use case for this is having a VoIP application on the nodes which handles 20k UDP ports. You can't solve that with "Service: LoadBalancer" at the moment. The only option is to use
Sadly, this workaround only works if you first create the
When I add a new node group and attach the target group using this method, I get
And using the AWS CLI with null_resource is rather messy and leaves "orphan" resources. Is the
We also want to disable the
This is the full hack we were considering, but I think we are going to backtrack to ASGs.
Did you ever get that working, @jodem?
Hello, I ended up using "aws_autoscaling_attachment" in Terraform:
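A minimal sketch of that approach, with assumed cluster, node group, and target group names:

```hcl
# Look up the ASG that the EKS managed node group created,
# then attach it to an existing target group.
data "aws_eks_node_group" "this" {
  cluster_name    = "my-cluster"     # assumed
  node_group_name = "my-node-group"  # assumed
}

resource "aws_autoscaling_attachment" "this" {
  autoscaling_group_name = data.aws_eks_node_group.this.resources[0].autoscaling_groups[0].name
  lb_target_group_arn    = aws_lb_target_group.app.arn # assumed target group; alb_target_group_arn on older AWS providers
}
```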
Tell us about your request
The ability to attach a load balancer to the ASG created by an EKS managed node group at cluster creation time with CloudFormation.
Which service(s) is this request for?
EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We used to create an unmanaged node group with an ASG and a classic load balancer in the same CloudFormation stack, and we used !Ref to attach the load balancer to the ASG via TargetGroupARNs. However, that configuration is not available for EKS managed node groups at cluster creation today.
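For context, a rough sketch of that older self-managed pattern (resource names here are assumptions), where the ASG references the target group directly; this is the attachment that the managed node group's ASG does not currently expose:

```yaml
# Fragment of a CloudFormation Resources section (names assumed)
NodeAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "2"
    MaxSize: "6"
    VPCZoneIdentifier: !Ref NodeSubnets
    LaunchTemplate:
      LaunchTemplateId: !Ref NodeLaunchTemplate
      Version: !GetAtt NodeLaunchTemplate.LatestVersionNumber
    TargetGroupARNs:
      - !Ref AppTargetGroup   # attaches every node in the ASG to the target group
```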
Are you currently working around this issue?
We need to separate the creation of the cluster and the load balancer into two stacks even though they have the same lifecycle. Besides, we are not sure whether this modification to the ASG is allowed and supported, since the ASG is managed by EKS.