
[aws-eks] AWS load balancer controller support #8836

Closed
1 of 2 tasks
zxkane opened this issue Jul 1, 2020 · 22 comments · Fixed by #17618
Labels
@aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service effort/medium Medium work item – several days of effort feature-request A feature should be added or improved. p1

Comments

@zxkane
Contributor

zxkane commented Jul 1, 2020

It's a common use case to deploy the ALB ingress controller on EKS; it would be helpful to support it at the L2 construct level.

Use Case

Deploy the ALB ingress controller so that Kubernetes Ingress resources can be exposed through ALBs, as sketched below.
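For example, a minimal illustrative sketch (names are placeholders): the v1 controller reconciles Ingress objects annotated with kubernetes.io/ingress.class: alb into ALBs.

// Illustrative sketch: an Ingress the controller would back with an ALB.
cluster.addResource('app-ingress', {
    apiVersion: 'networking.k8s.io/v1beta1',
    kind: 'Ingress',
    metadata: {
        name: 'app-ingress',
        annotations: {
            'kubernetes.io/ingress.class': 'alb',
            'alb.ingress.kubernetes.io/scheme': 'internet-facing',
        },
    },
    spec: {
        rules: [{
            http: {
                paths: [{
                    path: '/*',
                    // 'app-service' is a placeholder Service name
                    backend: { serviceName: 'app-service', servicePort: 80 },
                }],
            },
        }],
    },
});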

Proposed Solution

We might implement a new L2 class, ALBIngressController, like the following:

import * as yaml from 'js-yaml';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';
import { Construct, Stack } from '@aws-cdk/core';
// sync-request's default export is the callable itself
const request = require('sync-request');

export interface ALBIngressControllerProps {
    readonly cluster: eks.Cluster;
    readonly version: string;
    readonly vpcId: string;
}

class ALBIngressController extends Construct {
    constructor(scope: Construct, id: string, props: ALBIngressControllerProps) {
        super(scope, id);

        const albBaseResourceBaseUrl = `https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/${props.version}/docs/examples/`;
        const albIngressControllerPolicyUrl = `${albBaseResourceBaseUrl}iam-policy.json`;
        const albNamespace = 'kube-system';
        const albServiceAccount = props.cluster.addServiceAccount('alb-ingress-controller', {
            name: 'alb-ingress-controller',
            namespace: albNamespace,
        });

        // Attach every statement of the upstream IAM policy document to the
        // service account's IRSA role.
        const policyJson = request('GET', albIngressControllerPolicyUrl).getBody('utf8');
        (JSON.parse(policyJson).Statement as any[]).forEach((statement) => {
            albServiceAccount.addToPolicy(iam.PolicyStatement.fromJson(statement));
        });

        // Apply the upstream RBAC roles (minus the ServiceAccount, which we
        // created above) and the controller deployment itself.
        const rbacRoles = yaml.safeLoadAll(request('GET', `${albBaseResourceBaseUrl}rbac-role.yaml`).getBody('utf8'))
            .filter((rbac: any) => rbac.kind !== 'ServiceAccount');
        const albDeployment = yaml.safeLoad(request('GET', `${albBaseResourceBaseUrl}alb-ingress-controller.yaml`).getBody('utf8'));

        const albResources = props.cluster.addResource('aws-alb-ingress-controller', ...rbacRoles, albDeployment);

        // Patch the deployment with cluster-specific arguments.
        const albResourcePatch = new eks.KubernetesPatch(this, `alb-ingress-controller-patch-${props.version}`, {
            cluster: props.cluster,
            resourceName: 'deployment/alb-ingress-controller',
            resourceNamespace: albNamespace,
            applyPatch: {
                spec: {
                    template: {
                        spec: {
                            containers: [
                                {
                                    name: 'alb-ingress-controller',
                                    args: [
                                        '--ingress-class=alb',
                                        '--feature-gates=wafv2=false',
                                        `--cluster-name=${props.cluster.clusterName}`,
                                        `--aws-vpc-id=${props.vpcId}`,
                                        `--aws-region=${Stack.of(this).region}`,
                                    ],
                                },
                            ],
                        },
                    },
                },
            },
            restorePatch: {
                spec: {
                    template: {
                        spec: {
                            containers: [
                                {
                                    name: 'alb-ingress-controller',
                                    args: [
                                        '--ingress-class=alb',
                                        '--feature-gates=wafv2=false',
                                        `--cluster-name=${props.cluster.clusterName}`,
                                    ],
                                },
                            ],
                        },
                    },
                },
            },
        });
        albResourcePatch.node.addDependency(albResources);
    }
}
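
A hypothetical usage sketch (names are illustrative; the version string would be whichever aws-alb-ingress-controller release tag you target):

// Hypothetical usage inside a stack:
new ALBIngressController(this, 'alb-ingress-controller', {
    cluster,             // an existing eks.Cluster
    version: 'v1.1.8',   // an aws-alb-ingress-controller release tag (assumption)
    vpcId: vpc.vpcId,
});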

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

@zxkane zxkane added feature-request A feature should be added or improved. needs-triage This issue or PR still needs to be triaged. labels Jul 1, 2020
@github-actions github-actions bot added the @aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service label Jul 1, 2020
@eladb eladb changed the title [eks] Support deploying ALB ingress controller via L2 class [eks] ALB ingress controller support Jul 8, 2020
@eladb eladb added the effort/small Small work item – less than a day of effort label Jul 8, 2020
@eladb eladb modified the milestone: EKS Dev Preview Jul 22, 2020
@pgollucci

pgollucci commented Jul 24, 2020

Definitely need this one. AFAIK, without this you have to use eks.Cluster and not eks.FargateCluster, because the Nginx ingress controller creates an ELB that doesn't work with Fargate pods.

Did I miss something?

@iliapolo iliapolo added p2 and removed needs-triage This issue or PR still needs to be triaged. labels Aug 16, 2020
@iliapolo iliapolo changed the title [eks] ALB ingress controller support [aws-eks] ALB ingress controller support Aug 16, 2020
@chillitom
Contributor

👍 This would be a fantastic construct to include; I have just spent a painful day rolling my own

@iliapolo
Contributor

@zxkane This is great - would you be interested in creating a PR with your proposed solution?

@zxkane
Contributor Author

zxkane commented Nov 27, 2020

@zxkane This is great - would you be interested in creating a PR with your proposed solution?

Will do it.

@zxkane zxkane changed the title [aws-eks] ALB ingress controller support [aws-eks] AWS load balancer controller support Nov 27, 2020
@chillitom
Contributor

chillitom commented Nov 27, 2020

Here is a version I updated for the latest chart.

import * as cdk from '@aws-cdk/core'
import * as eks from '@aws-cdk/aws-eks'
import * as iam from '@aws-cdk/aws-iam'
import * as yaml from 'js-yaml'
// sync-request's default export is the callable itself
const request = require('sync-request')

export interface IAlbIngressControllerProps {
    readonly cluster: eks.ICluster
    readonly vpcId: string
    readonly region: string
}

export class AlbIngressController extends cdk.Construct {
    constructor(scope: cdk.Construct, id: string, props: IAlbIngressControllerProps) {
        super(scope, id)

        // https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/v2_0_1_full.yaml
        const albBaseResourceBaseUrl = `https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/`

        const albIngressControllerPolicyUrl = `${albBaseResourceBaseUrl}iam_policy.json`
        const albNamespace = 'kube-system'
        const albServiceAccount = props.cluster.addServiceAccount('alb-ingress-controller', {
            name: 'alb-ingress-controller',
            namespace: albNamespace,
        })

        const policyJson = request('GET', albIngressControllerPolicyUrl).getBody('utf8');
        ((JSON.parse(policyJson))['Statement'] as any[]).forEach(statement => {
            albServiceAccount.addToPolicy(iam.PolicyStatement.fromJson(statement))
        })

        const albDeployment = yaml.safeLoadAll(request('GET', `${albBaseResourceBaseUrl}v2_0_1_full.yaml`).getBody('utf8'))

        const albResources = props.cluster.addManifest('aws-alb-ingress-controller', ...albDeployment)

        const albResourcePatch = new eks.KubernetesPatch(this, `alb-ingress-controller-patch`, {
            cluster: props.cluster,
            resourceName: 'deployment/alb-ingress-controller',
            resourceNamespace: albNamespace,
            applyPatch: {
                spec: {
                    template: {
                        spec: {
                            containers: [
                                {
                                    name: 'alb-ingress-controller',
                                    args: [
                                        '--ingress-class=alb',
                                        '--feature-gates=wafv2=false',
                                        `--cluster-name=${props.cluster.clusterName}`,
                                        `--aws-vpc-id=${props.vpcId}`,
                                        `--aws-region=${props.region}`,
                                    ]
                                }
                            ]
                        }
                    }
                }
            },
            restorePatch: {
                spec: {
                    template: {
                        spec: {
                            containers: [
                                {
                                    name: 'alb-ingress-controller',
                                    args: [
                                        '--ingress-class=alb',
                                        '--feature-gates=wafv2=false',
                                        `--cluster-name=${props.cluster.clusterName}`,
                                    ]
                                }
                            ]
                        }
                    }
                }
            },
        })
        albResourcePatch.node.addDependency(albResources)
    }
}

Note that it has a dependency on cert-manager, so add a dependency on it:

        const certManagerChart = new eks.HelmChart(this,
            'cert-manager',
            {
                cluster,
                createNamespace: true,
                namespace: 'cert-manager',
                repository: 'https://charts.jetstack.io',
                chart: 'cert-manager',
                release: 'cert-manager',
                values: {
                    // https://github.com/jetstack/cert-manager/blob/master/deploy/charts/cert-manager/values.yaml
                    installCRDs: true,
                },
                version: 'v1.1.0'
            })

        albController.node.addDependency(certManagerChart)

@iliapolo iliapolo added effort/medium Medium work item – several days of effort and removed effort/small Small work item – less than a day of effort labels Nov 27, 2020
@chillitom
Contributor

This might make a good starting point: https://www.npmjs.com/package/cdk8s-aws-load-balancer-controller

@BriceGestas

BriceGestas commented Jan 27, 2021

We had the same issue, and unfortunately the HelmChart approach for cert-manager did not work for us.

Here is a version we made that works fine using the manifests from the official documentation, though it can of course be improved.
We encountered some CloudFormation limitations (for example, the custom resource event payload cannot exceed 262144 bytes) and added some workarounds so it works correctly.

Versions:

  • CDK: 1.86.0
  • AWS Load Balancer Ingress Controller: 2.1.0
  • Cert Manager: 1.1.0

Alb Ingress Controller deployment

private deployAlbIngressController() {
        const certManagerResult = this.awsCertManagerService.deployCertManager(this.cluster);
        const albIngressControllerProps = {
            cluster: this.cluster,
            region: this.conf.provisionConf.awsRegion,
            vpcId: this.conf.provisionConf.awsVpcIntegration?.vpcId,
            platformName: this.conf.platformName,
            deploymentDir: this.deploymentDir,
            waitCondition: certManagerResult.waitCondition
        };
        new AwsAlbIngressController(this.cluster, CDKNamingUtil.k8sALBIngressController(this.conf.platformName), albIngressControllerProps);
    }

AwsCertManagerService.ts

import {log} from "../log";
import * as styles from "../styles";
import * as fs from "fs";
import * as path from "path";
import * as jsYaml from "js-yaml";
import * as eks from "@aws-cdk/aws-eks";
import CDKNamingUtil from "../util/CDKNamingUtil";
import {Configuration} from "../configuration";
import {AwsPlatform} from "../model/Configuration";
import * as cdk from "@aws-cdk/core";

export interface DeployCertManagerResult {
    cdkManifests: eks.KubernetesManifest[];
    waitCondition: cdk.CfnWaitCondition;
}

interface K8sManifestJson {
    kind: string;
    metadata: {
        name: string;
    };
}

interface ManifestGroup {
    manifests: K8sManifestJson[];
    size: number;
}

export default class AwsCertManagerService {

    deploymentDir: string;
    conf: Configuration<AwsPlatform>;

    constructor(conf: Configuration<AwsPlatform>, deploymentDir: string) {
        this.conf = conf;
        this.deploymentDir = deploymentDir;
    }
    /*
    * Returns the parent Construct so we can depend on it when deploying ALB Ingress Controller
     */
    deployCertManager(cluster: eks.Cluster): DeployCertManagerResult {
        log(styles.title(`*** Deploying Kubernetes cert-manager ***`))
        // https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/deploy/installation/
        const certManagerManifest = fs.readFileSync(path.join(this.deploymentDir, 'resources', 'alb-ingress-controller', 'cert-manager-v1.1.0.yaml'), {encoding: 'utf8'});
        const manifests: K8sManifestJson[] = jsYaml.loadAll(certManagerManifest);

        const groups: ManifestGroup[] = this.splitManifestsInGroups(manifests);

        groups.forEach((group, groupIndex) => {
            console.log(`cert-manager manifests group ${groupIndex}: ${group.manifests.length} manifests, size: ${group.size} bytes `)
        });

        const cdkManifests = groups.map((group, groupIndex) => {
            return new eks.KubernetesManifest(cluster, `${CDKNamingUtil.k8sCertManager(this.conf.platformName)}-part-${groupIndex}`, {
                cluster: cluster,
                manifest: group.manifests,
                overwrite: true
            });
        });

        // Define a wait condition and handle for cert manager to be fully deployed
        const waitConditionHandle = new cdk.CfnWaitConditionHandle(cluster, CDKNamingUtil.k8sCertManagerWaitConditionHandle(this.conf.platformName));
        const waitCondition = new cdk.CfnWaitCondition(cluster, CDKNamingUtil.k8sCertManagerWaitCondition(this.conf.platformName), {
            count: 1,
            handle: waitConditionHandle.ref,
            timeout: '600',
        });
        for (let certManagerManifest of cdkManifests) {
            waitConditionHandle.node.addDependency(certManagerManifest);
        }

        const certManagerWaitConditionSignal = cluster.addManifest(CDKNamingUtil.k8sCertManagerWaitConditionSignal(this.conf.platformName), {
            kind: "Pod",
            apiVersion: "v1",
            metadata: {
                name: CDKNamingUtil.k8sCertManagerWaitConditionSignal(this.conf.platformName),
                namespace: "default"
            },
            spec: {
                initContainers:
                    [{
                        name: "wait-cert-manager-service",
                        image: "busybox:1.28",
                        command: ['sh', '-c', 'echo begin sleep && sleep 60 && echo end sleep']
                    }],
                containers:
                    [{
                        name: "cert-manager-waitcondition-signal",
                        image: "curlimages/curl:7.74.0",
                        args: [
                            '-vvv',
                            '-X',
                            'PUT',
                            '-H', 'Content-Type:',
                            '--data-binary', '{"Status" : "SUCCESS","Reason" : "Configuration Complete", "UniqueId" : "ID1234", "Data" : "Cert manager should be ready by now."}',
                            waitConditionHandle.ref
                        ]
                    }],
                restartPolicy: "Never"
            }
        })
        certManagerWaitConditionSignal.node.addDependency(waitConditionHandle)

        return {
            cdkManifests,
            waitCondition
        };
    }

    private splitManifestsInGroups(manifests: K8sManifestJson[]): ManifestGroup[] {
        // Max payload size for CloudFormation event is 262144 bytes
        // (we got that information from an error message, not from the doc)
        const maxGroupSize = Math.floor(262144 * .8)
        const groups: ManifestGroup[] = []

        // Splitting all manifest in groups so total size of group is less than 262144 bytes
        manifests.forEach(manifest => {
            const manifestSize = JSON.stringify(manifest).length;
            console.log(`cert-manager manifest '${manifest.kind}/${manifest?.metadata?.name}' size is ${manifestSize} characters`);
            const lastGroup = (groups.length && groups[groups.length - 1]) || null;
            if (lastGroup === null || (lastGroup.size + manifestSize) > maxGroupSize) {
                groups.push({
                    manifests: [manifest],
                    size: manifestSize
                });
            } else {
                lastGroup.manifests.push(manifest);
                lastGroup.size += manifestSize;
            }
        });

        return groups;
    }
}

AwsAlbIngressController.ts

import * as cdk from "@aws-cdk/core";
import * as eks from "@aws-cdk/aws-eks";
import * as iam from "@aws-cdk/aws-iam";
import * as jsYaml from "js-yaml";
import * as fs from "fs";
import * as path from "path";
import CDKNamingUtil from "../util/CDKNamingUtil";

export interface IAlbIngressControllerProps {
    readonly cluster: eks.Cluster;
    readonly vpcId?: string;
    readonly region: string;
    readonly deploymentDir: string;
    readonly platformName: string;
    readonly waitCondition: cdk.CfnWaitCondition;
}

const AWS_LOAD_BALANCER_CONTROLLER = 'aws-load-balancer-controller';

export class AwsAlbIngressController extends cdk.Construct {

    constructor(scope: cdk.Construct, id: string, props: IAlbIngressControllerProps) {

        super(scope, id);

        // If stack is deployed again, make sure this service is well deleted (see inside Lens tool)
        const albNamespace = 'kube-system';
        const albServiceAccount = props.cluster.addServiceAccount(AWS_LOAD_BALANCER_CONTROLLER, {
            name: AWS_LOAD_BALANCER_CONTROLLER,
            namespace: albNamespace
        });

        const policy: { Statement: any[] } = JSON.parse(fs.readFileSync(path.join(props.deploymentDir, 'resources', 'alb-ingress-controller', 'iam-policy.json'), {encoding: 'utf8'}));
        policy.Statement.forEach(statement => albServiceAccount.addToPrincipalPolicy(iam.PolicyStatement.fromJson(statement)))

        let albManifest = fs.readFileSync(path.join(props.deploymentDir, 'resources', 'alb-ingress-controller', 'alb-ingress-controller-v2.1.0.yaml'), {encoding: 'utf8'});
        albManifest = albManifest.replace(/your-cluster-name/g, CDKNamingUtil.kubernetesClusterName(props.platformName));
        const ingressControllerManifest = new eks.KubernetesManifest(this, CDKNamingUtil.k8sALBIngressController(props.platformName), {
            cluster: props.cluster,
            manifest: jsYaml.loadAll(albManifest),
            overwrite: true
        });

        ingressControllerManifest.node.addDependency(props.waitCondition);
    }
}

We had to update the manifests because the formatting of the original description fields is not readable as-is by CloudFormation

@vsetka

vsetka commented Jan 27, 2021

@BriceGestas how did you create the OIDC provider in CDK?

@BriceGestas

Hello @vsetka,

Actually, I did not create the OIDC provider myself. This is done via the cluster's addServiceAccount method. If you look at the CDK source code:

public addServiceAccount(id: string, options: ServiceAccountOptions = {}): ServiceAccount {
    return new ServiceAccount(this, id, {
      ...options,
      cluster: this,
    });
  }

And the constructor of ServiceAccount is

constructor(scope: Construct, id: string, props: ServiceAccountProps) {
    super(scope, id);

    const { cluster } = props;
    this.serviceAccountName = props.name ?? Names.uniqueId(this).toLowerCase();
    this.serviceAccountNamespace = props.namespace ?? 'default';

    /* Add conditions to the role to improve security. This prevents other pods in the same namespace to assume the role.
    * See documentation: https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html
    */
    const conditions = new CfnJson(this, 'ConditionJson', {
      value: {
        [`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:aud`]: 'sts.amazonaws.com',
        [`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:sub`]: `system:serviceaccount:${this.serviceAccountNamespace}:${this.serviceAccountName}`,
      },
    });
    const principal = new OpenIdConnectPrincipal(cluster.openIdConnectProvider).withConditions({
      StringEquals: conditions,
    });
    this.role = new Role(this, 'Role', { assumedBy: principal });

    this.assumeRoleAction = this.role.assumeRoleAction;
    this.grantPrincipal = this.role.grantPrincipal;
    this.policyFragment = this.role.policyFragment;
}
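
So the first call that touches cluster.openIdConnectProvider (addServiceAccount does, through the principal above) provisions the OIDC provider automatically. A minimal sketch, assuming an existing eks.Cluster named cluster:

// Minimal sketch: addServiceAccount lazily provisions the cluster's OIDC
// provider and creates an IAM role scoped to this service account (IRSA).
const sa = cluster.addServiceAccount('controller-sa', {
    name: 'aws-load-balancer-controller',
    namespace: 'kube-system',
});
// Permissions are then granted on the role behind the service account:
sa.addToPrincipalPolicy(new iam.PolicyStatement({
    actions: ['ec2:DescribeVpcs'],   // example action only
    resources: ['*'],
}));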

@lkr2des

lkr2des commented May 6, 2021

@BriceGestas I am trying to get cert-manager installed via the opencdk8s LB chart and am probably running into the same limitation, where my stack hangs. I came across this issue. What you've done is something I would like to do, but our code is in Python and I am not familiar with TS. Do you have any example of how the same implementation could be achieved in Python? Thanks

@zacyang

zacyang commented May 18, 2021

@BriceGestas what you have is a decent solution (I haven't tried it), but I did have an issue where the cert-manager.yaml is 1.8 MB and hangs my stack; the likely cause was the file size.
My question is: given you already have the wait condition, do you still need the certManagerWaitConditionSignal?

@BriceGestas

@BriceGestas I am trying to get cert-manager installed via the opencdk8s LB chart and am probably running into the same limitation, where my stack hangs. I came across this issue. What you've done is something I would like to do, but our code is in Python and I am not familiar with TS. Do you have any example of how the same implementation could be achieved in Python? Thanks

Sorry, I am not a Python code writer so I won't be able to help you unfortunately...

@BriceGestas

Hello @zacyang
From what I understood about the wait condition: yes, you need the wait condition signal, which is what triggers the wait condition itself.
So we wait for 60 seconds before triggering the wait condition.

@zacyang

zacyang commented May 18, 2021

For any unfortunate soul who hits the same issue, what @BriceGestas referred to is documented here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-waitcondition.html

Also tried it this morning: it can install cert-manager; however, for some reason the installed cert-manager is not working correctly. Still digging into why. It could be that the sequence is messed up.

@NGL321 NGL321 added p1 and removed p2 labels May 21, 2021
@briancaffey

I posted a SO question related to this: https://stackoverflow.com/questions/67762104/how-to-install-aws-load-balancer-controller-in-a-cdk-project

I have also been trying to do this. I am trying to write a high-level CDK construct that can be used to deploy Django applications with EKS. I have most of the k8s manifests defined for the application, but I am struggling with the Ingress part. Looking into different options, I have decided to try installing the AWS Load Balancer Controller (https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/). Their documentation has instructions for installing the controller using the AWS CLI and the eksctl CLI tool, so I'm working on trying to translate these into CDK code. Here's what I have so far:

import * as ec2 from '@aws-cdk/aws-ec2';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';
import * as cdk from '@aws-cdk/core';
import { ApplicationVpc } from './vpc';
var request = require('sync-request');


export interface DjangoEksProps {
  readonly vpc: ec2.IVpc;
}


export class DjangoEks extends cdk.Construct {

  public vpc: ec2.IVpc;
  public cluster: eks.Cluster;

  constructor(scope: cdk.Construct, id: string, props: DjangoEksProps) {
    super(scope, id);

    this.vpc = props.vpc;


    // allow all account users to assume this role in order to admin the cluster
    const mastersRole = new iam.Role(this, 'AdminRole', {
      assumedBy: new iam.AccountRootPrincipal(),
    });

    this.cluster = new eks.Cluster(this, "MyEksCluster", {
      version: eks.KubernetesVersion.V1_19,
      vpc: this.vpc,
      mastersRole,
      defaultCapacity: 2,
    });

    // Adopted from comments in this issue: https://github.com/aws/aws-cdk/issues/8836
    const albServiceAccount = this.cluster.addServiceAccount('aws-alb-ingress-controller-sa', {
      name: 'aws-load-balancer-controller',
      namespace: 'kube-system',
    });

    const awsAlbControllerPolicyUrl = 'https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.0/docs/install/iam_policy.json';
    const policyJson = request('GET', awsAlbControllerPolicyUrl).getBody('utf8');
    ((JSON.parse(policyJson))['Statement'] as any[]).forEach(statement => {
      albServiceAccount.addToPrincipalPolicy(iam.PolicyStatement.fromJson(statement))
    })

    // This is where I am stuck
    // https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/#add-controller-to-cluster

    // I tried running this
    // kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

    this.cluster.addHelmChart('aws-load-balancer-controller-helm-chart', {
      repository: 'https://aws.github.io/eks-charts',
      chart: 'eks/aws-load-balancer-controller',
      release: 'aws-load-balancer-controller',
      version: '2.2.0',
      namespace: 'kube-system',
      values: {
        clusterName: this.cluster.clusterName,
        serviceAccount: {
          create: false,
          name: 'aws-load-balancer-controller',
        },
      },
    });
  }
}

Here are the errors I am seeing in CDK when I do cdk deploy:

Received response status [FAILED] from custom resource. Message returned: Error: b'WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /tmp/kubeconfig\nWARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /tmp/kubeconfig\nRelease "aws-load-balancer-controller" does not exist. Installing it now.\nError: chart "eks/aws-load-balancer-controller" version "2.2.0" not found in https://aws.github.io/eks-charts repository\n' Logs: /aws/lambda/DjangoEks-awscdkawseksKubectlProvi-Handler886CB40B-6yld0A8rw9hp at invokeUserFunction (/var/task/framework.js:95:19) at processTicksAndRejections (internal/process/task_queues.js:93:5) at async onEvent (/var/task/framework.js:19:27) at async Runtime.handler (/var/task/cfn-response.js:48:13) (RequestId: ec066bb2-4cc1-48f6-8a88-c6062c27ed0f)

and a related error:

Received response status [FAILED] from custom resource. Message returned: Error: b'error: no objects passed to create\n' Logs: /aws/lambda/DjangoEks-awscdkawseksKubectlProvi-Handler886CB40B-6yld0A8rw9hp at invokeUserFunction (/var/task/framework.js:95:19) at processTicksAndRejections (internal/process/task_queues.js:93:5) at async onEvent (/var/task/framework.js:19:27) at async Runtime.handler (/var/task/cfn-response.js:48:13) (RequestId: fe2c4c04-4de9-4a71-b18a-ab5bc91d180a)

The CDK EKS docs say that addHelmChart will install the provided Helm Chart with helm upgrade --install.

The AWS Load Balancer Controller installation instructions also say:

Install the TargetGroupBinding CRDs if upgrading the chart via helm upgrade.

kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

I'm not sure how I can do this part in CDK. The link to github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master gives a 404, but that command does work when I run it against my EKS cluster, and I can install those CRDs. Running the deploy command after manually installing those CRDs also fails with the same message.
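
One way to do that part in CDK itself might be to fetch the published CRD manifest and apply it with addManifest; a sketch that assumes `request` is the sync-request call from the snippet above, and that `yaml` is js-yaml (which would need to be imported):

// Sketch: apply the controller CRDs from CDK instead of `kubectl apply -k`.
// Uses the rendered CRD file published in the eks-charts repo.
const crdsUrl = 'https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml';
const crdDocs = yaml.safeLoadAll(request('GET', crdsUrl).getBody('utf8'));
this.cluster.addManifest('aws-load-balancer-controller-crds', ...crdDocs);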

I think the error in my code comes from the HelmChartOptions that I pass to addHelmChart. I have tried several options and referenced similar CDK projects that install Helm charts from the same repo, but I keep getting failures.

Is anyone else installing the AWS Load Balancer Controller with CDK like I am trying to do here? One project that I have been trying to reference is https://github.com/neilkuan/cdk8s-aws-load-balancer-controller.

There is also discussion in this GitHub issue: #8836 that might help as well, but a lot of the discussion is around cert-manager, which doesn't seem to be relevant for what I'm trying to do.

@lkr2des

lkr2des commented May 30, 2021

@briancaffey I ran into a similar issue: I was unable to get the ALB controller (without cert-manager) installed via CDK's Helm support because of the TargetGroupBinding CRDs dependency. I couldn't get the target group CRDs installed via CDK either, but installing them manually, as you stated, worked for me too. Installing the controller (without cert-manager) using the add_manifest method did work, though, since I couldn't find an equivalent of "helm upgrade" in CDK. Per the documentation, this command worked when executed manually:

helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
    --set clusterName=cluster-name \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller \
    -n kube-system

@micheal-hill

micheal-hill commented May 30, 2021

I have a working CDK deployment of the ALB controller with Fargate:

// --- based on https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
import rawControllerPolicy from './alb-controller.policy__v2.1.0.json';
const albCustomResourceDef = await import.meta.resolve('../vendor/aws-alb-crds/crds.yaml')
            .then(loadYamlFromURL);

const ALB_SERVICE_ACCOUNT = 'aws-load-balancer-controller';

// -- step 4
const serviceAccountSubject = new cdk.CfnJson(this, 'account-subject', {
    value: {
        [`${cluster.clusterOpenIdConnectIssuer}:sub`]: [
            'system:serviceaccount:kube-system:aws-load-balancer-controller'
        ]
    }
});

const albAccountRole = new iam.CfnRole(this, 'alb-service-account', {
    roleName: `${config.platform.name}-AmazonEKSLoadBalancerControllerRole`,
    assumeRolePolicyDocument: {
        Statement: [
            {
                Effect: 'Allow',
                Action: 'sts:AssumeRoleWithWebIdentity',
                Principal: {
                    Federated: cluster.openIdConnectProvider.openIdConnectProviderArn
                },
                Condition: {
                    StringEquals: serviceAccountSubject
                }
            }
        ]
    },
    policies: [{
        // -- step 2-3
        policyName: 'ALBIngressControllerIAMPolicy',
        policyDocument: iam.PolicyDocument.fromJson(rawControllerPolicy)
    }]
});

new KubernetesManifest(this, 'alb-custom-resource-definition', {
    cluster,
    manifest: [albCustomResourceDef],
    overwrite: true,
    prune: true
});

// step 6
new HelmChart(this, 'alb-ingress-controller', {
    cluster,
    chart: 'aws-load-balancer-controller',
    version: config.alb.chartVersion,
    repository: 'https://aws.github.io/eks-charts',
    namespace: fargateNamespaces['kube-system'],
    values: {
        clusterName: cluster.clusterName,
        rbac: { create: true },
        serviceAccount: {
            create: true,
            name: ALB_SERVICE_ACCOUNT,
            annotations: {
                'eks.amazonaws.com/role-arn': albAccountRole.attrArn
            },
        },
        vpcId: cluster.vpc.vpcId,
        region: config.aws.region,
        logLevel: 'debug',
    }
});

Hopefully the missing pieces from this are self-explanatory. I'd like to write this up properly to share when I have the time, because it was quite a lot of trial and error to get here!
(if there's anything missing, please ping and I'll do my best to fill in the missing pieces)

@briancaffey

@micheal-hill Thank you very much for sharing. I'm going to try this out and hopefully move past these errors I have been having installing the AWS Load Balancer Controller. I have had a lot of failed pipelines and it is pretty discouraging but I would really like to figure this out and hopefully contribute some documentation about how to install this controller with a vanilla EKS setup using CDK. I may post here later if I can't get it working using your code sample as a reference.

Sidenote -- I have been having trouble using Fargate Profiles with EKS. I keep seeing that the CoreDNS pods are stuck pending. I'm not sure if that is an issue you have also faced. For now I'm using the default 2-node setup while I'm experimenting with EKS.

@briancaffey

briancaffey commented May 31, 2021

@micheal-hill I think I was able to get the CRDs deployed thanks to the code you shared, but I'm still getting issues installing the Helm chart. I posted an issue in the https://github.com/aws/eks-charts repo here aws/eks-charts#529 where I provided a full explanation of the issue I'm having with more detailed error messages. Am I missing anything here? In your script, it looks like you had some relative imports of YAML files. I tried using URLs to install the CRDs, which I think I did successfully. I'm using the CRDs from here: https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml, and my HelmChart declaration looks very similar to yours, but I'm still experiencing the error.

Update: I have it working now. I had the wrong version. It should be version 1.2.0 of the Helm chart, which installs version 2.2.0 of the AWS Load Balancer Controller.
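
For reference, a sketch of the corrected options based on the update above. Note the chart name has no eks/ prefix when a repository URL is given, and version is the chart version, not the controller version:

// Sketch of the corrected Helm options (per the update above):
this.cluster.addHelmChart('aws-load-balancer-controller-helm-chart', {
  repository: 'https://aws.github.io/eks-charts',
  chart: 'aws-load-balancer-controller',  // not 'eks/aws-load-balancer-controller'
  release: 'aws-load-balancer-controller',
  version: '1.2.0',  // chart version; installs controller v2.2.0
  namespace: 'kube-system',
  values: {
    clusterName: this.cluster.clusterName,
    serviceAccount: {
      create: false,
      name: 'aws-load-balancer-controller',
    },
  },
});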

@zacyang

zacyang commented May 31, 2021

Note that eks-charts recently updated the version of the aws-load-balancer-controller image as well; see https://github.com/aws/eks-charts/pull/519/files. Additional permissions will also be required.
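
If you pin to the newer image, the extra permissions can be attached the same way the base iam_policy.json was; a sketch with a placeholder action (take the real list from the linked PR or the updated iam_policy.json):

// Sketch only: append extra statements to the service account's IRSA role.
// The action below is a placeholder, not the actual required addition.
albServiceAccount.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ['elasticloadbalancing:DescribeTags'],  // placeholder
  resources: ['*'],
}));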

@zxkane
Contributor Author

zxkane commented Jun 1, 2021

For anyone interested in deploying the AWS Load Balancer Controller into EKS via CDK, you can refer to the implementation in the solution below:

https://github.com/aws-samples/nexus-oss-on-aws/blob/d3a092d72041b65ca1c09d174818b513594d3e11/src/lib/sonatype-nexus3-stack.ts#L207-L242

@iliapolo iliapolo removed their assignment Jun 27, 2021
@mergify mergify bot closed this as completed in #17618 Nov 22, 2021
mergify bot pushed a commit that referenced this issue Nov 22, 2021
Add support for deploying the [AWS ALB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/) onto the cluster. 

Resolves #8836

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

TikiTDO pushed a commit to TikiTDO/aws-cdk that referenced this issue Feb 21, 2022
Add support for deploying the [AWS ALB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/) onto the cluster. 

Resolves aws#8836

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*