[aws-eks] AWS load balancer controller support #8836
Comments
Definitely need this one. AFAIK, you have to use eks.Cluster and not eks.FargateCluster without this because the Nginx one makes an ELB that doesn't work with Fargate pods. Did I miss something? |
👍 This would be a fantastic construct to include, I have just spent a painful day rolling my own |
@zxkane This is great - would you be interested in creating a PR with your proposed solution? |
Will do it. |
Here is a version I updated for the latest chart.

```ts
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';
import * as yaml from 'js-yaml';
const request = require('sync-request');

export interface IAlbIngressControllerProps {
  readonly cluster: eks.ICluster;
  readonly vpcId: string;
  readonly region: string;
}

export class AlbIngressController extends cdk.Construct {
  constructor(scope: cdk.Construct, id: string, props: IAlbIngressControllerProps) {
    super(scope, id);

    // https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/v2_0_1_full.yaml
    const albBaseResourceBaseUrl = 'https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/';
    const albIngressControllerPolicyUrl = `${albBaseResourceBaseUrl}iam_policy.json`;
    const albNamespace = 'kube-system';

    const albServiceAccount = props.cluster.addServiceAccount('alb-ingress-controller', {
      name: 'alb-ingress-controller',
      namespace: albNamespace,
    });

    // Download the controller's IAM policy and attach each statement to the service account role
    const policyJson = request('GET', albIngressControllerPolicyUrl).getBody('utf8');
    ((JSON.parse(policyJson))['Statement'] as any[]).forEach(statement => {
      albServiceAccount.addToPolicy(iam.PolicyStatement.fromJson(statement));
    });

    const albDeployment = yaml.safeLoadAll(request('GET', `${albBaseResourceBaseUrl}v2_0_1_full.yaml`).getBody('utf8'));
    const albResources = props.cluster.addManifest('aws-alb-ingress-controller', ...albDeployment);

    const albResourcePatch = new eks.KubernetesPatch(this, 'alb-ingress-controller-patch', {
      cluster: props.cluster,
      resourceName: 'deployment/alb-ingress-controller',
      resourceNamespace: albNamespace,
      applyPatch: {
        spec: {
          template: {
            spec: {
              containers: [
                {
                  name: 'alb-ingress-controller',
                  args: [
                    '--ingress-class=alb',
                    '--feature-gates=wafv2=false',
                    `--cluster-name=${props.cluster.clusterName}`,
                    `--aws-vpc-id=${props.vpcId}`,
                    `--aws-region=${props.region}`,
                  ],
                },
              ],
            },
          },
        },
      },
      restorePatch: {
        spec: {
          template: {
            spec: {
              containers: [
                {
                  name: 'alb-ingress-controller',
                  args: [
                    '--ingress-class=alb',
                    '--feature-gates=wafv2=false',
                    `--cluster-name=${props.cluster.clusterName}`,
                  ],
                },
              ],
            },
          },
        },
      },
    });
    albResourcePatch.node.addDependency(albResources);
  }
}
```

Note it has a dependency on cert-manager, so add a dependency on this:

```ts
// `cluster` is the eks.Cluster the controller is deployed to
const certManagerChart = new eks.HelmChart(this, 'cert-manager', {
  cluster,
  createNamespace: true,
  namespace: 'cert-manager',
  repository: 'https://charts.jetstack.io',
  chart: 'cert-manager',
  release: 'cert-manager',
  values: {
    // https://github.com/jetstack/cert-manager/blob/master/deploy/charts/cert-manager/values.yaml
    installCRDs: true,
  },
  version: 'v1.1.0',
});

// `albController` is the AlbIngressController instance from above
albController.node.addDependency(certManagerChart);
```
 |
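As an aside on the policy-download step in the construct above: it parses the downloaded iam_policy.json and attaches each statement individually. A minimal standalone sketch of just that parsing step (the helper name `policyStatements` and the trimmed sample document are illustrative, not taken from the real policy file):

```typescript
// Extract the Statement entries from an IAM policy document, as the
// construct above does before wrapping each one in
// iam.PolicyStatement.fromJson.
interface PolicyDocument {
  Version: string;
  Statement: Record<string, unknown>[];
}

function policyStatements(policyJson: string): Record<string, unknown>[] {
  const doc = JSON.parse(policyJson) as PolicyDocument;
  if (!Array.isArray(doc.Statement)) {
    throw new Error('Policy document has no Statement array');
  }
  return doc.Statement;
}

// A trimmed-down document shaped like the controller's iam_policy.json:
const sample = JSON.stringify({
  Version: '2012-10-17',
  Statement: [
    { Effect: 'Allow', Action: ['ec2:DescribeVpcs'], Resource: '*' },
    { Effect: 'Allow', Action: ['elasticloadbalancing:*'], Resource: '*' },
  ],
});
console.log(policyStatements(sample).length); // prints 2
```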
This might make a good starting point: https://www.npmjs.com/package/cdk8s-aws-load-balancer-controller |
We had the same issue and unfortunately the HelmChart part for cert-manager did not work for us. Here is a version we made which works fine using manifests from the official documentation, but it can of course be improved. Versions:

ALB Ingress Controller deployment:

```ts
private deployAlbIngressController() {
  const certManagerResult = this.awsCertManagerService.deployCertManager(this.cluster);
  const albIngressControllerProps = {
    cluster: this.cluster,
    region: this.conf.provisionConf.awsRegion,
    vpcId: this.conf.provisionConf.awsVpcIntegration?.vpcId,
    platformName: this.conf.platformName,
    deploymentDir: this.deploymentDir,
    waitCondition: certManagerResult.waitCondition,
  };
  new AwsAlbIngressController(this.cluster, CDKNamingUtil.k8sALBIngressController(this.conf.platformName), albIngressControllerProps);
}
```

AwsCertManagerService.ts:

```ts
import {log} from "../log";
import * as styles from "../styles";
import * as fs from "fs";
import * as path from "path";
import * as jsYaml from "js-yaml";
import * as eks from "@aws-cdk/aws-eks";
import CDKNamingUtil from "../util/CDKNamingUtil";
import {Configuration} from "../configuration";
import {AwsPlatform} from "../model/Configuration";
import * as cdk from "@aws-cdk/core";

export interface DeployCertManagerResult {
  cdkManifests: eks.KubernetesManifest[];
  waitCondition: cdk.CfnWaitCondition;
}

interface K8sManifestJson {
  kind: string;
  metadata: {
    name: string;
  };
}

interface ManifestGroup {
  manifests: K8sManifestJson[];
  size: number;
}

export default class AwsCertManagerService {
  deploymentDir: string;
  conf: Configuration<AwsPlatform>;

  constructor(conf: Configuration<AwsPlatform>, deploymentDir: string) {
    this.conf = conf;
    this.deploymentDir = deploymentDir;
  }

  /*
   * Returns the parent Construct so we can depend on it when deploying the ALB Ingress Controller
   */
  deployCertManager(cluster: eks.Cluster): DeployCertManagerResult {
    log(styles.title(`*** Deploying Kubernetes cert-manager ***`));
    // https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/deploy/installation/
    const certManagerManifest = fs.readFileSync(path.join(this.deploymentDir, 'resources', 'alb-ingress-controller', 'cert-manager-v1.1.0.yaml'), {encoding: 'utf8'});
    const manifests: K8sManifestJson[] = jsYaml.loadAll(certManagerManifest);
    const groups: ManifestGroup[] = this.splitManifestsInGroups(manifests);
    groups.forEach((group, groupIndex) => {
      console.log(`cert-manager manifests group ${groupIndex}: ${group.manifests.length} manifests, size: ${group.size} bytes`);
    });
    const cdkManifests = groups.map((group, groupIndex) => {
      return new eks.KubernetesManifest(cluster, `${CDKNamingUtil.k8sCertManager(this.conf.platformName)}-part-${groupIndex}`, {
        cluster: cluster,
        manifest: group.manifests,
        overwrite: true
      });
    });

    // Define a wait condition and handle for cert-manager to be fully deployed
    const waitConditionHandle = new cdk.CfnWaitConditionHandle(cluster, CDKNamingUtil.k8sCertManagerWaitConditionHandle(this.conf.platformName));
    const waitCondition = new cdk.CfnWaitCondition(cluster, CDKNamingUtil.k8sCertManagerWaitCondition(this.conf.platformName), {
      count: 1,
      handle: waitConditionHandle.ref,
      timeout: '600',
    });
    for (let certManagerManifest of cdkManifests) {
      waitConditionHandle.node.addDependency(certManagerManifest);
    }

    // A short-lived Pod that waits, then signals the handle URL via curl
    const certManagerWaitConditionSignal = cluster.addManifest(CDKNamingUtil.k8sCertManagerWaitConditionSignal(this.conf.platformName), {
      kind: "Pod",
      apiVersion: "v1",
      metadata: {
        name: CDKNamingUtil.k8sCertManagerWaitConditionSignal(this.conf.platformName),
        namespace: "default"
      },
      spec: {
        initContainers: [{
          name: "wait-cert-manager-service",
          image: "busybox:1.28",
          command: ['sh', '-c', 'echo begin sleep && sleep 60 && echo end sleep']
        }],
        containers: [{
          name: "cert-manager-waitcondition-signal",
          image: "curlimages/curl:7.74.0",
          args: [
            '-vvv',
            '-X', 'PUT',
            '-H', 'Content-Type:',
            '--data-binary', '{"Status" : "SUCCESS","Reason" : "Configuration Complete", "UniqueId" : "ID1234", "Data" : "Cert manager should be ready by now."}',
            waitConditionHandle.ref
          ]
        }],
        restartPolicy: "Never"
      }
    });
    certManagerWaitConditionSignal.node.addDependency(waitConditionHandle);
    return {
      cdkManifests,
      waitCondition
    };
  }

  private splitManifestsInGroups(manifests: K8sManifestJson[]): ManifestGroup[] {
    // Max payload size for a CloudFormation event is 262144 bytes
    // (we got that information from an error message, not from the doc)
    const maxGroupSize = Math.floor(262144 * .8);
    const groups: ManifestGroup[] = [];
    // Split all manifests into groups so the total size of each group stays under 262144 bytes
    manifests.forEach(manifest => {
      const manifestSize = JSON.stringify(manifest).length;
      console.log(`cert-manager manifest '${manifest.kind}/${manifest?.metadata?.name}' size is ${manifestSize} characters`);
      const lastGroup = (groups.length && groups[groups.length - 1]) || null;
      if (lastGroup === null || (lastGroup.size + manifestSize) > maxGroupSize) {
        groups.push({
          manifests: [manifest],
          size: manifestSize
        });
      } else {
        lastGroup.manifests.push(manifest);
        lastGroup.size += manifestSize;
      }
    });
    return groups;
  }
}
```

AwsAlbIngressController.ts:

```ts
import * as cdk from "@aws-cdk/core";
import * as eks from "@aws-cdk/aws-eks";
import * as iam from "@aws-cdk/aws-iam";
import * as jsYaml from "js-yaml";
import * as fs from "fs";
import * as path from "path";
import CDKNamingUtil from "../util/CDKNamingUtil";

export interface IAlbIngressControllerProps {
  readonly cluster: eks.Cluster;
  readonly vpcId?: string;
  readonly region: string;
  readonly deploymentDir: string;
  readonly platformName: string;
  readonly waitCondition: cdk.CfnWaitCondition;
}

const AWS_LOAD_BALANCER_CONTROLLER = 'aws-load-balancer-controller';

export class AwsAlbIngressController extends cdk.Construct {
  constructor(scope: cdk.Construct, id: string, props: IAlbIngressControllerProps) {
    super(scope, id);
    // If the stack is deployed again, make sure this service is fully deleted (see inside the Lens tool)
    const albNamespace = 'kube-system';
    const albServiceAccount = props.cluster.addServiceAccount(AWS_LOAD_BALANCER_CONTROLLER, {
      name: AWS_LOAD_BALANCER_CONTROLLER,
      namespace: albNamespace
    });
    const policy: { Statement: any[] } = JSON.parse(fs.readFileSync(path.join(props.deploymentDir, 'resources', 'alb-ingress-controller', 'iam-policy.json'), {encoding: 'utf8'}));
    policy.Statement.forEach(statement => albServiceAccount.addToPrincipalPolicy(iam.PolicyStatement.fromJson(statement)));

    let albManifest = fs.readFileSync(path.join(props.deploymentDir, 'resources', 'alb-ingress-controller', 'alb-ingress-controller-v2.1.0.yaml'), {encoding: 'utf8'});
    albManifest = albManifest.replace(/your-cluster-name/g, CDKNamingUtil.kubernetesClusterName(props.platformName));
    const ingressControllerManifest = new eks.KubernetesManifest(this, CDKNamingUtil.k8sALBIngressController(props.platformName), {
      cluster: props.cluster,
      manifest: jsYaml.loadAll(albManifest),
      overwrite: true
    });
    ingressControllerManifest.node.addDependency(props.waitCondition);
  }
}
```

We had to update the manifests because the formatting of the original descriptions was not readable as-is by CloudFormation. |
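The manifest-splitting trick above is the key workaround for CloudFormation's event payload limit, and it can be exercised outside of CDK. Here is a standalone sketch of the same greedy grouping, made generic over the manifest type; the 80% safety margin mirrors the code above:

```typescript
// Greedily pack manifests into groups whose serialized size stays under
// roughly 80% of CloudFormation's 262144-byte event payload limit.
interface ManifestGroup<T> {
  manifests: T[];
  size: number;
}

function splitManifestsInGroups<T>(
  manifests: T[],
  maxGroupSize: number = Math.floor(262144 * 0.8),
): ManifestGroup<T>[] {
  const groups: ManifestGroup<T>[] = [];
  for (const manifest of manifests) {
    const manifestSize = JSON.stringify(manifest).length;
    const lastGroup = groups[groups.length - 1];
    // Start a new group when adding this manifest would exceed the cap
    if (!lastGroup || lastGroup.size + manifestSize > maxGroupSize) {
      groups.push({ manifests: [manifest], size: manifestSize });
    } else {
      lastGroup.manifests.push(manifest);
      lastGroup.size += manifestSize;
    }
  }
  return groups;
}
```

In the construct above, each resulting group becomes one eks.KubernetesManifest, so every underlying custom-resource event stays under the limit.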
@BriceGestas how did you create the OIDC provider in CDK? |
Hello @vsetka, actually I did not create the OIDC provider myself. This is done via the addServiceAccount method on the cluster. If you look at the source code of CDK:

```ts
public addServiceAccount(id: string, options: ServiceAccountOptions = {}): ServiceAccount {
  return new ServiceAccount(this, id, {
    ...options,
    cluster: this,
  });
}
```

And the constructor of ServiceAccount is:

```ts
constructor(scope: Construct, id: string, props: ServiceAccountProps) {
  super(scope, id);

  const { cluster } = props;
  this.serviceAccountName = props.name ?? Names.uniqueId(this).toLowerCase();
  this.serviceAccountNamespace = props.namespace ?? 'default';

  /* Add conditions to the role to improve security. This prevents other pods in the same namespace to assume the role.
   * See documentation: https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html
   */
  const conditions = new CfnJson(this, 'ConditionJson', {
    value: {
      [`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:aud`]: 'sts.amazonaws.com',
      [`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:sub`]: `system:serviceaccount:${this.serviceAccountNamespace}:${this.serviceAccountName}`,
    },
  });
  const principal = new OpenIdConnectPrincipal(cluster.openIdConnectProvider).withConditions({
    StringEquals: conditions,
  });

  this.role = new Role(this, 'Role', { assumedBy: principal });

  this.assumeRoleAction = this.role.assumeRoleAction;
  this.grantPrincipal = this.role.grantPrincipal;
  this.policyFragment = this.role.policyFragment;
}
```
 |
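In plain terms, the condition object built above maps issuer-scoped keys to the expected token audience and subject. A standalone sketch of just that mapping (the helper name is hypothetical; in real CDK code the issuer is an unresolved token, which is why the constructor wraps the object in CfnJson):

```typescript
// Build the StringEquals conditions that scope an IAM role to a single
// Kubernetes service account via the cluster's OIDC issuer.
function oidcConditions(
  issuer: string, // e.g. 'oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE'
  namespace: string,
  serviceAccountName: string,
): Record<string, string> {
  return {
    [`${issuer}:aud`]: 'sts.amazonaws.com',
    [`${issuer}:sub`]: `system:serviceaccount:${namespace}:${serviceAccountName}`,
  };
}
```

The `:sub` condition is what prevents other pods (and other service accounts in the same namespace) from assuming the role.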
@BriceGestas I am trying to get cert-manager installed via the opencdk8s lb chart and am probably running into the same limitation, where my stack hangs. I came across this issue. What you've done is something I would like to do, but our code is in Python and I am not familiar with TS. Do you have any example of how the same implementation could be achieved in Python? Thanks |
@BriceGestas what you've got is a decent solution (I haven't tried it), but I did have an issue where the cert-manager.yaml is 1.8 MB, which hangs my stack; a possible cause was the file size. |
Sorry, I am not a Python code writer so I won't be able to help you unfortunately... |
Hello @zacyang
For the unfortunate souls who hit the same issue, what @BriceGestas referred to is here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-waitcondition.html Also tried this morning: it can install cert-manager; however, for some reason the installed cert-manager is not working correctly, still digging into why. Could be the sequence is messed up. |
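For reference, the signal that the curl container sends (see the Pod manifest earlier in the thread) is just an HTTP PUT of a small JSON body to the presigned wait-condition handle URL. A sketch of building that body — the field names come from the CloudFormation wait condition protocol; the helper itself is hypothetical:

```typescript
// Serialize a success signal for a CloudFormation wait condition handle.
// The handle URL (waitConditionHandle.ref in CDK) receives this body via
// an HTTP PUT with an empty Content-Type header.
function waitConditionSignalBody(uniqueId: string, data: string): string {
  return JSON.stringify({
    Status: 'SUCCESS',
    Reason: 'Configuration Complete',
    UniqueId: uniqueId,
    Data: data,
  });
}
```

If the handle never receives `count` signals before `timeout` elapses, the CfnWaitCondition resource fails and rolls the stack back, which is exactly the hang/failure behavior discussed above.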
I posted a SO question related to this: https://stackoverflow.com/questions/67762104/how-to-install-aws-load-balancer-controller-in-a-cdk-project

I have also been trying to do this. I am trying to write a high-level CDK construct that can be used to deploy Django applications with EKS. I have most of the k8s manifests defined for the application, but I am struggling with the Ingress part. Looking into different options, I have decided to try installing the AWS Load Balancer Controller (https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/). Their documentation has instructions for installing the controller using the AWS CLI and the eksctl CLI tool, so I'm working on trying to translate these into CDK code. Here's what I have so far:

```ts
import * as ec2 from '@aws-cdk/aws-ec2';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';
import * as cdk from '@aws-cdk/core';
import { ApplicationVpc } from './vpc';

var request = require('sync-request');

export interface DjangoEksProps {
  readonly vpc: ec2.IVpc;
}

export class DjangoEks extends cdk.Construct {
  public vpc: ec2.IVpc;
  public cluster: eks.Cluster;

  constructor(scope: cdk.Construct, id: string, props: DjangoEksProps) {
    super(scope, id);
    this.vpc = props.vpc;

    // allow all account users to assume this role in order to admin the cluster
    const mastersRole = new iam.Role(this, 'AdminRole', {
      assumedBy: new iam.AccountRootPrincipal(),
    });

    this.cluster = new eks.Cluster(this, "MyEksCluster", {
      version: eks.KubernetesVersion.V1_19,
      vpc: this.vpc,
      mastersRole,
      defaultCapacity: 2,
    });

    // Adopted from comments in this issue: https://github.com/aws/aws-cdk/issues/8836
    const albServiceAccount = this.cluster.addServiceAccount('aws-alb-ingress-controller-sa', {
      name: 'aws-load-balancer-controller',
      namespace: 'kube-system',
    });

    const awsAlbControllerPolicyUrl = 'https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.0/docs/install/iam_policy.json';
    const policyJson = request('GET', awsAlbControllerPolicyUrl).getBody('utf8');
    ((JSON.parse(policyJson))['Statement'] as any[]).forEach(statement => {
      albServiceAccount.addToPrincipalPolicy(iam.PolicyStatement.fromJson(statement));
    });

    // This is where I am stuck
    // https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/#add-controller-to-cluster
    // I tried running this
    // kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
    this.cluster.addHelmChart('aws-load-balancer-controller-helm-chart', {
      repository: 'https://aws.github.io/eks-charts',
      chart: 'eks/aws-load-balancer-controller',
      release: 'aws-load-balancer-controller',
      version: '2.2.0', // NOTE: per the follow-up below, this was the bug: chart version 1.2.0 installs controller v2.2.0
      namespace: 'kube-system',
      values: {
        clusterName: this.cluster.clusterName,
        serviceAccount: {
          create: false,
          name: 'aws-load-balancer-controller',
        },
      },
    });
  }
}
```

Here are the errors I am seeing in CDK when I do

and a related error:

The CDK EKS docs say that The AWS Load Balancer Controller installation instructions also say:

I'm not sure how I can do this part in CDK. The link to I think the error in my code comes from the Is anyone else installing the AWS Load Balancer Controller with CDK like I am trying to do here? One project that I have been trying to reference is https://github.com/neilkuan/cdk8s-aws-load-balancer-controller. There is also discussion in this GitHub issue: #8836 that might help as well, but a lot of the discussion is around cert-manager, which doesn't seem to be relevant for what I'm trying to do. |
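One detail that is easy to get wrong in the snippet above is the values map: when the service account is created separately via addServiceAccount, the chart must be told not to create its own. A tiny sketch of that values shape (the helper name `albControllerHelmValues` is hypothetical):

```typescript
// Values passed to the aws-load-balancer-controller Helm chart when the
// service account already exists (created via addServiceAccount), so the
// chart must reference it rather than create a new one.
interface AlbHelmValues {
  clusterName: string;
  serviceAccount: { create: boolean; name: string };
}

function albControllerHelmValues(
  clusterName: string,
  serviceAccountName = 'aws-load-balancer-controller',
): AlbHelmValues {
  return {
    clusterName,
    serviceAccount: { create: false, name: serviceAccountName },
  };
}
```

The service account name here must match the `name` given to addServiceAccount, otherwise the controller pods run without the IRSA role.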
@briancaffey I ran into a similar issue of being unable to get the ALB controller (without cert-manager) installed via the CDK Helm support, due to the target group binding CRDs dependency. I couldn't get the target group CRDs installed via CDK, but installing them manually, as you've stated, worked for me too. ALB (without cert-manager) using the add_manifest method worked, though, as I couldn't find a similar equivalent of "helm upgrade" in CDK. Because per the documentation, this one worked when executed manually: "helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller |
I have a working CDK deployment of ALB with Fargate:

```ts
// --- based on https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
import rawControllerPolicy from './alb-controller.policy__v2.1.0.json';

const albCustomResourceDef = await import.meta.resolve('../vendor/aws-alb-crds/crds.yaml')
  .then(loadYamlFromURL);

const ALB_SERVICE_ACCOUNT = 'aws-load-balancer-controller';

// -- step 4
const serviceAccountSubject = new cdk.CfnJson(this, 'account-subject', {
  value: {
    [`${cluster.clusterOpenIdConnectIssuer}:sub`]: [
      'system:serviceaccount:kube-system:aws-load-balancer-controller'
    ]
  }
});

const albAccountRole = new iam.CfnRole(this, 'alb-service-account', {
  roleName: `${config.platform.name}-AmazonEKSLoadBalancerControllerRole`,
  assumeRolePolicyDocument: {
    Statement: [
      {
        Effect: 'Allow',
        Action: 'sts:AssumeRoleWithWebIdentity',
        Principal: {
          Federated: cluster.openIdConnectProvider.openIdConnectProviderArn
        },
        Condition: {
          StringEquals: serviceAccountSubject
        }
      }
    ]
  },
  policies: [{
    // -- step 2-3
    policyName: 'ALBIngressControllerIAMPolicy',
    policyDocument: iam.PolicyDocument.fromJson(rawControllerPolicy)
  }]
});

new KubernetesManifest(this, 'alb-custom-resource-definition', {
  cluster,
  manifest: [albCustomResourceDef],
  overwrite: true,
  prune: true
});

// step 6
new HelmChart(this, 'alb-ingress-controller', {
  cluster,
  chart: 'aws-load-balancer-controller',
  version: config.alb.chartVersion,
  repository: 'https://aws.github.io/eks-charts',
  namespace: fargateNamespaces['kube-system'],
  values: {
    clusterName: cluster.clusterName,
    rbac: { create: true },
    serviceAccount: {
      create: true,
      name: ALB_SERVICE_ACCOUNT,
      annotations: {
        'eks.amazonaws.com/role-arn': albAccountRole.attrArn
      },
    },
    vpcId: cluster.vpc.vpcId,
    region: config.aws.region,
    logLevel: 'debug',
  }
});
```

Hopefully the missing pieces from this are self-explanatory. I'd like to write this up properly to share when I have the time, because it was quite a lot of trial and error to get here! |
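The CfnRole trust policy built above can be previewed as a plain object. A standalone sketch (hypothetical helper) producing the same web-identity trust document, with literal strings in place of the CfnJson token used in the real code:

```typescript
// Build the assume-role (trust) policy document for the controller's IAM
// role: a federated web-identity trust scoped to one service account.
function albTrustPolicy(
  oidcProviderArn: string,
  issuer: string,
  serviceAccountSubject = 'system:serviceaccount:kube-system:aws-load-balancer-controller',
) {
  return {
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Action: 'sts:AssumeRoleWithWebIdentity',
        Principal: { Federated: oidcProviderArn },
        Condition: {
          StringEquals: { [`${issuer}:sub`]: [serviceAccountSubject] },
        },
      },
    ],
  };
}
```

In the CDK snippet, the Condition value has to be a CfnJson because the issuer is only resolved at deploy time; this sketch just shows the shape of the final rendered document.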
@micheal-hill Thank you very much for sharing. I'm going to try this out and hopefully move past these errors I have been having installing the AWS Load Balancer Controller. I have had a lot of failed pipelines and it is pretty discouraging but I would really like to figure this out and hopefully contribute some documentation about how to install this controller with a vanilla EKS setup using CDK. I may post here later if I can't get it working using your code sample as a reference. Sidenote -- I have been having trouble using Fargate Profiles with EKS. I keep seeing the CoreDNS pods are pending. I'm not sure if that is an issue you have also faced. For now I'm using the default 2 node setup as I'm experimenting with EKS. |
@micheal-hill I think I was able to get the CRDs deployed thanks to the code you shared, but I'm still getting issues installing the Helm chart. I posted an issue in the https://github.com/aws/eks-charts repo here aws/eks-charts#529 where I provided a full explanation of the issue I'm having, with more detailed error messages. Am I missing anything here? In your script, it looks like you had some relative imports of YAML files. I tried using URLs to install the CRDs, which I think I did successfully. I'm using the CRDs from here: https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml, and my HelmChart declaration looks very similar to yours, but I'm still experiencing the error. Update: I have it working now. I had the wrong version. It should be version 1.2.0 of the Helm chart, which installs version 2.2.0 of the AWS Load Balancer Controller. |
Note eks-charts recently updated the version of the aws-load-balancer-controller image as well. See https://github.com/aws/eks-charts/pull/519/files ; additional permissions will also be required. |
For anyone interested in deploying the ALB controller into EKS via CDK, you can refer to the implementation of the solution below. |
Add support for deploying the [AWS ALB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/) onto the cluster. Resolves #8836 ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
|
It's a common use case to deploy the ALB ingress controller on EKS; it would be helpful to support it at the L2 construct level.
Use Case
Deploy the ALB ingress controller so that ALB can be used to serve Kubernetes Ingress resources.
Proposed Solution
Might implement a new L2 class, ALBIngressController, like below.
Other
This is a 🚀 Feature Request