
feat(eks): connect all custom resources to the cluster VPC #10200

Merged Dec 21, 2020 (13 commits)
21 changes: 16 additions & 5 deletions packages/@aws-cdk/aws-eks/README.md
@@ -433,6 +433,8 @@ new eks.Cluster(this, 'HelloEKS', {
});
```

> Note: Isolated VPCs (i.e. with no internet access) are not currently supported. See https://github.com/aws/aws-cdk/issues/12171

If you do not specify a VPC, one will be created on your behalf, and you can then access it via `cluster.vpc`. The cluster VPC will be associated with any EKS managed capacity (i.e. Managed Node Groups and Fargate Profiles).

If you allocate self managed capacity, you can specify which subnets the auto-scaling group should use:
@@ -444,8 +446,7 @@ cluster.addAutoScalingGroupCapacity('nodes', {
});
```

In addition to the cluster and the capacity, there are two additional components you might want to
provision within a VPC.
There are two additional components you might want to provision within the VPC.

#### Kubectl Handler

@@ -459,7 +460,18 @@ If the endpoint does not expose private access (via `EndpointAccess.PUBLIC`) **o

#### Cluster Handler

The `ClusterHandler` is a Lambda function responsible for interacting with the EKS API in order to control the cluster lifecycle. At the moment, this function cannot be provisioned inside the VPC. See [Attach all Lambda Functions to a VPC](https://github.com/aws/aws-cdk/issues/9509) for more details.
The `ClusterHandler` is a Lambda function responsible for interacting with the EKS API in order to control the cluster lifecycle. To provision this function inside the VPC, set the `placeClusterHandlerInVpc` property to `true`. This will place the function inside the private subnets of the VPC, based on the selection strategy specified in the [`vpcSubnets`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-eks.Cluster.html#vpcsubnetsspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan) property.
Review comment (Contributor): What about the kubectl handler?


You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful for configuring an HTTP proxy:

```ts
const cluster = new eks.Cluster(this, 'hello-eks', {
version: eks.KubernetesVersion.V1_18,
clusterHandlerEnvironment: {
'http_proxy': 'http://proxy.myproxy.com'
}
});
```

### Kubectl Support

@@ -1122,6 +1134,5 @@ Kubernetes [endpoint access](#endpoint-access), you must also specify:
## Known Issues and Limitations

* [One cluster per stack](https://github.com/aws/aws-cdk/issues/10073)
* [Object pruning](https://github.com/aws/aws-cdk/issues/10495)
* [Service Account dependencies](https://github.com/aws/aws-cdk/issues/9910)
* [Attach all Lambda Functions to VPC](https://github.com/aws/aws-cdk/issues/9509)
* [Support isolated VPCs](https://github.com/aws/aws-cdk/issues/12171)
23 changes: 23 additions & 0 deletions packages/@aws-cdk/aws-eks/lib/cluster-resource-provider.ts
@@ -1,4 +1,5 @@
import * as path from 'path';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as iam from '@aws-cdk/aws-iam';
import * as lambda from '@aws-cdk/aws-lambda';
import { Duration, NestedStack, Stack } from '@aws-cdk/core';
@@ -17,6 +18,21 @@ export interface ClusterResourceProviderProps {
* The IAM role to assume in order to interact with the cluster.
*/
readonly adminRole: iam.IRole;

/**
* The VPC to provision the functions in.
*/
readonly vpc?: ec2.IVpc;

/**
* The subnets to place the functions in.
*/
readonly subnets?: ec2.ISubnet[];

/**
* Environment to add to the handler.
*/
readonly environment?: { [key: string]: string };
}

/**
@@ -46,8 +62,11 @@ export class ClusterResourceProvider extends NestedStack {
code: lambda.Code.fromAsset(HANDLER_DIR),
description: 'onEvent handler for EKS cluster resource provider',
runtime: HANDLER_RUNTIME,
environment: props.environment,
handler: 'index.onEvent',
timeout: Duration.minutes(1),
vpc: props.subnets ? props.vpc : undefined,
vpcSubnets: props.subnets ? { subnets: props.subnets } : undefined,
});

const isComplete = new lambda.Function(this, 'IsCompleteHandler', {
@@ -56,13 +75,17 @@
runtime: HANDLER_RUNTIME,
handler: 'index.isComplete',
timeout: Duration.minutes(1),
vpc: props.subnets ? props.vpc : undefined,
vpcSubnets: props.subnets ? { subnets: props.subnets } : undefined,
});

this.provider = new cr.Provider(this, 'Provider', {
onEventHandler: onEvent,
isCompleteHandler: isComplete,
totalTimeout: Duration.hours(1),
queryInterval: Duration.minutes(1),
vpc: props.subnets ? props.vpc : undefined,
vpcSubnets: props.subnets ? { subnets: props.subnets } : undefined,
});

props.adminRole.grant(onEvent.role!, 'sts:AssumeRole');
5 changes: 5 additions & 0 deletions packages/@aws-cdk/aws-eks/lib/cluster-resource.ts
@@ -21,6 +21,8 @@ export interface ClusterResourceProps {
readonly endpointPublicAccess: boolean;
readonly publicAccessCidrs?: string[];
readonly vpc: ec2.IVpc;
readonly environment?: { [key: string]: string };
readonly subnets?: ec2.ISubnet[];
readonly secretsEncryptionKey?: kms.IKey;
}

@@ -57,6 +59,9 @@ export class ClusterResource extends CoreConstruct {

const provider = ClusterResourceProvider.getOrCreate(this, {
adminRole: this.adminRole,
subnets: props.subnets,
vpc: props.vpc,
environment: props.environment,
});

const resource = new CustomResource(this, 'Resource', {
24 changes: 23 additions & 1 deletion packages/@aws-cdk/aws-eks/lib/cluster.ts
@@ -428,6 +428,13 @@ export interface ClusterOptions extends CommonClusterOptions {
*/
readonly kubectlEnvironment?: { [key: string]: string };

/**
* Custom environment variables when interacting with the EKS endpoint to manage the cluster lifecycle.
*
* @default - No environment variables.
*/
readonly clusterHandlerEnvironment?: { [key: string]: string };

/**
* An AWS Lambda Layer which includes `kubectl`, Helm and the AWS CLI.
*
@@ -468,6 +475,14 @@
* @default true
*/
readonly prune?: boolean;

/**
* If set to true, the cluster handler functions will be placed in the private subnets
* of the cluster VPC, subject to the `vpcSubnets` selection strategy.
*
* @default false
*/
readonly placeClusterHandlerInVpc?: boolean;
Review comment (Contributor): It would make sense to use a similar prefix:

Suggested change:
- readonly placeClusterHandlerInVpc?: boolean;
+ readonly clusterHandlerVpc?: boolean;
}

/**
@@ -859,7 +874,6 @@ export class Cluster extends ClusterBase {

/**
* Custom environment variables when running `kubectl` against this cluster.
* @default - no additional environment variables
*/
public readonly kubectlEnvironment?: { [key: string]: string };

@@ -1020,8 +1034,15 @@
throw new Error('Vpc must contain private subnets when public endpoint access is restricted');
}

const placeClusterHandlerInVpc = props.placeClusterHandlerInVpc ?? false;

if (placeClusterHandlerInVpc && privateSubents.length === 0) {
throw new Error('Cannot place cluster handler in the VPC since no private subnets could be selected');
}

const resource = this._clusterResource = new ClusterResource(this, 'Resource', {
name: this.physicalName,
environment: props.clusterHandlerEnvironment,
roleArn: this.role.roleArn,
version: props.version.version,
resourcesVpcConfig: {
@@ -1041,6 +1062,7 @@
publicAccessCidrs: this.endpointAccess._config.publicCidrs,
secretsEncryptionKey: props.secretsEncryptionKey,
vpc: this.vpc,
subnets: placeClusterHandlerInVpc ? privateSubents : undefined,
});

if (this.endpointAccess._config.privateAccess && privateSubents.length !== 0) {
2 changes: 2 additions & 0 deletions packages/@aws-cdk/aws-eks/lib/kubectl-provider.ts
@@ -97,6 +97,8 @@ export class KubectlProvider extends NestedStack {

const provider = new cr.Provider(this, 'Provider', {
onEventHandler: handler,
vpc: cluster.kubectlPrivateSubnets ? cluster.vpc : undefined,
Review comment (Contributor Author, @iliapolo, Dec 15, 2020): This isn't related to the placeClusterHandlerInVpc property. But seems like we should have done this when we introduced private endpoints, consolidating all the related functions into the same network.
vpcSubnets: cluster.kubectlPrivateSubnets ? { subnets: cluster.kubectlPrivateSubnets } : undefined,
});

this.serviceToken = provider.serviceToken;