Release public artifacts (lambda layers for custom resources, docker images) #39
Comments
Let's focus on S3 for Lambda bundles because that is a pertinent use case we have to reduce the size of the mono-module. Specifically, the aws-s3-deployment module currently needs to bundle the entire AWS CLI because it leverages the CLI from its custom resource handler. The only value of using SAR here would be that they take care of replication, but that's not a problem with S3 since S3 now supports cross-region replication.

So the design approach would be to create a public S3 bucket in each region we support, with a well-known name and cross-region replication. We will store .zip files for AWS Lambda layers there, which will be mounted into the custom resources of AWS constructs such as s3-deployment and EKS. To achieve this, we will add another type of artifact to our build system (e.g. add …).

Note that this is susceptible to an eventual consistency issue with the publishing of the actual modules that reference the artifact, so perhaps it has to be the first publisher (i.e. a separate pipeline stage or run-order). The publisher will no-op if there is already a file in that location, which is aligned with how we publish (publishing only really happens when we actually bump the version, and published artifacts are immutable).
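A minimal TypeScript sketch of the consumer side of this idea. The per-region bucket naming scheme (`cdk-public-artifacts-<region>`) and the key layout are assumptions for illustration only; no such bucket or convention exists today.

```ts
import * as lambda from '@aws-cdk/aws-lambda';
import * as s3 from '@aws-cdk/aws-s3';
import { Construct, Stack } from '@aws-cdk/core';

// Hypothetical: one public bucket per region with a well-known name,
// holding immutable, versioned .zip files for Lambda layers.
export class PublicAwsCliLayer extends lambda.LayerVersion {
  constructor(scope: Construct, id: string) {
    const stack = Stack.of(scope);
    const bucket = s3.Bucket.fromBucketName(
      scope, `${id}Bucket`,
      // Bucket name is an assumed naming scheme, not an existing convention.
      `cdk-public-artifacts-${stack.region}`,
    );
    super(scope, id, {
      // Key layout (tool name + version) is likewise illustrative.
      code: lambda.Code.fromBucket(bucket, 'lambda-layers/awscli/1.0.0.zip'),
      description: 'AWS CLI layer consumed from a regional public artifacts bucket',
    });
  }
}
```

Because the bucket name is derived from the stack's region, the same construct would resolve to the local replica in every region the artifacts are replicated to.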
Agree. And let's include …
Let's also include …
Exactly for those reasons (S3 does not replicate to non-commercial partitions, and we'd have to keep up with new region launches) it's going to be a lot of work keeping up with users in all regions. We would be opting in to a fairly large operational burden, and I don't really see the advantages outweighing that.
I think we don't have to do this as part of 2.0, but I suspect that the size of monocdk will eventually be a motivating factor to implement this. I agree it's not trivial.
The EKS module uses the AWS CLI, `kubectl` and `helm` in order to interact with the Kubernetes cluster. These tools were consumed from a SAR app maintained by @pahud as an AWS Sample (see [repo](https://github.com/aws-samples/aws-lambda-layer-kubectl)). This dependency on sample code introduces an operational and maintenance risk, and as part of productizing the EKS module we need to break it. The dependency on SAR is not required, and it adds a few unnecessary layers (a nested stack, SAR regional availability, etc.).

To that end, this change bundles the AWS CLI and the Kubernetes tools (`kubectl` and `helm`) into the AWS CDK. These layers are maintained in two new CDK modules called `@aws-cdk/lambda-layer-awscli` and `@aws-cdk/lambda-layer-kubectl` respectively. These are normal CDK modules that export a `lambda.LayerVersion` resource that can be mounted to any AWS Lambda function. Since the s3-deployment module also needs the AWS CLI (and bundles it), we now reuse the AWS CLI layer there as well.

Module sizes:

- lambda-layer-awscli: 10MiB
- lambda-layer-kubectl: 24MiB

This change increases the total module size of the MonoCDK by 24MiB (the 10MiB AWS CLI layer is shared with s3-deployment, which already bundled it). In the future we plan to remove these bundles from the library and publish them externally so they can be consumed at deploy-time, but this is out of scope for this PR (see aws/aws-cdk-rfcs#39).

Resolves #11874

BREAKING CHANGE: the `@aws-cdk/eks.KubectlLayer` layer class has been moved to `@aws-cdk/lambda-layer-kubectl.KubectlLayer`.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
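For illustration, a minimal sketch of how these layer modules are meant to be consumed. The class names `AwsCliLayer` and `KubectlLayer` correspond to the modules described above; the surrounding function, handler code and runtime are placeholders, not part of the actual EKS provider.

```ts
import * as lambda from '@aws-cdk/aws-lambda';
import { AwsCliLayer } from '@aws-cdk/lambda-layer-awscli';
import { KubectlLayer } from '@aws-cdk/lambda-layer-kubectl';
import { Construct, Duration } from '@aws-cdk/core';

// Example custom-resource handler that needs both the AWS CLI and kubectl/helm.
export function addKubectlHandler(scope: Construct): lambda.Function {
  return new lambda.Function(scope, 'KubectlHandler', {
    runtime: lambda.Runtime.PYTHON_3_7,
    handler: 'index.handler',
    code: lambda.Code.fromAsset('./handler'),  // handler code is illustrative
    timeout: Duration.minutes(15),
    layers: [
      new AwsCliLayer(scope, 'AwsCliLayer'),   // bundles the AWS CLI
      new KubectlLayer(scope, 'KubectlLayer'), // bundles kubectl and helm
    ],
  });
}
```

The layers are ordinary `lambda.LayerVersion` resources, so any function in the same app can attach them via the `layers` property.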
Description
When the AWS Construct Library includes assets such as custom resource providers, Lambda bundles, Docker images or static files, we currently bundle them into the library itself, and they are then built and published into user accounts at deploy time. Since the CDK is public and open-source, it should be possible to publish these assets to S3/ECR in all regions and consume them as public assets. This would reduce the size of our libraries and the time it takes for these constructs to deploy.
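As a rough sketch of the publishing side, an idempotent per-region upload that no-ops when the artifact already exists (matching the immutable-artifact behavior discussed in the comments above) could look like the following. The bucket naming scheme and key layout are assumptions for illustration, not an existing convention.

```ts
import * as AWS from 'aws-sdk';

// Upload a layer .zip to the per-region public bucket only if that exact key
// does not already exist. Published artifacts are treated as immutable, so an
// existing object means the version was already released and we do nothing.
async function publishIfAbsent(region: string, key: string, body: Buffer): Promise<void> {
  const s3 = new AWS.S3({ region });
  const bucket = `cdk-public-artifacts-${region}`; // hypothetical naming scheme

  try {
    await s3.headObject({ Bucket: bucket, Key: key }).promise();
    return; // already published: no-op
  } catch (err) {
    if ((err as AWS.AWSError).statusCode !== 404) {
      throw err; // anything other than "not found" is a real failure
    }
  }

  await s3.putObject({ Bucket: bucket, Key: key, Body: body }).promise();
}
```

Running this step before the module publishers (a separate pipeline stage or earlier run-order) would address the eventual-consistency concern raised above, since the artifact would exist before any module referencing it is released.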
Progress