
Release public artifacts (lambda layers for custom resources, docker images) #39

Closed
6 of 7 tasks
eladb opened this issue Dec 10, 2019 · 5 comments
Labels: aws-construct-library (Cross-cutting features in the AWS Construct Library, across L2s) · ops (Operations) · status/done (Implementation complete)

Comments


eladb commented Dec 10, 2019

PR Champion

Description

When the AWS Construct Library includes an asset such as a custom resource provider, a Lambda bundle, a Docker image or static files, we currently bundle it into the library itself, and it is then built and published into each user's account at deploy time. Since the CDK is public and open source, it should be possible to publish these assets to S3/ECR in all regions and consume them as public assets. This will reduce the size of our libraries and the time it takes for these constructs to be deployed.

Progress

  • Tracking Issue Created
  • RFC PR Created
  • Core Team Member Assigned
  • Initial Approval / Final Comment Period
  • Ready For Implementation
    • implementation issue 1
  • Resolved
eladb changed the title from "Public asset publishing (custom resources, docker images)" to "Public assets (custom resources, docker images)" on Dec 10, 2019
eladb added the aws-construct-library and ops labels on Dec 10, 2019
MrArnoldPalmer added the status/proposed label on Jan 4, 2020

eladb commented Mar 1, 2020

Let's focus on S3 for Lambda bundles, because that is a pertinent use case we have for reducing the size of the mono-module. Specifically, the aws-s3-deployment module currently needs to bundle the entire AWS CLI because it leverages `s3 sync`. Another use case we already have is the aws-eks module, which currently uses an open-source AWS Lambda layer published as a SAR app that contains kubectl, helm and the AWS CLI.

The only value of using SAR here would be that it takes care of replication, but that's not a problem since S3 now supports cross-region replication.

So the design approach would be to create a public S3 bucket, with a well-known name and cross-region replication, in each region we support. We will store .zip files for AWS Lambda layers there, to be mounted into the custom resources of AWS constructs such as s3-deployment and EKS.
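For illustration, a construct could then mount such a layer straight from the well-known bucket. A rough sketch in CDK v1 style; the bucket name and key follow the hypothetical scheme above and were never finalized:

```ts
import * as lambda from '@aws-cdk/aws-lambda';
import * as s3 from '@aws-cdk/aws-s3';
import { App, Aws, Stack } from '@aws-cdk/core';

class LayerDemoStack extends Stack {
  constructor(scope: App, id: string) {
    super(scope, id);

    // Well-known, replicated bucket in the current region (name is hypothetical).
    const bucket = s3.Bucket.fromBucketName(this, 'CdkFiles', `aws-cdk-files-${Aws.REGION}`);

    // Mount the published .zip as a Lambda layer, keyed by CDK version.
    new lambda.LayerVersion(this, 'AwsCliLayer', {
      code: lambda.Code.fromBucket(bucket, 'v1.22.0/awscli-layer.zip'),
    });
  }
}

new LayerDemoStack(new App(), 'layer-demo');
```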

To achieve this, we will add another type of artifact to our build system (e.g. dist/s3 or something like that), and during publishing we will upload all the files from this directory to the replicated S3 bucket, with a key prefix that denotes the CDK version (e.g. s3://aws-cdk-files-eu-west-2/v1.22.0/awscli-layer.zip). We can also decide to just upload to all the regional buckets directly. That shouldn't be too problematic, and might even be better because it gives us synchronous confirmation that we are done.

Note that this is susceptible to an eventual-consistency issue with the publishing of the modules that reference these artifacts, so perhaps this has to be the first publishing step (i.e. a separate pipeline stage or run order).

The publisher will no-op if there is already a file in that location, which is aligned with how we publish (publishing only really happens when we actually bump the version, and published artifacts are immutable).
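A sketch of what that no-op publisher could look like, assuming the AWS SDK for JavaScript v3; the region list, bucket names and keys are illustrative:

```ts
import { S3Client, HeadObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFileSync } from 'fs';
import { basename } from 'path';

const REGIONS = ['us-east-1', 'eu-west-2']; // illustrative subset of supported regions
const VERSION = 'v1.22.0';

async function publishArtifact(file: string): Promise<void> {
  for (const region of REGIONS) {
    const s3 = new S3Client({ region });
    const Bucket = `aws-cdk-files-${region}`; // hypothetical well-known bucket name
    const Key = `${VERSION}/${basename(file)}`;
    try {
      // Published artifacts are immutable: if the object already exists, skip it.
      await s3.send(new HeadObjectCommand({ Bucket, Key }));
      continue;
    } catch {
      // Object not found; fall through and upload it.
    }
    await s3.send(new PutObjectCommand({ Bucket, Key, Body: readFileSync(file) }));
  }
}

publishArtifact('dist/s3/awscli-layer.zip').catch(console.error);
```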

eladb changed the title from "Public assets (custom resources, docker images)" to "Public artifacts (custom resources, docker images)" on Mar 1, 2020
eladb changed the title from "Public artifacts (custom resources, docker images)" to "Publish additional public artifacts (lambda layers for custom resources, docker images)" on Mar 1, 2020
eladb changed the title from "Publish additional public artifacts (lambda layers for custom resources, docker images)" to "Release public artifacts (lambda layers for custom resources, docker images)" on Mar 1, 2020

pahud commented Mar 3, 2020

Agree. And let's include cn-north-1 and cn-northwest-1 S3 buckets in the pipeline so we can roll out the artifacts into AWS China regions.


arhea commented May 11, 2020

Let's also include us-gov-west-1 and us-gov-east-1 to cover our GovCloud customers.


rix0rrr commented Jul 1, 2020

Exactly for those reasons (no S3 replication to non-commercial partitions, and the need to keep up with new region launches), it's going to be a lot of work keeping up with users in all regions.

We would be opting into a somewhat large operational burden, and I don't really see the advantages outweighing it.


eladb commented Jul 2, 2020

I think we don’t have to do this as part of 2.0, but I suspect that the size of monocdk will eventually be a motivating factor to implement this.

I agree it’s not trivial.

mergify bot pushed a commit to aws/aws-cdk that referenced this issue Dec 24, 2020
The EKS module uses the AWS CLI, `kubectl` and `helm` in order to interact with the Kubernetes cluster. These tools were consumed from a SAR app maintained by @pahud as an AWS Sample (see [repo](https://github.com/aws-samples/aws-lambda-layer-kubectl)).

This dependency on sample code introduces an operational and maintenance risk, and as part of productizing the EKS module we need to break it. The dependency on SAR is not required and adds a few unnecessary layers (a nested stack, SAR regional availability, etc.).

To that end, this change bundles the AWS CLI and the Kubernetes tools (`kubectl` and `helm`) into the AWS CDK. These layers are maintained in two new CDK modules called `@aws-cdk/lambda-layer-awscli` and `@aws-cdk/lambda-layer-kubectl` respectively. These are normal CDK modules that export a `lambda.LayerVersion` resource that can be mounted to any AWS Lambda function.
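For example, here is a quick sketch of the consumer side, using the module names from this commit (the handler code and runtime are placeholders):

```ts
import * as lambda from '@aws-cdk/aws-lambda';
import { AwsCliLayer } from '@aws-cdk/lambda-layer-awscli';
import { KubectlLayer } from '@aws-cdk/lambda-layer-kubectl';
import { App, Stack } from '@aws-cdk/core';

class ProviderStack extends Stack {
  constructor(scope: App, id: string) {
    super(scope, id);

    // Any Lambda function can mount the layers exported by the new modules.
    new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.PYTHON_3_7,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('./handler'), // placeholder handler directory
      layers: [new AwsCliLayer(this, 'AwsCliLayer'), new KubectlLayer(this, 'KubectlLayer')],
    });
  }
}

new ProviderStack(new App(), 'provider');
```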

Since the s3-deployment module also needs the AWS CLI (and bundles it), we now reuse the AWS CLI layer in there as well.

Module sizes:
- lambda-layer-awscli: 10MiB
- lambda-layer-kubectl: 24MiB

This change increases the total module size of the MonoCDK by 24MiB (10MiB are reused with s3-deployment, which was already bundled). In the future we are planning to remove these bundles from the library and publish them externally so they can be consumed at deploy time, but this is out of scope for this PR (see aws/aws-cdk-rfcs#39).

Resolves #11874

BREAKING CHANGE: the `@aws-cdk/eks.KubectlLayer` layer class has been moved to `@aws-cdk/lambda-layer-kubectl.KubectlLayer`.
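In code, the migration is a one-line import change (the eks package is published as `@aws-cdk/aws-eks`):

```ts
// Before:
import { KubectlLayer } from '@aws-cdk/aws-eks';
// After:
import { KubectlLayer } from '@aws-cdk/lambda-layer-kubectl';
```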

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
flochaz pushed a commit to flochaz/aws-cdk that referenced this issue Jan 5, 2021
eladb pushed a commit to cdklabs/decdk that referenced this issue Jan 18, 2022
mrgrain added the status/done label and removed the status/proposed label on Oct 27, 2023
mrgrain closed this as completed on Oct 27, 2023