Bucket deployments #936

Closed
eladb opened this issue Oct 16, 2018 · 5 comments


eladb commented Oct 16, 2018

There are use cases where it would be useful to be able to populate an S3 bucket with contents as part of a stack deployment. The most notable example is deploying the contents of a bucket for a static website.

The low-level building block to enable such use cases is a BucketDeployment construct, which populates a destination bucket from a .zip file stored in another bucket. A ZipDirectoryAsset could then be used to create the zip file in the assets bucket.

const asset = new assets.ZipDirectoryAsset(this, 'WebsiteFiles', {
  path: 'dist/'
});

new s3.BucketDeployment(this, 'DeployWebSite', {
  sourceBucketName: asset.s3BucketName,
  sourceObjectKey: asset.s3ObjectKey,
  destinationBucket: destBucket,
  destinationObjectPrefix: 'mywebsite/'
});

The nice thing about this approach is that if the contents of the local directory change, a new asset will be uploaded under a new S3 key, so the bucket deployment resource will be updated.
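For illustration, the content-addressed key could be derived along these lines (a hypothetical sketch, not the actual assets implementation; `assetKey` is an invented name):

```typescript
import { createHash } from 'crypto';

// Sketch: derive the S3 object key from a SHA-256 hash of the zip contents.
// Any change to the local directory produces a different zip, hence a
// different key, which in turn updates the BucketDeployment resource.
function assetKey(zipContents: Buffer): string {
  const hash = createHash('sha256').update(zipContents).digest('hex');
  return `assets/${hash}.zip`;
}
```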

This construct will install a custom resource which will, upon create/update:

  1. Download website.zip from the source bucket.
  2. Extract the zip file.
  3. Sync (upload/delete) the destination bucket.

Upon delete:

  1. Clear the destination bucket (I guess?). We can probably make this configurable.
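The sync in step 3 boils down to a set difference between the files extracted from the zip and the objects already in the destination. A minimal sketch of that planning step (hypothetical names, no AWS SDK calls):

```typescript
interface SyncPlan {
  upload: string[];  // keys to (re)upload from the extracted zip
  remove: string[];  // keys present in the destination but not in the source
}

// Sketch of sync planning: upload everything extracted from the zip, and
// delete any destination object that no longer exists in the source.
function planSync(sourceKeys: string[], destKeys: string[]): SyncPlan {
  const src = new Set(sourceKeys);
  return {
    upload: [...sourceKeys],
    remove: destKeys.filter(k => !src.has(k)),
  };
}
```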

Requirements

  • Sync, and not just upload: if a file is deleted from the source, it should also be deleted from the destination.
  • Support large files; Lambda's 15-minute timeout should suffice, but we should make sure to document this limitation.
  • Use assets, so the CI/CD story works seamlessly.
  • Upload to a prefix based on the source hash for atomic updates.
  • Support both a zipped source and a "flattened" source (in which case we basically replicate the source to the destination).

rix0rrr commented Oct 16, 2018

  • Upload to a prefix for atomic updates
  • How about S3 copying instead of downloading/unzipping?


eladb commented Oct 16, 2018

@rix0rrr added


mindstorms6 commented Oct 16, 2018

In our copier, we have historically supported two modes: "SUBFOLDER" and "ROOT".

In subfolder mode, each new copy job was a new S3 prefix (subfolder), and we used that to ensure you'd only get the assets from the current copy (without having to delete/clean up).
In root mode, we'd just copy new contents over to the root of the destination bucket and leave everything else (no cleanup).

We made these modes because often, if you're serving a static website for example, you may not want to actually clean up your old assets, so they can still be served. Most SPA frameworks emit hashed filenames like app.js.45c4aa, and it's your index.html/manifest that points to the new assets, so old browsers that haven't reloaded don't break.

We exposed these modes to the customer, to let them figure out what the right thing is for their use case.
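The two modes can be sketched as a simple key-mapping rule (illustrative only; `destinationKey` and the mode type are invented names, not from the copier itself):

```typescript
type CopyMode = 'SUBFOLDER' | 'ROOT';

// SUBFOLDER: each copy job writes under a fresh prefix, so previous
// deployments remain intact and servable. ROOT: files overwrite the
// bucket root and stale objects are left in place.
function destinationKey(mode: CopyMode, deploymentId: string, file: string): string {
  return mode === 'SUBFOLDER' ? `${deploymentId}/${file}` : file;
}
```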


eladb commented Oct 16, 2018

@mindstorms6 how did you determine the prefix for the SUBFOLDER mode? Did you return the actual key as an attribute so it could then be used, e.g., to tell CloudFront where to serve the new content from?

@mindstorms6

Exactly that - in subfolder mode, we'd either generate (or in some internal use cases, reuse) a UUID that was then returned as 'OriginPath' from the resource; then we'd update the CloudFront origin.
