aws-s3-deployment timeout on larger dataset #4058
Comments
I just ran into this issue as well, in TypeScript. The s3 sync command tries to get the asset from the staging bucket, but if the asset is larger than the deployment Lambda's memorySize of 128 MB, the Lambda hits the memory limit and the s3 sync command stalls. I used TypeScript code along the lines of the sketch below to increase the memory size. It would be nice to at least be able to properly set memorySize for the BucketDeployment's CustomResourceHandler, or better yet, set memorySize based on the size of the asset.
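The commenter's original snippet was not preserved in this capture. A minimal sketch of one possible escape-hatch workaround from before the fix, assuming the handler's construct-id prefix shown below (it may differ across CDK versions) and a `bucket` defined elsewhere in the stack:

```ts
import * as cdk from '@aws-cdk/core';
import * as s3deploy from '@aws-cdk/aws-s3-deployment';

new s3deploy.BucketDeployment(this, 'DeployAssets', {
  sources: [s3deploy.Source.asset('./assets')], // hypothetical asset path
  destinationBucket: bucket,                    // hypothetical bucket
});

// The deployment handler is a singleton Lambda created directly on the
// stack; locate it by construct id (an assumption) and raise MemorySize
// on the underlying CloudFormation resource.
const handler = cdk.Stack.of(this).node.children
  .find(c => c.node.id.startsWith('Custom::CDKBucketDeployment'));
if (handler) {
  const cfnFunction = handler.node.defaultChild as cdk.CfnResource;
  cfnFunction.addPropertyOverride('MemorySize', 512); // up from the 128 MB default
}
```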
I think this could be more simply remedied by buffering this process (see aws-cdk/packages/aws-cdk/lib/assets.ts, lines 123 to 131 at commit e1a5739).
Thoughts?
Yes, at a minimum we should allow you to specify the Lambda memory limit. @nmussy, not sure I understand the buffering idea.
The contents of the files are wholly loaded into memory and uploaded to S3. Instead of increasing the RAM to whatever the asset size will be (plus some overhead), we could do a multipart S3 upload, loading the file pieces into memory as needed.
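A minimal sketch of the buffering idea, not the actual aws-cdk implementation: in the AWS SDK for JavaScript v2 (which the CDK used at the time), `s3.upload()` performs a managed multipart upload and accepts a readable stream, so only a bounded number of parts is buffered at once. Bucket, key, and file path are placeholders:

```ts
import * as fs from 'fs';
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

async function uploadAsset(bucket: string, key: string, filePath: string) {
  // Stream the asset from disk; with partSize 8 MiB and queueSize 2,
  // at most ~16 MiB is held in memory regardless of asset size.
  await s3.upload(
    { Bucket: bucket, Key: key, Body: fs.createReadStream(filePath) },
    { partSize: 8 * 1024 * 1024, queueSize: 2 },
  ).promise();
}
```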
When deploying large files, users may need to increase the resource handler's memory configuration. Note: since custom resource handlers are singletons, we need to provision a handler for each memory configuration defined in the app. We do this by adding a suffix containing the memory limit to the uuid of the singleton resource. Fixes #4058
Implementation of memory limit configuration: #4204
feat(s3-deployment): allow specifying memory limit (#4204)
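With #4204 merged, the memory limit can be set directly on the construct via the `memoryLimit` prop; the bucket and asset path below are placeholders:

```ts
import * as s3deploy from '@aws-cdk/aws-s3-deployment';

new s3deploy.BucketDeployment(this, 'DeployLargeAssets', {
  sources: [s3deploy.Source.asset('./large-assets')], // hypothetical path
  destinationBucket: bucket,                          // hypothetical bucket
  memoryLimit: 1024, // MB; provisions a dedicated singleton handler for this size
});
```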
We've also had this, and the zip file of our static React site going to S3 is only 138 MB.
Using CDK to deploy the question/solution images to S3 was causing deployment memory to be exceeded (aws/aws-cdk#4058). This commit uses the AWS CLI from GitHub Actions instead for syncing static assets.
🐛 Bug Report
What is the problem?
Deployment to an S3 bucket with the s3-deployment module times out on a moderately sized dataset containing 6 zip files with a combined size of 271 MB. Error messages differ; I got these:
Failed to create resource. Command '['python3', '/var/task/aws', 's3', 'cp', 's3://cdktoolkit-stagingbucket-mhglm1j9x6gh/assets/[...]f5.zip', '/tmp/tmp8p62yjtl/archive.zip']' died with <Signals.SIGKILL: 9>
and:
Custom Resource failed to stabilize in expected time
Syncing the same contents with the AWS CLI works in about 4 minutes.
Running the same code as quoted below but with small test files, everything works and the files appear in the destination bucket.
I did not see any timeout or data-size warnings in the docs, other than the 15-minute Lambda execution limit, but I am not sure that applies here.
Reproduction Steps
I used the Python code below with AWS CDK 1.8.0 (build 5244f97).
Verbose Log
See error messages above.
Environment
Other information
n/a