Deployment crashing while zipping assets #3145
Can you give more information about the content of the folder you are zipping?
It is a Gatsby application with some dependent subprojects, so there are: […]
What about the size and number of files? Is it failing for a specific Lambda function or for all functions?
25823 files. Total size of the directory is 239M, with 236M being node_modules.
@eladb agree. @PygmalionPolymorph it would be nice if you could share your code (at least what's generating the folder content) so that we can reproduce. Can you also confirm that the issue is specific to this Lambda function's asset?
I have another Lambda@Edge inside the same stack which is being deployed fine, so yes, it is specific to this asset. I don't know whether this occurs on other Lambdas as well though, as I only have this stack.
Can you at least share the […]?
Sure, here they are: […]
And […]
I am not 100% comfortable that we need to load the entire directory into memory...
Not sure it's the case, […]
I've had a colleague try the same deployment, and it worked on his machine. I'm not quite sure how to continue debugging this.
@PygmalionPolymorph can you add a […] (before L#26 in https://unpkg.com/aws-cdk@0.36.0/lib/archive.js)? Is it then crashing after all files have been appended?
Yeah, I'll investigate where it crashes exactly...
Tried this: […]
And I can see […]
The zipping is async and happens after the sync call to […]. Can you console log in the […]?
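A minimal runnable sketch of the boundary being described here, assuming only the `archiver` package used throughout this thread (the output path and entry are placeholders):

```js
const archiver = require('archiver');
const fs = require('fs');

const output = fs.createWriteStream('/tmp/debug.zip');
const archive = archiver('zip');
archive.pipe(output);
archive.append(Buffer.from('hello'), { name: 'hello.txt' });

// finalize() returns immediately; the actual zipping continues asynchronously,
// so a log right after it only proves that the entries were queued...
archive.finalize();
console.log('finalize() returned, zipping continues in the background');

// ...while the 'close' event on the output stream fires only once the zip
// has been fully written to disk.
output.once('close', () => console.log('zip fully written'));
```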
Strangely, if I change the […]
Oh, right...
I did, but neither of those two was called.
No, the process ends before that.
Can you add this in […]:

```js
output.on('error', (err) => console.log(err));
output.on('warning', (err) => console.log(err));
```
Neither of them gets called.
Node version?
v10.16.0
What are the differences with your colleague's environment/machine?
I just tried with Node 12, but I get the same error.
He is also running Arch Linux, and I suppose the same Node version. I'll ask him for his machine specs tomorrow. It also runs fine on our GitLab CI server running Alpine, as well as on the macOS machines of two other colleagues.
I put an intermediate step into our deployment which removes all symlinks prior to zipping the Lambda, but that didn't seem to solve the problem, unfortunately. I just confirmed, however, that the zipping is the problem, by feeding a path to a previously built zip file into the Lambda construct. That went through and the deployment worked fine. As no one else seems to have this problem, I think I will just zip the Lambda myself beforehand.
I got the same issue recently. I tried to debug it and it seems that the issue is related to the […]. What I found out: […] Update: […]
I wonder if there is an issue where […]
@artyom-melnikov can you provide code to reproduce the issue?
@jogold I can provide you with the content of the folder on which I can always reproduce this issue (this is node_modules for my project; I also verified that the issue remains after I unzip the content): https://drive.google.com/open?id=1LPF5cJX9jZAEPscgp4vJlytJTq0dmZVo

As for the code, I used this one (based on […]):

```js
zipDirectory('/path/to/layer', '/tmp/1.zip')
  .then(() => console.log('Done'));
```

The 'Done' is never printed for that folder; it works fine with other […]. Some more observations: […]
@artyom-melnikov cannot reproduce with your zip on WSL using Ubuntu...
@jogold seems like a platform-related issue then... I can try to debug it on my machine if you want, but for that I need some pointers; I'm not a Node.js expert.
@artyom-melnikov can you try playing with these: aws-cdk/packages/aws-cdk/lib/archive.ts, lines 12 to 17 in cd0e9bd
@jogold I tried to change those but it didn't result in anything good. I also tried to pin […]

I have a bold guess:

```js
files.forEach(file => { // Append files serially to ensure file order
  archive.append(fs.createReadStream(path.join(directory, file)), {
    name: file,
    date: new Date('1980-01-01T00:00:00.000Z'), // reset dates to get the same hash for the same content
  });
});
```

Shouldn't such code result in all files being opened simultaneously? I feel like it can be related to the […]
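A small standalone sketch of that guess (my addition, not from the thread): a Node `ReadStream` opens its file descriptor asynchronously right after construction, so queuing one stream per file up front can exhaust the OS open-file limit on large directories and surface as an `EMFILE` error:

```js
const fs = require('fs');

// One read stream per "file", created up front as in the forEach above.
// With a typical `ulimit -n` of 1024, this hits EMFILE long before the
// 20000 streams are ever consumed.
const streams = [];
for (let i = 0; i < 20000; i++) {
  const s = fs.createReadStream(__filename);
  s.on('error', (err) => {
    console.error(`stream ${i} failed: ${err.code}`); // typically EMFILE
    process.exit(1);
  });
  streams.push(s);
}
```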
@jogold I created a very crude solution that sends files in batches, appending each batch as soon as the previous one has been processed:

```js
const archiver = require('archiver');
const fs = require('fs');
const glob = require('glob');
const path = require('path');

function zipDirectory(directory, outputFile) {
  return new Promise((ok, fail) => {
    // The below options are needed to support following symlinks when building zip files:
    // - nodir: This will prevent symlinks themselves from being copied into the zip.
    // - follow: This will follow symlinks and copy the files within.
    const globOptions = {
      dot: true,
      nodir: true,
      follow: true,
      cwd: directory,
    };
    const files = glob.sync('**', globOptions); // The output here is already sorted

    const output = fs.createWriteStream(outputFile);
    const archive = archiver('zip');
    archive.on('warning', fail);
    archive.on('error', fail);

    let i = 0;
    archive.on('progress', (progress) => {
      if (progress.entries.total === progress.entries.processed) {
        // Previous batch has been processed, add next batch
        if (i < files.length) {
          let j = i;
          for (; j < Math.min(files.length, i + 1000); j++) {
            appendFile(archive, directory, files[j]);
          }
          i = j;
        } else {
          // Everything done, finalize
          archive.finalize();
        }
      }
    });

    archive.pipe(output);

    if (files.length > 0) {
      // Send first file to start processing
      appendFile(archive, directory, files[i++]);
    } else {
      archive.finalize();
    }

    output.once('close', () => ok());
  });
}

function appendFile(archive, directory, file) {
  archive.append(fs.createReadStream(path.join(directory, file)), {
    name: file,
    date: new Date('1980-01-01T00:00:00.000Z'), // reset dates to get the same hash for the same content
  });
}
```

This solution works for me, processing everything and resolving the promise properly. Please consider using a similar approach.
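(If the `EMFILE` guess above is right, this works because the batching caps the number of pending read streams at roughly 1000 at a time, staying under typical `ulimit -n` defaults.)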
Can you try with:

```js
files.forEach(file => { // Append files serially to ensure file order
  archive.file(path.join(directory, file), {
    name: file,
    date: new Date('1980-01-01T00:00:00.000Z'), // reset dates to get the same hash for the same content
  });
});
```
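(A plausible reason this variant behaves differently: `archive.file()` takes a path rather than an already-open stream, so archiver can defer opening each file until its entry is actually processed, keeping only a few descriptors open at once. Note, though, that the commit message quoted below found that appending by file path does not preserve entry order, which is why the final fix appends buffers instead.)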
@jogold yes, this solution also works.
@aereal @artyom-melnikov can you try this: jogold@d0464b2?
@jogold that one didn't work for me.
The last one? master...jogold:fix-zip
@jogold master...jogold:fix-zip works for me!
@PygmalionPolymorph can you give it a try?
@artyom-melnikov and this one: master...jogold:fix-zip-stream?
@jogold no, master...jogold:fix-zip-stream is not working.
@artyom-melnikov the one from this commit: jogold@1513039?
@jogold yes, the one which uses graceful-fs.
@artyom-melnikov ok, reverting to something much simpler. Can you try this last one: jogold@d34e606? Will then open a PR.
@jogold the last solution jogold@d34e606 works for me!
To preserve file order using `archiver`, files must be appended serially, either using a stream or a buffer (appending by file path does not preserve order even when done serially). Appending using a buffer seems to be the only way to solve `EMFILE` errors. Call `fs.stat` before appending to preserve the mode. Closes #3145, Closes #3344, Closes #3413
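A minimal sketch of the fix that commit message describes (my reconstruction, not the actual PR code; the `archive`, `directory`, and `files` names follow the earlier snippets): read each file into a buffer, `fs.stat` it to preserve the mode, and append serially:

```js
const fs = require('fs');
const path = require('path');

async function appendAll(archive, directory, files) {
  for (const file of files) { // serial appends preserve entry order
    const fullPath = path.join(directory, file);
    const [data, stat] = await Promise.all([
      fs.promises.readFile(fullPath), // buffer: the fd closes when the read finishes, avoiding EMFILE
      fs.promises.stat(fullPath),     // preserve the file mode in the zip entry
    ]);
    archive.append(data, {
      name: file,
      date: new Date('1980-01-01T00:00:00.000Z'), // reset dates to get the same hash for the same content
      mode: stat.mode,
    });
  }
}
```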
I'm submitting a ...
What is the current behavior?
After upgrading to CDK 0.36.0, deployment fails without an error message while creating an asset zip file for my Lambda. Running deploy in verbose mode, I can see the last debug message printed:

```
Preparing zip asset from directory: cdk.out/asset.357a0d25d1...
```
Please tell us about your environment:
Other information
After looking through the corresponding source code (here), it seems like the process is crashing completely, as the `finally` statement (removing the staging directory in `/tmp`) isn't run; the directory still exists. I've made sure there is enough disk space in `/tmp` and that it is writable.
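A hedged reconstruction of the pattern the reporter describes (illustrative only; `stageAsset` and the staging-directory name are assumptions, not the actual CDK source): cleanup in `finally` runs on normal completion and on thrown errors, but not if the process dies abruptly mid-zip, which matches the leftover `/tmp` directory:

```js
const fs = require('fs');
const os = require('os');
const path = require('path');

// `zipFn` stands in for the zipDirectory function shown earlier in this thread.
async function stageAsset(sourceDir, zipFn) {
  const stagingDir = fs.mkdtempSync(path.join(os.tmpdir(), 'cdk-asset-'));
  try {
    await zipFn(sourceDir, path.join(stagingDir, 'asset.zip'));
  } finally {
    // Skipped on a hard crash, leaving the staging directory behind in /tmp.
    fs.rmdirSync(stagingDir, { recursive: true });
  }
}
```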