"terraform apply" uses 3 GB memory and 50% of my PC for uploading 18 small lambdas #9364
Comments
Hi @Dreijnde, any more details about your environment would generally help in reproducing and eventually fixing this.
@Dreijnde hi there! I am sorry that you are having issues. Are you uploading these lambdas as ZIP (or any archive) files?
I have run the same project as @Dreijnde on OSX and got the same problem. It seems to have something to do with the Lambdas, because even when all the infra and lambdas are already up, as soon as I change only the lambdas this is the Terraform behaviour. The lambdas are 'jar' files. I looked at their size and they are 10 MB or less, so even if all 18 lambdas were held in memory it should not add up to 3 GB. I believe a Terraform file with only 18 lambdas should reproduce this problem; I will see if I can test that statement.
I tested my theory and it is the lambdas. Here is an example Terraform configuration you can use to reproduce the problem. I included only 3 lambdas as an example, but just add more of the same to increase the number. There are actually two problems. First, when creating 18 lambdas from scratch the timeout is too low, so you cannot create them all at once; I needed to keep adding 2 or 3 lambdas to the Terraform file at a time to finally end up with 18. Second, when changing all 18 lambdas (changing the test.jar), the issue described in this thread happens. I am doing this on the latest OSX with Terraform v0.7.4.
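A minimal configuration along these lines should reproduce it. This is a sketch only: the resource names, handler, role, and the use of Terraform 0.7-era interpolation syntax and `source_code_hash` are assumptions, not the reporter's exact file. Duplicate the function block to reach 18 lambdas.

```hcl
# Hypothetical reproduction config (Terraform 0.7 syntax assumed).
resource "aws_iam_role" "lambda_role" {
  name = "repro-lambda-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
}

# Copy this block (repro_2, repro_3, ...) to raise the lambda count.
resource "aws_lambda_function" "repro_1" {
  filename         = "test.jar"
  function_name    = "repro-1"
  role             = "${aws_iam_role.lambda_role.arn}"
  handler          = "example.Handler"
  runtime          = "java8"
  source_code_hash = "${base64sha256(file("test.jar"))}"
}
```

Changing `test.jar` then forces every function's code to re-upload on the next apply, which is the case where the memory spike shows up.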
I took a look at the AWS SDK, and currently the AWS SDK requires that lambda function contents be sent as an in-memory byte buffer. I then looked at the AWS API itself, and even the Lambda API expects a JSON object with base64-encoded file contents. So it isn't possible to stream to this endpoint at all... well, not without some really special sauce. I think what we need to do here in the AWS resource is one of those sad global semaphores, so that we only upload 1 lambda with a zip file at a time. I don't think we need to get fancy with resource tracking or anything: just serialize the lambda code updates (when a zip file is present).
Hello @Dreijnde and @arminc – sorry to see the trouble here. I have a question: could you please verify that limiting the parallelism helps alleviate the memory issue here? If you could, please try running the apply with the `-parallelism=1` flag.
I'm fairly certain this will help, and perhaps it can work as a workaround for now before I introduce the semaphore. Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
"terraform apply" uses 3 GB memory and 50% of my PC for uploading 18 small lambdas.

See screenshot.
Is this a memory leak?
I am running 64-bit Windows.