TimeoutError: Connection timed out after 120000ms #611
I think this may be due to an AWS Lambda upload timeout. I don't think you're hitting the Lambda 50 MB limit, as that should produce a different error message. I believe you should be able to set the aws-sdk client timeout.
@dphang Thanks, I will try it and update. I don't understand why the files are so big; my project is a standard project. Why does AWS have this limitation anyway? Facing too many issues for a simple deploy :(
What's the output when you run serverless in debug mode?
Do you have a .env file with your AWS credentials set up?
I have already fixed the credentials issue; the deployment process passed that phase. I set the right environment variables. Now trying with debug mode.
Still got the same error... even after running it again in debug mode.
Ah, sorry about that, I've been using regular Serverless too much as well, so I confused the two. Since this is a Serverless component, we call the aws-sdk directly, so I think we could set the aws-sdk client timeout in a similar way as serverless/serverless#937. (We should make the change in this component, but it would need a PR.)
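For illustration, a minimal sketch of the kind of change I mean, assuming aws-sdk v2 (whose default socket timeout is the 120000 ms in the error above); the specific values here are just examples:

// Sketch only: raise the aws-sdk HTTP timeout on the Lambda client.
// aws-sdk v2 defaults to a 120000 ms socket timeout, matching the error.
const AWS = require("aws-sdk");

const lambda = new AWS.Lambda({
  region: "us-east-1",
  httpOptions: {
    connectTimeout: 5000, // time allowed to establish the connection
    timeout: 600000,      // socket timeout: 10 minutes instead of 2
  },
});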
Besides this, I don't think it should normally take more than 120 seconds to upload the Lambda. Do you have a slow network connection? An api-lambda folder size of 189 MB is probably close to 50 MB zipped, which is 400 Mbits; to upload that in 120 seconds you need at least 3.33 Mbit/s of upload speed. That would be fine in CI/CD, but on a home connection I understand it may not be. Alternatively, try to reduce any large dependencies you may have, to shrink the API Lambda size.
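To spell out that back-of-the-envelope calculation (the ~50 MB zipped size is an estimate):

// 50 MB zipped ≈ 400 Mbits; uploading within the 120 s timeout
// needs at least 400 / 120 ≈ 3.33 Mbit/s of sustained upload speed.
const zipMegabytes = 50;
const timeoutSeconds = 120;
const requiredMbps = (zipMegabytes * 8) / timeoutSeconds;
console.log(`~${requiredMbps.toFixed(2)} Mbit/s minimum upload speed`);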
I have a fast internet connection. I really don't understand; it seems like a common use case to me. Am I the only one who uploads a Next.js project to AWS Lambda? Something doesn't make any sense. Should I switch to Google Cloud Functions?
I see, thanks for clarifying. If you have a fast connection, then I really don't understand why it should take more than 120 seconds either. Yes, uploading to AWS Lambda is what this component does, and it is a common use case. What I meant is that I've not seen a case where it took more than 120 seconds to upload. As for other options to help debug (besides what I've given above): (1) you could try uploading your zipped API Lambda another way, e.g. manually via S3, to rule out a problem with the zip itself.

In the meantime, I'll see about creating a PR to allow users to set the aws-sdk client timeout. Also, serverless-next.js currently just targets AWS, and I don't have experience with Google Cloud, so unfortunately I can't help much with that...
@oran1248 Yeah, for (1) that was just to see if your zip and API Lambda work when uploaded another way, e.g. using S3. And 69 seconds seems OK to me (assuming it had to create a new CloudFront distribution?). Another small thing I thought of: maybe even if your connection is fast, you are quite far from the us-east-1 region this deploys to, which could add latency. Sorry for all the trouble, but at least we now know of some places to improve upon.

PS: I also noticed you are using 1.15.1; it might be worth upgrading to 1.17 for the latest fixes and features. Though I don't think the Lambda upload code was modified there, so it probably won't fix this issue.
The lambdas should all be in the .serverless_nextjs build directory.
When you create a new app deployment (e.g. for the first time), and also if you did not sync the .serverless state folder, it will create a new CloudFront distribution.
It's pretty close considering 50 MB is the limit. But it can happen if you have a bunch of API routes and dependencies, as the normal target will bundle all dependencies into each route.
Yes, Vercel is obviously a great choice; they made Next.js after all, so they have all the features and optimizations. I'm pretty sure they also use vanilla AWS Lambda instead of Lambda@Edge for their SSR pages / API routes. It has a simpler UX and is well integrated into GitHub, Bitbucket, etc. However, do note their limits; if you need more than the limits, it can cost you (e.g. for a team/business it's $20/month/user).

Personally, I found that Vercel's page performance on cold start is worse; I think it's because they use regular Lambdas rather than Lambda@Edge. The advantage of deploying into your own AWS account is that you can integrate with the rest of your AWS infrastructure.
Let's say I'm also using S3 to store users' files on my site. How is this related to the fact that I'm deploying with serverless-next.js?
So I need to zip the Lambda folder myself and upload it to S3?
I don't know if creating my own script is a good idea; I'm guessing you are doing a lot of magic inside the component.
So the problem is that the timeout is hardcoded? If I wanted to fix it in the code myself, where would I start?
Well, that's probably not directly related. I mean things like being able to share ACM certificates with other services like your API, using Web Application Firewall, etc.
See here: https://aws.amazon.com/about-aws/whats-new/2015/05/aws-lambda-supports-uploading-code-from-s3/. I think you still need to create the Lambda function, but you can specify where the ZIP comes from, either uploaded directly or from an S3 bucket. This was just a suggestion to rule out whether it was a problem with your ZIP, e.g. something like the sketch below.
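A rough sketch with aws-sdk v2 (the bucket, key, and function names are placeholders):

// Rough sketch: point an existing Lambda at a zip already uploaded to S3,
// bypassing the direct upload path. Names below are placeholders.
const AWS = require("aws-sdk");
const lambda = new AWS.Lambda({ region: "us-east-1" });

lambda
  .updateFunctionCode({
    FunctionName: "my-next-api-lambda", // placeholder
    S3Bucket: "my-deploy-bucket",       // placeholder
    S3Key: "api-lambda.zip",            // placeholder
  })
  .promise()
  .then(() => console.log("Updated function code from S3"))
  .catch(console.error);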
Sure, there are some complex parts in this code, but the problem here seemed isolated to the Lambda creation/upload itself. So the idea was to upload the zip via the AWS console or S3 just to see whether it works that way.
Yea, you would need to look at CONTRIBUTING.md. But it may take some time to understand the code, so it is up to you.
Yup, and I want to help too. I was providing some suggestions to help you debug, but it will take a bit of elbow grease. I think allowing an increased aws-sdk timeout to be set would be a good improvement.

Also, one other thing you can try is the serverless-trace target, which keeps one shared set of dependencies instead of bundling all of them into each page/route, so it should shrink your bundles.
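Something like this in serverless.yml should turn it on (a sketch from memory; double-check the input name against the README):

# serverless.yml (sketch; input name from memory)
myNextApp:
  component: serverless-next.js
  inputs:
    useServerlessTraceTarget: true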
I will give it a try.
It worked! So it is the bundle size?
I think this is my best option for now.
Glad it worked. It may be the bundle size, but I thought the Lambda upload would usually give a failure message saying it is too big. Perhaps it's a bug with the Lambda upload endpoint? Maybe we can add a check for the zip size and a warning (or even fail the build if we know the zip is over the 50 MB limit).
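Something along these lines, roughly (the path handling is illustrative):

// Sketch of a pre-upload check: warn or fail if the zipped bundle
// exceeds AWS's 50 MB direct-upload limit for Lambda.
const fs = require("fs");

const LAMBDA_ZIP_LIMIT_BYTES = 50 * 1024 * 1024;

function checkZipSize(zipPath) {
  const sizeBytes = fs.statSync(zipPath).size;
  const sizeMB = (sizeBytes / 1024 / 1024).toFixed(1);
  if (sizeBytes > LAMBDA_ZIP_LIMIT_BYTES) {
    throw new Error(`${zipPath} is ${sizeMB} MB, over the 50 MB Lambda limit`);
  }
  console.log(`${zipPath}: ${sizeMB} MB, within the limit`);
}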
That would be great. Now that we know it's the zip size: is there no option at all to upload a zip file that is more than 50 MB? That sounds strange, because as I said, I'm sure there are projects larger than mine.
Yup, if you search the issues, a few people had similar issues, for example #141 (comment). Currently the API and the pages are each their own Lambda@Edge function, and each has a 50 MB limit per AWS. So the serverless-trace target support was added, which reduces code size by maintaining one set of dependencies (instead of bundling all dependencies into each page/route). But there were some caveats there, like potentially slower performance in my experience.

There was also some thought around using multiple cache behaviors so there is a Lambda for each route, but there are AWS limitations, e.g. a max of 25 behaviors unless you ask AWS for a quota increase, and it also adds complexity to this component. Vercel solves this since they have their own custom CDN/routing layer, and I believe they split Lambdas (they seem to use regular Lambdas, not Lambda@Edge). Since this component is based on CloudFront/Lambda@Edge, there are more limitations inherent to AWS.

Anyway, thanks for the good discussion - it gave me a few ideas on improving the documentation and/or code for a better developer experience.
@dphang
Yup, no worries. For completeness' sake, I forgot to mention you can also minimize your Next.js build outputs using something like this in next.config.js (it uses terser-webpack-plugin, and you enable it by setting NEXT_MINIMIZE=true in your build environment):

// next.config.js
const TerserPlugin = require("terser-webpack-plugin");

module.exports = {
  webpack: (config, { buildId, dev, isServer, defaultLoaders, webpack }) => {
    // Only minimize server-side production builds, and only when opted in
    if (isServer && !dev && process.env.NEXT_MINIMIZE === "true") {
      config.optimization = {
        minimize: true,
        minimizer: [
          new TerserPlugin({
            parallel: true,
            cache: true,
            terserOptions: {
              output: { comments: false },
              mangle: true,
              compress: true,
            },
            extractComments: false,
          }),
        ],
      };
    }
    return config;
  },
};

I'll close the issue for now and work on improving the docs/code in the near future.
Describe the bug
When I run the serverless command, after something like ~6 minutes, I get the following error: TimeoutError: Connection timed out after 120000ms
The api-lambda folder size is 189 MB; the default-lambda folder size is 100 MB.

To Reproduce
I just run serverless.

Expected behavior
Deployed successfully.