Description
Describe the bug
```ts
import { S3 } from '@aws-sdk/client-s3';
import { Handler, Context, S3Event } from 'aws-lambda';

const s3 = new S3({});

export const handler: Handler = async (event: S3Event, context: Context) => {
  await s3.getObject({
    Bucket: event.Records[0].s3.bucket.name,
    Key: event.Records[0].s3.object.key,
  });
};
```
We have this very basic Lambda function that reads a file from S3 whenever a new file is uploaded (we also consume the Body stream; left out for brevity, but sketched after the stack trace below). The function is invoked intermittently, meaning that sometimes we get a fresh (cold) Lambda container and sometimes the container is reused. When the container is reused, we sometimes see an ECONNRESET exception such as this one:
```
2020-05-20T16:50:28.107Z d7a43394-afad-4267-a4a4-5ad3633a1db8 ERROR Error: socket hang up
    at connResetException (internal/errors.js:608:14)
    at TLSSocket.socketOnEnd (_http_client.js:460:23)
    at TLSSocket.emit (events.js:322:22)
    at endReadableNT (_stream_readable.js:1187:12)
    at processTicksAndRejections (internal/process/task_queues.js:84:21) {
  code: 'ECONNRESET',
  '$metadata': { retries: 0, totalRetryDelay: 0 }
}
```
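For reference, the Body consumption we omitted above looks roughly like the sketch below; `streamToBuffer` is an illustrative helper, not our exact code, and assumes the Node.js `Readable` flavor of `Body`:

```ts
import { Readable } from 'stream';

// Illustrative helper (not our exact code): drain a Readable into a Buffer.
async function streamToBuffer(stream: Readable): Promise<Buffer> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}

// Inside the handler:
// const { Body } = await s3.getObject({ Bucket, Key });
// const data = await streamToBuffer(Body as Readable);
```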
I'm pretty confident that this is due to the keep-alive nature of the HTTPS connection. Lambda processes are frozen after they finish executing, and their host seems to terminate open sockets after ~10 minutes. The next time the S3 client tries to reuse the socket, the exception above is thrown.
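If that's the cause, one workaround would be to opt out of socket reuse entirely, trading the error for an extra TLS handshake per request. A sketch, assuming `NodeHttpHandler` from `@aws-sdk/node-http-handler` accepts a custom `httpsAgent`:

```ts
import { S3 } from '@aws-sdk/client-s3';
import { NodeHttpHandler } from '@aws-sdk/node-http-handler';
import { Agent } from 'https';

// Sketch: disable keep-alive so a thawed container can never pick up
// a connection the Lambda host has already torn down.
const s3 = new S3({
  requestHandler: new NodeHttpHandler({
    httpsAgent: new Agent({ keepAlive: false }),
  }),
});
```

That obviously defeats the point of keep-alive, so we'd rather have the SDK recover on its own.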
We are running into similar issues with connections to our Aurora database, which also terminate intermittently with the same error message (see brianc/node-postgres#2112). It's an error we could easily recover from by reopening the socket, but aws-sdk-v3 seems to prefer throwing instead (note the `retries: 0` in the `$metadata` above).
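For now we can work around it by wrapping calls in a retry helper like the one below (`withConnResetRetry` is our own naming, not an SDK API); retrying is safe here because `getObject` is idempotent:

```ts
// Hypothetical helper (our naming, not an SDK API): retry once on ECONNRESET.
async function withConnResetRetry<T>(fn: () => Promise<T>): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === 'ECONNRESET') {
      return fn(); // the second attempt opens a fresh socket
    }
    throw err;
  }
}

// Usage:
// await withConnResetRetry(() => s3.getObject({ Bucket, Key }));
```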
Is the issue in the browser/Node.js?
Node.js 12.x on AWS Lambda