Use the new Lambda Container runtime #119
ooh, that's really nice! I guess we should publish a base deno-lambda image on ECR (in every region??). Or alternatively have an example Dockerfile with the deno-lambda layer/bootstrap?

```dockerfile
FROM public.ecr.aws/lambda/provided:al2

# perhaps fold this into a single RUN command below?
RUN yum install -y unzip

ENV DENO_VERSION=1.6.0

RUN curl -fsSL https://deno.land/x/lambda@${DENO_VERSION}/bootstrap --output ${LAMBDA_RUNTIME_DIR}/bootstrap \
  && chmod 777 ${LAMBDA_RUNTIME_DIR}/bootstrap

RUN curl -fsSL https://github.com/denoland/deno/releases/download/v${DENO_VERSION}/deno-x86_64-unknown-linux-gnu.zip \
  --output deno.zip \
  && unzip -qq deno.zip \
  && rm deno.zip \
  && chmod 777 deno \
  && mv deno /bin/deno

# the above blocks could be packaged to Docker Hub...

# Note WORKDIR=${LAMBDA_TASK_ROOT}, so the ${LAMBDA_TASK_ROOT} prefix is superfluous.
COPY hello.ts ${LAMBDA_TASK_ROOT}/hello.ts

CMD ["hello.handler"]
```

and it works! Or perhaps it's okay to publish the top half to Docker Hub, then users can publish their actual image to ECR... It's nice to have something AWS-supported for local testing (vs https://github.com/hayd/deno-lambda#testing-locally-with-docker-lambda). I'm not eager to rush into rewriting the tests (though it would probably be good to do)... 🤷
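Building the Dockerfile above and pushing the result to a private ECR repository might look something like this (account ID, region, and repository name are all illustrative; assumes the AWS CLI v2 is configured):

```
# Build the image from the Dockerfile above
docker build -t deno-lambda-hello .

# Create a private ECR repo and authenticate docker against it
aws ecr create-repository --repository-name deno-lambda-hello
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com"

# Tag and push
docker tag deno-lambda-hello "$ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/deno-lambda-hello:latest"
docker push "$ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/deno-lambda-hello:latest"
```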
that would be my instinct yes, similar to what they provide for the default runtimes: they're simply based off the same base. I think at least publishing to Docker Hub makes sense to provide this base image; as you say, publishing to every region would require some automation to work well.
Will do. I think I need to set BOOTSTRAP_VERSION separately, so that it can use deno-docker publishing (and doesn't need to wait on deno-lambda tagging). This Dockerfile will probably live in two places (there too), but I think that's fine. I know lambci do (or did) publish to every region... it seems like a lot of effort for little gain. (This is a really nice release from AWS.)
cc @kyeotic will be interesting to see if this lowers cold/warm start time.
This is now published at https://hub.docker.com/r/hayd/deno-lambda

Usage:

```dockerfile
FROM hayd/deno-lambda:1.6.1

COPY hello.ts .
RUN deno cache hello.ts

CMD ["hello.handler"]
```

Yet to add documentation, but the instructions for aws-lambda-provided work.
To run your image locally:
In a separate terminal, you can then locally invoke the function using cURL:
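The command blocks were lost here; following the standard instructions for AWS Lambda base images (which bundle the runtime interface emulator), they would be something like this, with the image name illustrative:

```
# Run the image locally; the runtime interface emulator listens on 8080
docker run -p 9000:8080 hayd/deno-lambda-example

# In a separate terminal, invoke the function with an empty JSON event
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```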
will keep this open until documented :) Edit: I think adding the above in
closed by #120
@hayd did that end up lowering cold start?
The solution I've always gone with is warm starts: setting up an event that runs a fast endpoint for the lambda frequently...
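A minimal sketch of that keep-warm pattern, assuming a scheduled EventBridge rule pings the function (the handler name, event shape, and return values here are all illustrative, not deno-lambda's API): the scheduled event carries `source: "aws.events"`, so the handler short-circuits and the ping stays cheap.

```typescript
interface LambdaEvent {
  source?: string;
  [key: string]: unknown;
}

export async function handler(
  event: LambdaEvent,
): Promise<{ statusCode: number; body?: string }> {
  // Scheduled EventBridge pings set source to "aws.events":
  // return immediately so the keep-warm invocation does no real work.
  if (event.source === "aws.events") {
    return { statusCode: 204 };
  }
  // ...real request handling would go here...
  return { statusCode: 200, body: "hello" };
}
```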
Yeah, that's an old keep-warm strategy. It'll bump your costs on systems with lots of provisioned fns. But still, curious to know if this work lowered cold starts.
I don't think this is really the case, given that you're only paying a few ms every N minutes. But 🤷 Once you've baked a successful deno cache into the docker image, I'm not sure there are many better ways to boost cold start.
for small systems, or lambdas that get called infrequently or are not provisioned to a high degree, yes, the cost will be minimal if noticeable at all. if it's a non-critical lambda and it's not part of a larger distributed system, you may not even break out of the free tier. but for lambdas that are part of large distributed systems that need high availability and are mission critical, perhaps even provisioned to hundreds of instances, it can start to have a negative impact on cost. and that's not theoretical, I've actually seen this happen. keepwarm is great until it's not 😄
Sending a single request every couple of minutes will only keep 1 lambda instance warm. If you get more than one request at a time, then you will still get cold starts. Remember, each concurrent request gets its own container, and if one is not ready it will be cold started. You can try to keep enough containers warm to handle your peak load, but that will be expensive. Anything less and requests will hit cold containers. You might as well use Fargate at that point. Getting cold start into usable territory is key to using lambda effectively.
There are round-robin keep-warm strategies as well, but they're highly specialized. The most common strategy I've seen is self-invoke. We're in agreement that cold start is the best focus. That's why I was generally curious about the effects #120 had.
AWS recently announced support for Docker runtimes for Lambda. They have provided base images, one of which is the "provided" image for custom runtimes.
Fundamentally the runtime still works the same, it just supports a new upload format on top of the current zip one. All that would need to be done is adding a Dockerfile to the repo. The current runtime and layer could still be published and used the same as they currently are, but the Dockerfile could potentially make it easier to test, and by publishing an image to a public ECR repo it would give yet another option for deploying Deno applications to Lambda.