This repository has been archived by the owner on Sep 11, 2024. It is now read-only.

Use the new Lambda Container runtime #119

Closed
wperron opened this issue Dec 15, 2020 · 13 comments

Comments

@wperron
Contributor

wperron commented Dec 15, 2020

AWS recently announced support for Docker runtimes for Lambda. They have provided base images, one of which is the "provided" image for custom runtimes.

Fundamentally the runtime still works the same, it just supports a new upload format on top of the current zip one. All that would need to be done is adding a Dockerfile to the repo. The current runtime and layer could still be published and used the same as they currently are, but the Dockerfile could potentially make it easier to test, and by publishing an image to a public ECR repo it would give yet another option for deploying Deno applications to Lambda.

@hayd
Contributor

hayd commented Dec 15, 2020

ooh, that's really nice! I guess we should publish a base deno-lambda image on ECR (in every region??). Or alternatively have an example Dockerfile with the deno-lambda layer/bootstrap?

FROM public.ecr.aws/lambda/provided:al2

# perhaps remove this in a single run command?
RUN yum install -y unzip

ENV DENO_VERSION=1.6.0

RUN curl -fsSL https://deno.land/x/lambda@${DENO_VERSION}/bootstrap --output ${LAMBDA_RUNTIME_DIR}/bootstrap \
 && chmod 777 ${LAMBDA_RUNTIME_DIR}/bootstrap

RUN curl -fsSL https://github.com/denoland/deno/releases/download/v${DENO_VERSION}/deno-x86_64-unknown-linux-gnu.zip \
         --output deno.zip \
 && unzip -qq deno.zip \
 && rm deno.zip \
 && chmod 777 deno \
 && mv deno /bin/deno

# the above blocks could be packaged to dockerhub...

# Note WORKDIR=${LAMBDA_TASK_ROOT} so the ${LAMBDA_TASK_ROOT} is superfluous.
COPY hello.ts ${LAMBDA_TASK_ROOT}/hello.ts

CMD ["hello.handler"]

and it works!
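For completeness, a `hello.ts` that would satisfy this Dockerfile might look like the sketch below. The bootstrap resolves `CMD ["hello.handler"]` to the exported `handler` function in `hello.ts`; the event and context types here are simplified inline stand-ins (an assumption), not the real deno-lambda type definitions.

```typescript
// Hypothetical hello.ts to pair with the Dockerfile above.
// Types are simplified stand-ins, not the real deno-lambda types.

interface HelloEvent {
  payload?: string;
}

interface LambdaContext {
  functionName?: string;
}

// Exported as `handler` so CMD ["hello.handler"] can find it.
export async function handler(
  event: HelloEvent,
  _context: LambdaContext,
): Promise<{ statusCode: number; body: string }> {
  return {
    statusCode: 200,
    body: `Hello from Deno: ${event.payload ?? "(no payload)"}`,
  };
}
```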

or perhaps it's okay to publish the top half to dockerhub then users can publish their actual image to ecr...


it's nice to have something aws supported for local testing... (vs https://github.com/hayd/deno-lambda#testing-locally-with-docker-lambda )

I'm not so eager to rush into rewriting the tests (though it would probably be good to do)... 🤷

@wperron
Contributor Author

wperron commented Dec 15, 2020

or perhaps it's okay to publish the top half to dockerhub then users can publish their actual image to ecr...

that would be my instinct yes, similar to what they provide for the default runtimes: they're simply based off the provided image and add the necessary packages for the different runtimes. I would assume the typical workflow with containers in Lambda would be to pull the base image, bake in user code and publish the image to an internal ECR repo.

I think at least publishing to Docker Hub might make sense to provide this base image, as you say publishing to every region would require some automation to work well.

@hayd
Contributor

hayd commented Dec 15, 2020

Will do. I think I need to set BOOTSTRAP_VERSION separately, that way it can use deno-docker publishing (and doesn't need to wait on deno-lambda tagging). This Dockerfile will probably live in two places (there too) but I think that's fine.

I know lambci do (or did) publish to every region... it seems like a lot of effort for little gain.

(This is a really nice thing to release from aws.)

@hayd
Contributor

hayd commented Dec 15, 2020

cc @kyeotic will be interesting to see if this lowers cold/warm start time.

@hayd
Contributor

hayd commented Dec 16, 2020

This is now published at https://hub.docker.com/r/hayd/deno-lambda
(published in docker-deno)

Usage:

FROM hayd/deno-lambda:1.6.1

COPY hello.ts .
RUN deno cache hello.ts


CMD ["hello.handler"]

Yet to add documentation, but the instructions from aws-lambda-provided work.


docker build -t <image name> .

To run your image locally:

docker run -p 9000:8080 <image name>

In a separate terminal, you can then locally invoke the function using cURL:

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"payload":"hello world!"}'

will keep this open until documented :)

Edit: I think adding the above in example-docker/ seems a reasonable start.
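The curl invocation above can also be driven from a small Deno script. A sketch, assuming the container from `docker run -p 9000:8080` above is listening on localhost:9000 (the route is the Runtime Interface Emulator's fixed local-invoke path):

```typescript
// Helper mirroring the curl command above. The emulator exposes a fixed
// route for local invokes; host and port match `docker run -p 9000:8080`.
export function invocationUrl(host: string, port: number): string {
  return `http://${host}:${port}/2015-03-31/functions/function/invocations`;
}

// Sketch of a local invoke (assumes the container started above is running).
export async function invokeLocal(payload: unknown) {
  const res = await fetch(invocationUrl("localhost", 9000), {
    method: "POST",
    body: JSON.stringify(payload),
  });
  return await res.json();
}
```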

@hayd hayd pinned this issue Dec 18, 2020
@hayd
Contributor

hayd commented Dec 18, 2020

closed by #120

@hayd hayd closed this as completed Dec 18, 2020
@shellscape

@hayd did that end up lowering cold start?

@hayd
Contributor

hayd commented Aug 15, 2022

The solution I've always gone with is warm starts: setting up an event that runs a fast endpoint for the lambda frequently...
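That scheduled-ping approach could be sketched as below. The event shape and field name (`warmup`) are illustrative assumptions, not a deno-lambda convention: a scheduled rule sends a marker event, and the handler short-circuits so the instance stays warm without doing real work.

```typescript
// Hypothetical keep-warm check. A scheduled event like { "warmup": true }
// hits the handler frequently; real invocations skip the short-circuit.
interface WarmableEvent {
  warmup?: boolean;
  [key: string]: unknown;
}

export function isWarmupPing(event: WarmableEvent): boolean {
  return event.warmup === true;
}

export async function handler(event: WarmableEvent) {
  if (isWarmupPing(event)) {
    // Return immediately: the point is only to keep the instance alive.
    return { statusCode: 200, body: "warm" };
  }
  // ... real work would go here ...
  return { statusCode: 200, body: "done" };
}
```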

@shellscape

Yeah that's an old keep warm strategy. It'll bump your costs on systems with lots of provisioned fns. But still, curious to know if this work lowered cold starts.

@hayd
Contributor

hayd commented Aug 15, 2022

It'll bump your costs on systems with lots of provisioned fns.

I don't think this is really the case, given that you're only paying a few ms every N minutes. But 🤷

Once you've baked a successful deno cache into the Docker image, I'm not sure there are many better ways to boost cold start.

@shellscape

I don't think this is really the case, given that you're only paying a few ms every N minutes. But 🤷

for small systems or lambdas that get called infrequently or are not provisioned to a high degree, yes the cost will be minimal if noticeable at all. if it's a non-critical lambda and it's not part of a larger distributed system, you may not even break out of the free tier. but for lambdas that are part of large distributed systems that need high availability and are mission critical, perhaps even provisioned to hundreds of instances, it can start to have a negative impact on cost. and that's not theoretical, I've actually seen this happen. keepwarm is great until it's not 😄

@kyeotic

kyeotic commented Aug 15, 2022

Sending a single request every couple minutes will only keep 1 lambda instance warm. If you get more than one request at a time then you will still get cold starts. Remember, each concurrent request gets its own container, and if one is not ready it will be cold started. You can try to keep enough containers warm to handle your peak load, but that will be expensive. Anything less and requests will hit cold containers. You might as well use fargate at that point.
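The concurrency point can be made concrete with a toy model: warm pings cover at most the number of containers you keep warm, so any burst beyond that still cold-starts.

```typescript
// Toy model: each concurrent request needs its own container, so
// requests beyond the warm pool hit cold starts.
export function coldStarts(
  concurrentRequests: number,
  warmContainers: number,
): number {
  return Math.max(0, concurrentRequests - warmContainers);
}
```

With a single keep-warm ping (one warm container), a burst of 10 concurrent requests still cold-starts 9 containers.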

Getting cold start into usable territory is key to using lambda effectively.

@shellscape

There are round-robin keep warm strategies as well, but highly specialized. Most common strategy I've seen is self-invoke.

We're in agreement cold start is the best focus. That's why I was generally curious about the effects #120 had.
