
Comparison between AWS Web Adapter and (Python) Mangum + Fastapi #283

Closed

gabriels1234 opened this issue Sep 13, 2023 · 10 comments

@gabriels1234

Hi, I just saw the presentation and was wondering: performance-wise (mainly cold-start time), how would this compare to using the Mangum adapter + a framework (such as FastAPI)?

Thanks!

@bnusunny
Contributor

bnusunny commented Sep 14, 2023

I don't have exact numbers to share. In general, a framework adds a bit of cold-start time because it starts a full-blown web server.

FastAPI with Uvicorn usually adds about 100-200 ms of cold-start time. That is usually a worthwhile trade-off for the productivity and portability gains.

The main goal of this project is to provide an easy on-ramp for people who are new to Lambda and want to start building a serverless web app using familiar tools and frameworks. In addition, people have found that this tool helps migrate existing web apps to Lambda without major refactoring of the existing code base, such as this one.

@gabriels1234
Author

I agree, portability is the keyword.
I wouldn't be so humble as to call this just an easy on-ramp; perhaps it is a true game-changer! (I wish I'd known about this before. I went the Mangum way, which I do not regret.)

Regarding the overhead, 100-200 ms doesn't bother me at all.
My question is: what about the full startup (cold-start) time? Consider no DB connection, just a hello-world using a minimal Dockerfile (Python Alpine?), Uvicorn, and FastAPI. I'd like to know that number (which is very hard to measure from outside AWS).
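
(One rough way to get that number from inside your own account, assuming boto3 credentials and the default log group for a hypothetical function named "my-function": Lambda writes a REPORT line to CloudWatch Logs after every invocation, and on cold starts it includes an Init Duration field that covers runtime startup plus your imports. A minimal sketch:)

import re
import time

import boto3

# Pull cold-start REPORT lines from the last 12 hours and average Init Duration.
logs = boto3.client("logs")
resp = logs.filter_log_events(
    logGroupName="/aws/lambda/my-function",           # default log group naming; placeholder name
    filterPattern='"Init Duration"',                  # present only on cold starts
    startTime=int((time.time() - 12 * 3600) * 1000),  # epoch milliseconds
)
init_ms = [
    float(m.group(1))
    for event in resp["events"]
    if (m := re.search(r"Init Duration: ([\d.]+) ms", event["message"]))
]
if init_ms:
    print(f"{len(init_ms)} cold starts, average init {sum(init_ms) / len(init_ms):.1f} ms")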

@gabriels1234
Author

Moreover: would you say that, in absolute numbers, comparing a Mangum-based handler vs the AWS Lambda adapter, Mangum is going to be faster? By how many ms?

@bnusunny
Contributor

I will run a test this weekend and post the results here.

@bnusunny
Contributor

bnusunny commented Sep 17, 2023

Here are the results: the cold-start time is actually pretty close for LWA and Mangum, within 100 ms.

Test setup: two Lambda functions (256 MB memory) with HTTP API endpoints, triggered every 10 minutes to ensure we hit a cold start every time. I collected 144 data points over 12 hours (excluding the first few outliers caused by the cold cache in Lambda). The latency data comes from the HTTP API's IntegrationLatency metric, which is the end-to-end latency of the Lambda invocation.
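
(For anyone reproducing this: a sketch of pulling those percentiles with boto3; the ApiId value is a placeholder, and HTTP API metrics live in the AWS/ApiGateway namespace.)

from datetime import datetime, timedelta, timezone

import boto3

# Fetch IntegrationLatency percentiles for one HTTP API over a 12-hour window.
cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
resp = cw.get_metric_statistics(
    Namespace="AWS/ApiGateway",
    MetricName="IntegrationLatency",
    Dimensions=[{"Name": "ApiId", "Value": "abc123"}],  # placeholder ApiId
    StartTime=end - timedelta(hours=12),
    EndTime=end,
    Period=12 * 3600,  # one datapoint covering the whole window
    ExtendedStatistics=["p50", "p90", "p99", "p100"],
)
for dp in resp["Datapoints"]:
    print(dp["ExtendedStatistics"])  # values are in milliseconds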

Here is the HTTP API IntegrationLatency at different percentiles. LWA is faster at the high percentiles (p99 and p100); Mangum is faster at the lower percentiles (p90 down to p0). The difference is less than 100 ms.

[chart: HTTP API IntegrationLatency by percentile, LWA vs Mangum]

Here is the latency graph over 12 hours.

[chart: latency over 12 hours, LWA vs Mangum]

And here are two samples of X-Ray traces.

  • LWA
    [X-Ray trace: LWA]

  • Mangum
    [X-Ray trace: Mangum]

@bnusunny
Contributor

Here are the function code and Dockerfiles used in the test.

  1. LWA + FastAPI
  • the code
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root():
    print("in root method")
    return {"message": "Hello World"}
  • the Dockerfile
FROM public.ecr.aws/docker/library/python:3.11.5-slim
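# The adapter ships as an external Lambda extension: copying the binary into
# /opt/extensions is the only wiring needed; it forwards invocation events as
# plain HTTP requests to the app listening on $PORT.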
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.7.1 /lambda-adapter /opt/extensions/lambda-adapter
ENV PORT=8000
WORKDIR /var/task
COPY requirements.txt ./
RUN python -m pip install -r requirements.txt
COPY *.py ./
CMD exec uvicorn --port=$PORT main:app
  2. Mangum + FastAPI
  • the code
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()


@app.get("/")
async def root():
    print("in root method")
    return {"message": "Hello World"}

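# Mangum wraps the ASGI app in a Lambda handler; lifespan="off" disables
# ASGI lifespan (startup/shutdown) events.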
handler = Mangum(app, lifespan="off")
  • the Dockerfile
FROM public.ecr.aws/docker/library/python:3.11.5-slim
ENV PORT=8000
WORKDIR /var/task
COPY requirements.txt ./
RUN python -m pip install -r requirements.txt
COPY *.py ./
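# awslambdaric is the AWS Lambda Python Runtime Interface Client: it polls the
# Runtime API for events and invokes the handler named in CMD (main.handler).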
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD ["main.handler"]
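
(Both variants serve the same ASGI app, so, as a side note not part of the original test, it can be smoke-tested locally with FastAPI's TestClient before deploying either one, assuming httpx is installed and the app code above is saved as main.py:)

from fastapi.testclient import TestClient

from main import app  # the same FastAPI app used in both variants

client = TestClient(app)  # drives the ASGI app in-process; no server needed
resp = client.get("/")
assert resp.status_code == 200
assert resp.json() == {"message": "Hello World"}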

@gabriels1234
Author

Thanks so much!
The numbers you got are good enough to start testing it on real projects.
The only metric missing would be for warm starts: would those ~100 ms show up equally with the Lambda adapter vs Mangum?

@bnusunny
Contributor

bnusunny commented Sep 18, 2023

No. For warm starts it is very close: almost identical at p50.

  • LWA
Summary:
  Total:        10.5154 secs
  Slowest:      0.6041 secs
  Fastest:      0.0103 secs
  Average:      0.0188 secs
  Requests/sec: 950.9877
  
  Total data:   250000 bytes
  Size/request: 25 bytes

Response time histogram:
  0.010 [1]     |
  0.070 [9906]  |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.129 [92]    |
  0.188 [0]     |
  0.248 [0]     |
  0.307 [0]     |
  0.367 [0]     |
  0.426 [0]     |
  0.485 [0]     |
  0.545 [0]     |
  0.604 [1]     |


Latency distribution:
  10% in 0.0144 secs
  25% in 0.0157 secs
  50% in 0.0174 secs
  75% in 0.0194 secs
  90% in 0.0221 secs
  95% in 0.0265 secs
  99% in 0.0631 secs

Details (average, fastest, slowest):
  DNS+dialup:   0.0003 secs, 0.0103 secs, 0.6041 secs
  DNS-lookup:   0.0000 secs, 0.0000 secs, 0.0206 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0024 secs
  resp wait:    0.0181 secs, 0.0102 secs, 0.6035 secs
  resp read:    0.0000 secs, 0.0000 secs, 0.0006 secs

Status code distribution:
  [200] 10000 responses
  • Mangum
Summary:
  Total:        10.0350 secs
  Slowest:      0.1506 secs
  Fastest:      0.0105 secs
  Average:      0.0191 secs
  Requests/sec: 996.5162
  
  Total data:   250000 bytes
  Size/request: 25 bytes

Response time histogram:
  0.011 [1]     |
  0.025 [9522]  |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.039 [340]   |■
  0.053 [23]    |
  0.067 [8]     |
  0.081 [5]     |
  0.095 [0]     |
  0.109 [1]     |
  0.123 [49]    |
  0.137 [45]    |
  0.151 [6]     |


Latency distribution:
  10% in 0.0143 secs
  25% in 0.0157 secs
  50% in 0.0175 secs
  75% in 0.0197 secs
  90% in 0.0221 secs
  95% in 0.0243 secs
  99% in 0.1120 secs

Details (average, fastest, slowest):
  DNS+dialup:   0.0009 secs, 0.0105 secs, 0.1506 secs
  DNS-lookup:   0.0001 secs, 0.0000 secs, 0.0239 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0023 secs
  resp wait:    0.0181 secs, 0.0104 secs, 0.1389 secs
  resp read:    0.0000 secs, 0.0000 secs, 0.0008 secs

Status code distribution:
  [200] 10000 responses

@bnusunny
Contributor

bnusunny commented Sep 20, 2023

@gabriels1234 I'm closing this issue. Feel free to open new ones if additional questions come up.

@hamilton-earthscope

I just learned about LWA today and had this exact same question. Thank you so much @bnusunny for the thorough comparison. Super helpful!!

@awslabs locked and limited conversation to collaborators Sep 26, 2024
@bnusunny converted this issue into discussion #514 Sep 26, 2024
