Compare library performance against AWS-SDK #22
Do you agree with the following use case? As a user, I want to benchmark itty-aws against AWS SDK v2 and v3 to guarantee that it performs at least as well as the tools provided by AWS. Using mcollina/autocannon, the test will compare the performance of 1,000 calls to three simple Lambdas written with AWS SDK v2, AWS SDK v3, and itty-aws respectively.
Each Lambda will perform the same set of predefined CRUD operations on DynamoDB with its respective package.
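As a rough illustration (not final code), here is a minimal sketch of such a handler using AWS SDK v3; the table name and item shape are placeholders, and the v2 and itty-aws variants would run the same three operations through their own clients.

```ts
import {
  DynamoDBClient,
  PutItemCommand,
  GetItemCommand,
  DeleteItemCommand,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});
const TableName = process.env.TABLE_NAME!; // hypothetical table provisioned by the benchmark stack

export const handler = async () => {
  const Key = { pk: { S: "benchmark-item" } };

  // Write a basic document...
  await client.send(new PutItemCommand({ TableName, Item: { ...Key, payload: { S: "hello" } } }));
  // ...read it back...
  await client.send(new GetItemCommand({ TableName, Key }));
  // ...and delete it.
  await client.send(new DeleteItemCommand({ TableName, Key }));

  return { statusCode: 200 };
};
```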
Sleep on it, they say. As a user, I want to benchmark itty-aws across various runtimes and against AWS SDK v2 and v3 to guarantee that it performs at least as well as the tools provided by AWS. Starting from a fork of the maxday/lambda-perf project, we will set up a cloud stack that compares the performance of basic Lambdas in various runtimes. We will gather the following metrics: average cold start duration, average memory used, and average duration (excluding cold starts). Each function will be deployed 10 times and will run 10 times, giving us 10 cold starts and 90 warm executions per runtime to calculate our metrics (see the aggregation sketch at the end of this comment). The tested function will do simple API calls: write, read, and delete a basic DynamoDB document. Targeted runtimes, to start with:
I'll go with that unless you have another strategy in mind.
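For reference, a minimal sketch of how the raw executions could be aggregated into the three metrics above; the record shape is an assumption of mine, not the lambda-perf schema.

```ts
// Assumed shape of one parsed execution record (hypothetical, not the lambda-perf format).
interface Execution {
  coldStart: boolean;
  initDurationMs?: number; // only present on cold starts
  durationMs: number;      // execution duration, excluding init
  memoryUsedMb: number;
}

const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Per runtime: 10 cold starts + 90 warm executions are expected.
function summarize(executions: Execution[]) {
  const cold = executions.filter((e) => e.coldStart);
  const warm = executions.filter((e) => !e.coldStart);
  return {
    avgColdStartMs: avg(cold.map((e) => e.initDurationMs ?? 0)),
    avgDurationMs: avg(warm.map((e) => e.durationMs)), // excluding cold starts
    avgMemoryUsedMb: avg(executions.map((e) => e.memoryUsedMb)),
    samples: { cold: cold.length, warm: warm.length },
  };
}
```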
Sorry, been busy with my upcoming release.
Average is good but might be too coarse. Can we collect the raw metrics and plot them on a histogram? I'm interested in the variance as much as the average performance amortized over many calls.
Yes, I will log detailed data from each call, from which we will build useful metrics: average, histogram, variance, standard deviation, etc. As for TTFB to measure latency, that would be a very useful addition to our performance metrics. It is also especially important to measure latency from inside the AWS cloud, to remove as many network variables as possible, which validates the approach of measuring SDK calls from a Lambda function. I need to investigate how to measure the latency of SDK functions. This is non-trivial since HTTP calls are wrapped in SDK functions and cannot be monitored directly. I will start with the Node.js performance measurement APIs to see if we can gather enough data by observing the async SDK function calls.
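To be concrete, here is a minimal sketch of what I have in mind with `node:perf_hooks`; the `timed` wrapper and the stats helper are my own names, not existing itty-aws code.

```ts
import { performance, PerformanceObserver } from "node:perf_hooks";

// Collect every emitted measure as a raw sample.
const samples: number[] = [];
new PerformanceObserver((items) => {
  for (const entry of items.getEntries()) samples.push(entry.duration);
}).observe({ entryTypes: ["measure"] });

// Hypothetical wrapper: time any async SDK call without touching the SDK itself.
async function timed<T>(name: string, call: () => Promise<T>): Promise<T> {
  performance.mark(`${name}:start`);
  try {
    return await call();
  } finally {
    performance.measure(name, `${name}:start`);
  }
}

// From the raw samples we can then derive the average, variance, standard deviation,
// and histogram bins requested above.
function stats(xs: number[]) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = xs.reduce((a, x) => a + (x - mean) ** 2, 0) / xs.length;
  return { mean, variance, stdDev: Math.sqrt(variance) };
}
```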
Just to keep you posted: I forked the repo and started working on it. I have grasped the overall architecture of the benchmark and it is progressing. However, I only have a few free hours each day to code, so there is still some time before the PR.
As a first step, I suggest running a single one-shot performance benchmark against any standard service. This would give a fast answer about how itty-aws compares to AWS-SDK. Then we can decide whether performance should be systematically tested across APIs and services.
mcollina/autocannon is a maintained benchmarking tool that could be used for this purpose.
bestiejs/benchmark.js is more widely used but has been unmaintained for years now.
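For what it's worth, a one-shot run with autocannon's programmatic API could look like the sketch below; the endpoint URL is a placeholder for whatever HTTP front (e.g. a Lambda function URL) sits in front of each variant.

```ts
import autocannon from "autocannon";

async function run() {
  const result = await autocannon({
    url: "https://example.lambda-url.eu-west-1.on.aws/", // placeholder endpoint
    connections: 10, // concurrent connections
    amount: 1000,    // total number of requests (the 1,000 calls mentioned above)
  });

  // Latency is reported as a histogram: average, stddev, and percentiles.
  console.log(result.latency);
  console.log(`${result.requests.average} req/s on average`);
}

run().catch(console.error);
```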