An easy-to-use load generation and testing framework for anyone.
Olaf is a simple GUI-based tool to orchestrate load testing and load generation. It uses Streamlit as its front end and Locust as its backend.
Here's why you should use Olaf:
- Easy to set up and use; load test with the click of a button.
- Support for multiple resources out of the box.
- Automatic backup of all load test results for tracking results over time.
Olaf is completely containerized with Docker, which is also the preferred way of getting started. To install Docker on your machine, refer to the official Docker documentation.
Getting a local copy up and running takes just two simple commands:

```sh
docker build -t olaf .
docker run -p 80:8000 olaf
```
The Olaf dashboard can be accessed at the following URL with credentials `olaf` / `olaf`:

http://localhost
The project is developed and tested with Python 3.8. Make sure a compatible version is installed.
- Install Poetry from https://python-poetry.org/
- `cd` to the root folder of the project.
- Create and activate a virtual environment:

```sh
virtualenv -p python3.8 venv
source venv/bin/activate
```

- Install dependencies:

```sh
poetry install
```
After installing the dependencies, Olaf can be run using:

```sh
PYTHONPATH=./ streamlit run src/streamlit_app/app.py --browser.gatherUsageStats false
```
The Olaf dashboard can be accessed at http://localhost:8501 and the Locust dashboard at http://localhost:12311/.
Getting started with Olaf is super simple:
- Once Olaf is up and running, you will land on the Olaf dashboard.
- Choose the resource to load test from the sidebar.
- Fill out the parameters required for the load test.
- Click on START LOAD SESSION.
- A link will be generated; navigate to it.
- You will be presented with a Locust dashboard to run the load test.
- Once done, simply click on STOP LOAD SESSION from the Olaf dashboard.
You can improve your productivity with the following advanced options:
- Automate Run: This option is supported for every resource type. It allows you to run the load test for a fixed duration directly from the Olaf dashboard. Simply click on Automate Run and configure the following parameters:
  - test duration in seconds: duration for which the test is to be run. You can navigate to the Locust dashboard while the test is running.
  - autoquit timeout after completion: time after which the load session ends once the load test is complete. You will not be able to navigate to the Locust dashboard after the load session has ended.
  - users to spawn: the number of concurrent users to spawn.
  - spawn rate: the rate at which users are spawned per second.
- Olaf Schedule (Beta): Olaf schedules are used to generate load in a particular shape. This is achieved using a configuration in the form of a list, where every element is a JSON object specifying the load configuration for a particular duration. For example, to generate load of the following description:
  - For the first 300 seconds, generate load at 2 RPS per user with a total of 3 users.
  - For the next 30 seconds, generate load at 3 RPS per user with a total of 1 user.

  We can use the following configuration:

  ```json
  [
    {"duration": 300, "users": 3, "spawn_rate": 3, "rps": 2},
    {"duration": 30, "users": 1, "spawn_rate": 1, "rps": 3}
  ]
  ```

  Olaf Schedule is only supported for the SQS and SNS resource types.
- Automated Backup of Load Test Reports: Olaf supports automatically backing up load test results (Locust reports) to an S3 bucket. Enable this by configuring the following fields in `src/config/config.yaml`:

  ```yaml
  aws_config:
    key: <aws credential having access to s3 bucket>
    secret: <aws credential having access to s3 bucket>
  s3_config:
    region: <region via which to connect to aws bucket>
    bucket_name: <bucket where to store load test result>
    base_path: <path within bucket to store load test result>
  ```
Olaf provides support for load testing and load generation across multiple resources out of the box. This section describes the configuration parameters required for each resource:
REST endpoint (no request body):
- URL: Complete URL (including HTTP/HTTPS) to load test. Any URL parameters can be included as well.
- Header JSON: Headers for the request, if any.
- Load Session Name: Name of the current session.
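For illustration, a minimal Header JSON might look like the following (the header names and token value are placeholders):

```json
{
  "Content-Type": "application/json",
  "Authorization": "Bearer <YOUR_TOKEN>"
}
```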
REST endpoint (with JSON request body):
- URL: Complete URL (including HTTP/HTTPS) to load test. Any URL parameters can be included as well.
- Header JSON: Headers for the request, if any.
- List of Query JSON: JSON bodies of the requests. Each request needs to be one entry in the list. Alternatively, this can be uploaded as a text file.
- Load Session Name: Name of the current session.
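As a sketch, a List of Query JSON with two request bodies (the field names here are purely hypothetical) could look like:

```json
[
  {"user_id": 1, "action": "search", "term": "shoes"},
  {"user_id": 2, "action": "search", "term": "socks"}
]
```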
Elasticsearch:
- ES URL: The endpoint of the Elasticsearch instance.
- ES Username: The username of the Elasticsearch instance.
- ES Password: The password of the Elasticsearch instance.
- ES Index Name: The index under load test.
- List of Query JSON: Any ES search query. Each read query needs to be one entry in the list. Alternatively, this can be uploaded as a text file. Internally, this uses `es.search` of `elasticsearch-py`.
- Load Session Name: Name of the current session.
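For example, a List of Query JSON with two `es.search` request bodies (the index fields queried here are hypothetical) might be:

```json
[
  {"query": {"match": {"title": "olaf"}}},
  {"query": {"range": {"price": {"gte": 10, "lte": 100}}}}
]
```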
Lambda:
- Lambda ARN: ARN of the Lambda function to load test.
- AWS region: Region of the AWS resource under test.
- AWS access key / AWS secret key: AWS credentials having access to the resource under test.
- List of Query JSON: JSON requests to invoke the Lambda with. Each request needs to be one entry in the list. Alternatively, this can be uploaded as a text file.
- Load Session Name: Name of the current session.
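A brief sketch of a List of Query JSON for a Lambda, assuming the function accepts an event with these hypothetical fields:

```json
[
  {"order_id": "A-1001", "dry_run": true},
  {"order_id": "A-1002", "dry_run": true}
]
```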
MongoDB:
- Mongo URL: Mongo URL in SRV format.
- Database: Database to load test.
- Collection: Collection within the database to load test.
- List of Query JSON: Any MongoDB search query. Each search query needs to be one entry in the list. Alternatively, this can be uploaded as a text file.
- Load Session Name: Name of the current session.
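For instance, a List of Query JSON with two find-style filters (the field names are hypothetical) could be:

```json
[
  {"status": "active"},
  {"age": {"$gte": 21}}
]
```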
SageMaker:
- Sagemaker endpoint: Endpoint of the SageMaker model to test.
- Predictor type: PyTorch, Sklearn, and TensorFlow are the supported predictor types.
- Input/Output Serializer: The serializer the model is expected to work with.
- AWS region: Region of the AWS resource under test.
- AWS access key / AWS secret key: AWS credentials having access to the resource under test.
- List of Query JSON: Inputs to the SageMaker model endpoint. Each input needs to be one entry in the list. Alternatively, this can be uploaded as a text file. If the endpoint under test is a multi-model endpoint, you are expected to pass the input as a list of dictionaries of the format {"payload": YOUR_INPUT_PAYLOAD, "target_model": MODEL_TAR_FILE_NAME}.
- Batch Mode: When enabled, a batch of size `b` is created by randomly sampling `b` inputs from the List of Query JSON. Each batch is then sent as a single request.
- Load Session Name: Name of the current session.
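To make the multi-model format concrete, a List of Query JSON might look like the following (the payloads and model archive names are placeholders):

```json
[
  {"payload": [[0.1, 0.2, 0.3]], "target_model": "model-a.tar.gz"},
  {"payload": [[0.4, 0.5, 0.6]], "target_model": "model-b.tar.gz"}
]
```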
SQS:
- SQS Name: Name (not ARN) of the SQS queue to generate load in.
- AWS region: Region of the AWS resource under test.
- AWS access key / AWS secret key: AWS credentials having access to the resource under test.
- List of Query JSON: Messages to be sent to SQS. Each message needs to be one entry in the list. Alternatively, this can be uploaded as a text file.
- Message Attribute JSON: Message attributes to be sent with every message. We currently do not support message attributes per message.
- Load Session Name: Name of the current session.
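As an illustrative sketch, a List of Query JSON of two messages (the message fields are hypothetical):

```json
[
  {"event": "signup", "user_id": 101},
  {"event": "signup", "user_id": 102}
]
```

with a Message Attribute JSON such as the following, assuming the standard SQS message-attribute structure is passed through:

```json
{
  "source": {"DataType": "String", "StringValue": "olaf-load-test"}
}
```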
SNS:
- SNS ARN: ARN of the SNS topic to generate load in.
- AWS region: Region of the AWS resource under test.
- AWS access key / AWS secret key: AWS credentials having access to the resource under test.
- List of Query JSON: Messages to be sent to SNS. Each message needs to be one entry in the list. Alternatively, this can be uploaded as a text file.
- Message Attribute JSON: Message attributes to be sent with every message. We currently do not support message attributes per message.
- Load Session Name: Name of the current session.
VectorDB:
- API Key: API key of the VectorDB.
- Environment Name: Environment name of the AWS region.
- Index Name: Index name of the VectorDB.
- List of Query JSON: List of vector queries.
- Load Session Name: Name of the current session.
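A minimal sketch of a List of Query JSON for a vector index, assuming a query shape of a vector plus a top-k count (both field names are hypothetical):

```json
[
  {"vector": [0.12, 0.98, 0.45], "top_k": 5},
  {"vector": [0.33, 0.21, 0.76], "top_k": 5}
]
```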
Cocktail is multi-resource load testing/generation at once. You can generate load of the following forms:
- Generate load to multiple REST endpoints at once.
- Generate load to multiple SQS queues at once.
- Generate load to SQS and REST resources at once.

In a nutshell, you can mix and match any of the supported resources while load testing. We currently support up to 5 different types of resources at once.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the Apache 2.0 License. See LICENSE.txt for more information.
Licenses of external dependencies (as mentioned in ./pyproject.toml) can be found in ./external_licenses.txt.
© 2022