# Overview

This directory contains scripts that facilitate building and running tests. We use Python scripts as the entry point for our tests because that lets us run tests with the same command line regardless of the platform you are on.
# Unit tests (run_tests.py)

Builds gRPC in a given language and runs unit tests. Use `tools/run_tests/run_tests.py --help` for more help.
```
tools/run_tests/run_tests.py -l csharp -c dbg
```

- `--use_docker`: Builds a Docker container containing all the prerequisites for the given language and runs the tests inside that container.
- `--build_only`: Only build, do not run the tests.
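As an illustration, the options above compose into a command line along these lines (a minimal sketch; `build_test_command` is a hypothetical helper, not part of this repository):

```python
def build_test_command(language, config="dbg", use_docker=False, build_only=False):
    """Assemble a run_tests.py command line from the options described above."""
    cmd = ["python", "tools/run_tests/run_tests.py", "-l", language, "-c", config]
    if use_docker:
        cmd.append("--use_docker")
    if build_only:
        cmd.append("--build_only")
    return cmd

# From the repository root, this could then be passed to e.g.
# subprocess.check_call(...).
print(build_test_command("csharp", use_docker=True))
```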
# Interop tests (run_interop_tests.py)

Runs tests for cross-platform/cross-language interoperability. For more details, see the Interop tests descriptions. The script can also run interop tests for grpc-java and grpc-go, using sources checked out alongside those of the grpc repository.
```
tools/run_tests/run_interop_tests.py -l csharp -s c++ --use_docker
```

(run interop tests with C# client and C++ server)
Note: if you see an error like `no space left on device` when running the interop tests using Docker, make sure that Docker is building the image files in a location with sufficient disk space.
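Since `-l` selects the client language and `-s` the server language, sweeping a small client/server matrix amounts to generating one command per pair. A sketch (the `interop_commands` helper is hypothetical, not part of the repository):

```python
import itertools

def interop_commands(languages):
    """Yield one run_interop_tests.py invocation per client/server pair."""
    for client, server in itertools.product(languages, repeat=2):
        yield ["python", "tools/run_tests/run_interop_tests.py",
               "-l", client, "-s", server, "--use_docker"]

for cmd in interop_commands(["csharp", "c++"]):
    print(" ".join(cmd))
```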
# Performance benchmarks (run_performance_tests.py)

Runs predefined benchmark scenarios for the given languages. Besides the simple case of running all the scenarios locally, the script also supports orchestrating test runs with the client and server running on different machines and uploading the results to BigQuery.
```
tools/run_tests/run_performance_tests.py -l c++ node
```

- `--regex`: Use a regular expression to select particular scenarios to run.
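Selecting scenarios by regex works along these lines (an illustrative sketch with made-up scenario names; the real scenario list lives in the benchmark configuration):

```python
import re

# Made-up scenario names, for illustration only.
scenarios = [
    "cpp_protobuf_async_streaming_qps_unconstrained",
    "cpp_protobuf_sync_unary_ping_pong",
    "node_json_unary_qps_unconstrained",
]

def select_scenarios(pattern):
    """Keep only the scenarios whose name matches the given regex."""
    return [s for s in scenarios if re.search(pattern, s)]

print(select_scenarios(r"unary"))
# -> ['cpp_protobuf_sync_unary_ping_pong', 'node_json_unary_qps_unconstrained']
```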
# Stress tests (run_on_gke.py)

Runs modified interop test clients and servers under heavy load for an extended period of time to discover potential stability issues. The tests internally use Kubernetes to run the client and server on GKE and upload statistics to BigQuery.
```
tools/run_tests/stress_test/run_on_gke.py --gcp_project_id=<google-cloud-platform-project-id> --config_file=<path-to-config-file>
```

The directory `tools/run_tests/stress_test/configs/` contains the config files for several scenarios.
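A driver could fill in the two placeholder flags like this (a sketch; the project id and config file name below are hypothetical, not taken from the repository):

```python
def stress_test_command(gcp_project_id, config_file):
    """Fill in the two required flags of run_on_gke.py."""
    return ["python", "tools/run_tests/stress_test/run_on_gke.py",
            "--gcp_project_id=%s" % gcp_project_id,
            "--config_file=%s" % config_file]

# Hypothetical values, for illustration only.
print(" ".join(stress_test_command(
    "my-gcp-project",
    "tools/run_tests/stress_test/configs/example.json")))
```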
# Artifacts & packages (task_runner.py)

A generalized framework for running predefined tasks based on their labels. We use this for building binary artifacts & distribution packages and for testing them.
```
tools/run_tests/task_runner.py -f python artifact linux x64
```

(build tasks with labels `python`, `artifact`, `linux`, and `x64`)
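Label-based selection of this kind amounts to a set-intersection filter, roughly like the following (a sketch with invented task names and labels; not the actual task_runner.py implementation):

```python
# Invented tasks for illustration; each task carries a set of labels.
tasks = {
    "python_linux_x64_artifact": {"python", "artifact", "linux", "x64"},
    "python_macos_x64_artifact": {"python", "artifact", "macos", "x64"},
    "csharp_linux_x64_package": {"csharp", "package", "linux", "x64"},
}

def select_tasks(required_labels):
    """Keep only the tasks that carry every requested label."""
    return sorted(name for name, labels in tasks.items()
                  if required_labels <= labels)

print(select_tasks({"python", "artifact", "linux", "x64"}))
# -> ['python_linux_x64_artifact']
```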