improve readme and evaluation script
Benedikt Haas committed Feb 6, 2024
1 parent 8c95b44 commit ec1a6a6
Showing 3 changed files with 37 additions and 16 deletions.
39 changes: 25 additions & 14 deletions automated-testing/README.md
Original file line number Diff line number Diff line change
@@ -14,45 +14,56 @@ The subsequent demonstration showcases *automated testing* and specifically addr

> [!IMPORTANT]
> Make sure that all [system requirements](../utils/requirements.md) are fulfilled.
> Additionally, this demo requires a [self-hosted GitHub Runner](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners) to execute scenarios within a CI workflow. The specific requirements for such a runner are listed [below](#self-hosted-github-runner).
> Additionally, the CI related part of this demo requires a [self-hosted GitHub Runner](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners) to execute scenarios within a [GitHub workflow](https://docs.github.com/en/actions/using-workflows/about-workflows). The specific requirements for such a runner are listed [below](#self-hosted-github-runner).
This demo aims to automatically evaluate predefined test scenarios. For this purpose, a test catalog can be defined using OpenSCENARIO files as contained in the [scenarios](../utils/scenarios) folder. These scenarios can be simulated and evaluated using the [carla-scenario-runner](https://github.com/ika-rwth-aachen/carla-scenario-runner). Thus, a basic [docker-compose template](./template.yml) only includes the `carla-simulator` and a `carla-scenario-runner` Docker service. So, in general, the demo enables the efficient execution of multiple scenario-based tests with CARLA, both in local environments and within an automated GitHub CI process.
This demo aims to automatically evaluate predefined test scenarios. For this purpose, a test catalog can be defined using OpenSCENARIO files as contained in the [scenarios](../utils/scenarios) folder. These scenarios can be simulated and evaluated using the [carla-scenario-runner](https://github.com/ika-rwth-aachen/carla-scenario-runner). Thus, a basic [docker-compose template](../.github/actions/evaluate-scenario/files/template.yml) only includes the `carla-simulator` and a `carla-scenario-runner` Docker service. So, in general, the demo enables the efficient execution of multiple scenario-based tests with CARLA, both in local environments and within an automated GitHub CI process.

### Manual Testing Pipeline

In your local environment, you can evaluate multiple scenarios directly, using the provided top-level `run-demo.sh` script:
In your local environment, you can evaluate multiple scenarios directly, using the provided [run-demo.sh](../run-demo.sh) script:

```bash
# carlos$
./run-demo.sh automated-testing
```
or
This executes the [evaluate-scenarios.sh](./evaluate-scenarios.sh) script with default settings. You can also run this script directly and provide custom overrides to the default values by using specific environment variables, flags and arguments. For a detailed overview, please check the script or run:
```bash
# carlos/automated-testing$
./evaluate-scenarios.sh
./evaluate-scenarios.sh -h
```
The script sequentially evaluates all scenario files in the selected folder. After each scenario run, a detailed evaluation based on the criteria specified in the scenario is presented. An example is shown below.
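The sequential evaluation can be pictured as a simple loop over all scenario files in a folder. The sketch below is a conceptual simplification, not the repository's actual logic (which lives in `evaluate-scenarios.sh`); `run_scenario` is a hypothetical stand-in for the real scenario-runner invocation:

```bash
#!/usr/bin/env bash
# Conceptual sketch only: iterate over every scenario file in a folder and
# "evaluate" each one. run_scenario is a placeholder for the actual
# carla-scenario-runner call; here it just prints a result line.
run_scenario() {
  echo "Evaluating $1 ... PASSED"
}

evaluate_all() {
  local folder="$1"
  for scenario in "$folder"/*.xosc*; do
    [ -e "$scenario" ] || continue   # skip if the glob matched nothing
    run_scenario "$(basename "$scenario")"
  done
}

# demo run against a throwaway catalog; prints one result line per file
dir="$(mktemp -d)"
touch "$dir/follow-vehicle.xosc"
evaluate_all "$dir"
```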

### Automatic CI Pipeline
<p align="center"><img src="../utils/images/automated-testing-cli.png" width=800></p>

All scenarios within the test catalog are also simulated and evaluated in an automatic [CI pipeline on GitHub](https://github.com/ika-rwth-aachen/carlos/actions/workflows/automated-testing.yml). A detailed look at the [scenarios folder](../utils/scenarios/) shows that a few of them have the postfix `.opt`, marking them as optional. This means a failure in test evaluation is allowed for those specific scenarios. The CI pipeline processes required scenarios first, and then considers all optional scenarios. In both cases a job matrix is generated before consecutive jobs are created to simulate the specific scenario. As an example, a workflow is shown below.
#### Self-Hosted GitHub Runner

As mentioned before, a [self-hosted GitHub Runner](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners) needs to be set up in order to run the following pipeline. Apart from ensuring the [system requirements](../utils/requirements.md), the runner currently also needs to be started in a **local session** (i.e. not via SSH, RDP or other remote tools) and has to have access to the primary display (see [X window system](https://en.wikipedia.org/wiki/X_Window_System)). You can validate this by running the following command in the same session where you want to start the runner:
```bash
echo $DISPLAY
```
The result should be something simple like `:1`. If there is anything in front of the colon, the session is most likely not local and thus not suitable for this setup.
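This check can also be scripted. The helper below is a hypothetical convenience (not part of the repository) that classifies a `DISPLAY` value according to the rule above:

```bash
#!/usr/bin/env bash
# Hypothetical helper, not part of the repository: classify a DISPLAY value.
# A purely local display looks like ":1"; anything in front of the colon
# (e.g. "localhost:10.0" from SSH X forwarding) indicates a remote session.
check_display() {
  local d="${1-$DISPLAY}"
  if [ -z "$d" ]; then
    echo "no display set - start the runner from a local graphical session"
    return 1
  fi
  case "$d" in
    :*) echo "local display '$d' - suitable for the runner" ;;
    *)  echo "remote display '$d' - likely unsuitable"; return 1 ;;
  esac
}

check_display ":1"
```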

### Automated CI Pipeline

All scenarios within the test catalog are also simulated and evaluated in an automated [CI pipeline on GitHub](https://github.com/ika-rwth-aachen/carlos/actions/workflows/automated-testing.yml). A detailed look at the [scenarios folder](../utils/scenarios/) shows that some of them have the postfix `.opt`, marking them as optional. This means a failure in test evaluation is allowed for those specific scenarios and does not determine the success of the entire pipeline. The CI pipeline processes required scenarios first, followed by all optional scenarios. In both cases a job matrix is dynamically created based on the found scenarios, in which each job targets and evaluates a specific scenario. As an example, a workflow is shown below.

<p align="center"><img src="../utils/images/automated-testing-workflow.png" width=800></p>

>[!NOTE]
> Even though the complete pipeline appears to have succeeded, the annotations show that one of the optional scenarios has failed and thus should still be investigated.
#### Actions

We provide two [GitHub actions](../.github/actions/) for CARLOS:
- [generate-job-matrix](../.github/actions/generate-job-matrix/)
- [evaluate-scenario](../.github/actions/evaluate-scenario/)

They can be used within a GitHub CI workflow to aggregate a job list of simulation runs, and consecutively run all simulations.
They can be used within a GitHub CI workflow to create a job list of simulation runs, and consecutively run all simulations. A demonstration of this is presented next.

#### Workflow

The workflow combines the different actions and performs simulation evaluation analogous to the local `evaluate-scenarios.sh` script:
- [automated-testing.yml](../.github/workflows/automated-testing.yml)

#### Self-Hosted GitHub Runner
- TODO
The workflow presented in [automated-testing.yml](../.github/workflows/automated-testing.yml) combines the different actions and performs simulation evaluation analogous to the local `evaluate-scenarios.sh` script. It leverages the modularity and customizability of the provided actions by reusing them with different configurations. For example, the `generate-job-matrix` action allows customizing the `query-string`, which is used for matching and collecting fitting scenarios into a job matrix for subsequent pipeline steps.
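Conceptually, such a matrix generation step boils down to globbing for scenario files and emitting them as JSON for downstream jobs. The sketch below is a hypothetical simplification of what `generate-job-matrix` does (the actual implementation lives in `.github/actions/generate-job-matrix/`); the glob-style `query-string` semantics and file naming are assumptions:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of job-matrix generation, not the actual action code:
# collect scenario files matching a query string and emit them as a JSON
# matrix that downstream jobs could consume.
generate_matrix() {
  local folder="$1" query="${2:-*.xosc}"
  local entries=()
  for f in "$folder"/$query; do   # query intentionally unquoted so the glob expands
    [ -e "$f" ] || continue
    entries+=("\"$(basename "$f")\"")
  done
  printf '{"scenario": [%s]}\n' "$(IFS=,; echo "${entries[*]}")"
}

# demo: one required and one optional scenario in a throwaway folder
dir="$(mktemp -d)"
touch "$dir/town01.xosc" "$dir/town02.xosc.opt"
generate_matrix "$dir" '*.xosc'       # required scenarios only
generate_matrix "$dir" '*.xosc.opt'   # optional scenarios only
```

Splitting required and optional scenarios into two invocations mirrors how the pipeline evaluates required scenarios first and optional ones afterwards.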

### Outlook - Scalability using Orchestration Tools
- TODO

The principles and workflows demonstrated here already show the effectiveness of automating the simulation processes. Certainly, a much higher degree of automation can be achieved by incorporating more sophisticated orchestration tools like [Kubernetes](https://kubernetes.io/docs/concepts/overview/), [Docker Swarm](https://docs.docker.com/engine/swarm/) or others. These tools allow for better scalability, while also simplifying the deployment and monitoring of the services.
14 changes: 12 additions & 2 deletions automated-testing/evaluate-scenarios.sh
@@ -3,7 +3,7 @@
set -e

usage() {
echo "Usage: $0 [-o][-p][-r] [COMPOSE_TEMPLATE_PATH] [SCENARIO_FOLDER_PATH]"
echo "Usage: $0 [-o][-p][-n] [COMPOSE_TEMPLATE_PATH] [SCENARIO_FOLDER_PATH]"
echo "COMPOSE_TEMPLATE_PATH : Location of Compose file which can be customized through environment variables"
echo "SCENARIO_FOLDER_PATH : Location of folder containing scenario files ending with .xosc*"
echo "o : Set the simulator to offscreen mode"
@@ -13,6 +13,7 @@ usage() {
echo "Environment variables for customization:"
echo "SIMULATOR_IMAGE : CARLA image that should be used"
echo "SCENARIO_RUNNER_IMAGE : CARLA Scenario Runner image that should be used"
echo "TIME_BETWEEN_EVALS : Delay between each scenario run in seconds"
echo "-----"
echo "Example:"
echo "SIMULATOR_IMAGE=rwthika/carla:dev $0 -r ./template.yml ./scenarios"
@@ -32,7 +33,7 @@ update-simulator() {

COMPOSE_TEMPLATE_PATH="../.github/actions/evaluate-scenario/files/template.yml"

while getopts "hpn" flag; do
while getopts "hopn" flag; do
case "$flag" in
h)
usage
@@ -52,23 +53,30 @@ done

shift $(($OPTIND-1)) # return to usual handling of positional args

# default settings if no external overrides provided
export SIMULATOR_IMAGE=${SIMULATOR_IMAGE:-"rwthika/carla-simulator:server"}
export SCENARIO_RUNNER_IMAGE=${SCENARIO_RUNNER_IMAGE:-"rwthika/carla-scenario-runner:latest"}

export COMPOSE_TEMPLATE_PATH=$(realpath ${1:-$COMPOSE_TEMPLATE_PATH})
export SCENARIO_FOLDER_PATH=$(realpath ${2:-"../utils/scenarios"})

export RESTART_SIMULATOR=${RESTART_SIMULATOR:-true}
export TIME_BETWEEN_EVALS=${TIME_BETWEEN_EVALS:-5}

export SIMULATOR_FLAGS=""
export SCENARIO_FILE_NAME=""

trap cleanup EXIT
trap cleanup 0

cleanup() {
echo "Cleaning up..."
RESTART_SIMULATOR=false
docker compose -f $COMPOSE_TEMPLATE_PATH kill
docker compose -f $COMPOSE_TEMPLATE_PATH down
xhost -local:
echo "Done cleaning up."
exit
}

echo "Searching for scenarios in $SCENARIO_FOLDER_PATH ..."
@@ -96,4 +104,6 @@ for scenario in "${scenarios[@]}"; do
if [ "$RESTART_SIMULATOR" = true ]; then
restart-simulator
fi

sleep $TIME_BETWEEN_EVALS
done
Binary file added utils/images/automated-testing-cli.png
