This directory contains tests that validate correctness and coverage of backends.
These tests are intended to ensure that backends are robust and provide a smooth, "out-of-the-box" experience for users across the full span of input patterns. They are not intended to be a replacement for backend-specific tests, as they do not attempt to validate performance or to verify that backends delegate the operators they are expected to.

## Running Tests and Interpreting Output
Tests can be run from the command line using pytest. When generating a JSON test report, the runner will report detailed test statistics, including output accuracy, delegated nodes, lowering timing, and more.

Each backend and test flow (recipe) registers a pytest [marker](https://docs.pytest.org/en/stable/example/markers.html) that can be passed to pytest via the `-m <marker>` option to filter execution.

To run all XNNPACK backend operator tests:
```
pytest -c /dev/null backends/test/suite/operators/ -m backend_xnnpack -n auto
```

To run all model tests for the CoreML static int8 lowering flow:
```
pytest -c /dev/null backends/test/suite/models/ -m flow_coreml_static_int8 -n auto
```
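
Markers can also be combined using standard pytest marker expressions. For example, to run tests for either of the two flows above (an illustrative combination, assuming the requirements for both backends are met on the host):
```
pytest -c /dev/null backends/test/suite/ -m "backend_xnnpack or flow_coreml_static_int8" -n auto
```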

To run a specific test:
```
pytest -c /dev/null backends/test/suite/ -k "test_prelu_f32_custom_init[xnnpack]"
```

To generate a JSON report:
```
pytest -c /dev/null backends/test/suite/operators/ -n auto --json-report --json-report-file="test_report.json"
```

See [pytest-json-report](https://pypi.org/project/pytest-json-report/) for information on the report format. The test logic in this repository attaches additional metadata to each test entry under the `metadata`/`subtests` keys. One subtest entry is created for each call to `test_runner.lower_and_run_model`.

Here is an excerpt from a test run, showing a successful run of the `test_add_f32_bcast_first[xnnpack]` test.
```json
"tests": [
  {
    "nodeid": "operators/test_add.py::test_add_f32_bcast_first[xnnpack]",
    "lineno": 38,
    "outcome": "passed",
    "keywords": [
      "test_add_f32_bcast_first[xnnpack]",
      "flow_xnnpack",
      "backend_xnnpack",
      ...
    ],
    "metadata": {
      "subtests": [
        {
          "Test ID": "test_add_f32_bcast_first[xnnpack]",
          "Test Case": "test_add_f32_bcast_first",
          "Subtest": 0,
          "Flow": "xnnpack",
          "Result": "Pass",
          "Result Detail": "",
          "Error": "",
          "Delegated": "True",
          "Quantize Time (s)": null,
          "Lower Time (s)": "2.881",
          "Output 0 Error Max": "0.000",
          "Output 0 Error MAE": "0.000",
          "Output 0 SNR": "inf",
          "Delegated Nodes": 1,
          "Undelegated Nodes": 0,
          "Delegated Ops": {
            "aten::add.Tensor": 1
          },
          "PTE Size (Kb)": "1.600"
        }
      ]
    }
  }
]
```
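
The report can be post-processed with a short script. The following is a minimal sketch, assuming the report was written to `test_report.json` as in the command above and relying only on the fields shown in the excerpt:
```python
import json

# Load the pytest-json-report output produced by the command above.
with open("test_report.json") as f:
    report = json.load(f)

# Each test entry may carry suite-specific metadata under "metadata"/"subtests";
# one subtest entry exists per lower_and_run_model call.
for test in report.get("tests", []):
    for subtest in test.get("metadata", {}).get("subtests", []):
        print(subtest["Test ID"], subtest["Result"], "delegated nodes:", subtest.get("Delegated Nodes"))
```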

## Backend Registration

To plug into the test framework, each backend should provide an implementation of the Tester class, defined in backends/test/harness/tester.py. Backends can provide implementations of each stage, or use the default implementation, as appropriate.

At a minimum, the backend will likely need to provide a custom implementation of the Partition and ToEdgeTransformAndLower stages using the appropriate backend partitioner. See backends/xnnpack/test/tester/tester.py for an example implementation.

Once a tester is available, the backend flow(s) can be added under flows/ and registered in flow.py. It is intended that this will be unified with the lowering recipes under executorch/export in the near future.
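
Conceptually, a flow pairs a name (used in test case naming) with a function that instantiates a tester for a given model and input tuple. The snippet below is only a hypothetical illustration of that shape, not the actual flow.py API; see flow.py and the flows/ directory for the real definitions.
```python
# Hypothetical illustration only; not the actual flow.py API.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Flow:
    name: str  # Used in test case naming, e.g. "xnnpack".
    create_tester: Callable[[Any, tuple], Any]  # Instantiates a tester for (model, inputs).


# A registry mapping flow names to flow definitions.
FLOWS: dict[str, Flow] = {}


def register_flow(flow: Flow) -> None:
    FLOWS[flow.name] = flow
```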

## Test Cases

Operator test cases are defined under the operators/ directory, and model tests under models/. Tests are written in a backend-independent manner, and each test is programmatically expanded into a variant for each registered backend flow via the `test_runner` fixture parameter. Tests can additionally be parameterized using standard pytest decorators; parameterizing over dtype is a common use case.
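
As a rough sketch, a dtype-parameterized operator test might look like the following. This is illustrative only: the model and test names are made up, and it assumes the `test_runner` fixture exposes `lower_and_run_model(model, inputs)` taking a module and an input tuple, as described above.
```python
import pytest
import torch


class AddModel(torch.nn.Module):
    def forward(self, x, y):
        return x + y


# Hypothetical example; assumes test_runner.lower_and_run_model(model, inputs)
# accepts an nn.Module and a tuple of example inputs.
@pytest.mark.parametrize("dtype", [torch.float16, torch.float32])
def test_add_dtype(test_runner, dtype) -> None:
    inputs = (
        torch.randn(4, 8).to(dtype),
        torch.randn(4, 8).to(dtype),
    )
    test_runner.lower_and_run_model(AddModel(), inputs)
```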

## Evolution of this Test Suite
