Improve reliability and assurances provided by end-to-end tests #1691
Labels: enhancement
Our end-to-end integration tests create jobs, run them, and block on the presence of some number of files in blob storage containers. They also assert the absence of error telemetry in certain time windows. They do not validate any file-level properties, and the error telemetry assertions are susceptible to false-positive failures caused by transient and/or handled errors. Additionally, the current end-to-end tests depend on a complex custom script that is hard to change and debug, and which does not report test results in an aggregate form.
Let's revisit our end-to-end testing strategy. Ideally, we'd be able to use some kind of standard test runner, increase our resilience to transient errors, continue to log warnings about suspicious telemetry, and make meaningful assertions about file contents.
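As a rough illustration only, here is a minimal sketch of what one such test could look like under a standard runner. It assumes Python with pytest and the azure-storage-blob SDK; the container name, blob prefix, expected count, and environment variable are hypothetical placeholders, not values from this repository.

```python
# Minimal sketch, assuming pytest + azure-storage-blob.
# All names below (env var, container, prefix, counts) are hypothetical.
import os
import time

import pytest
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = os.environ["E2E_STORAGE_CONNECTION_STRING"]  # assumed env var
CONTAINER_NAME = "e2e-output"      # hypothetical container
EXPECTED_PREFIX = "results/"       # hypothetical blob prefix
EXPECTED_COUNT = 3                 # hypothetical expected file count
POLL_TIMEOUT_SECONDS = 600
POLL_INTERVAL_SECONDS = 15


def _list_output_blobs():
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    container = service.get_container_client(CONTAINER_NAME)
    return list(container.list_blobs(name_starts_with=EXPECTED_PREFIX))


def _wait_for_blobs(count, timeout, interval):
    # Poll rather than fail immediately, so transient delays in the job
    # or in blob storage do not produce false-positive failures.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        blobs = _list_output_blobs()
        if len(blobs) >= count:
            return blobs
        time.sleep(interval)
    pytest.fail(f"Expected {count} blobs under {EXPECTED_PREFIX!r} within {timeout}s")


def test_job_produces_expected_output_files():
    blobs = _wait_for_blobs(EXPECTED_COUNT, POLL_TIMEOUT_SECONDS, POLL_INTERVAL_SECONDS)
    # File-level assertions go beyond "a file exists": here we at least
    # check that each output blob is non-empty.
    for blob in blobs:
        assert blob.size > 0, f"{blob.name} is empty"
```

Running this under pytest (or any comparable runner) would also give us aggregate result reporting for free, which the current custom script does not provide.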
Related issues:
AB#35939