[Tests-Only] Add tests/acceptance/expected-failures.txt #394
Description
This is a demonstration of how we can use the EXPECTED_FAILURES_FILE feature of the core acceptance test runner. That feature is waiting in core PR owncloud/core#37717.
The core branch adjust-skip-tags-expected-failures has many of the skipOnOcis tags removed from scenarios that we expect to pass on OCIS one day, but that do not pass yet. (Actually, for now I changed the tag to wasSkipOnOcis just so that I can find them easily.) See owncloud/core#37725. So a lot more API acceptance test scenarios run in the CI here. That increases the elapsed run-time of the tests in CI, but it means that a developer gets to know if a different set of scenarios fails.

tests/acceptance/expected-failures.txt has a list of the scenarios that are currently expected to fail. The test runner checks the actual failures against the expected failures and reports any differences. In this example, the 200 scenarios that fail are exactly the 200 scenarios listed in EXPECTED_FAILURES_FILE.
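The actual checking logic lives in the core test runner (owncloud/core#37717), but conceptually it boils down to a set difference between the expected and actual failures. A minimal sketch, assuming the file lists one scenario reference per line and leaving aside how the runner collects the actual failures (the function names here are invented for illustration):

```python
# Sketch only: this is NOT the real core test runner (see owncloud/core#37717);
# the file format and function names are assumptions.
from pathlib import Path


def load_expected_failures(path):
    """Return the scenario entries listed in the expected-failures file.

    Assumes one scenario reference per line; blank lines and '#' comments
    are ignored.
    """
    entries = set()
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            entries.add(line)
    return entries


def compare_failures(expected_file, actual_failures):
    """Diff the actually failing scenarios against the expected list."""
    expected = load_expected_failures(expected_file)
    actual = set(actual_failures)

    unexpected_failures = sorted(actual - expected)  # new breakage
    unexpected_passes = sorted(expected - actual)    # stale entries to delete

    for scenario in unexpected_failures:
        print(f"failed but not listed as an expected failure: {scenario}")
    for scenario in unexpected_passes:
        print(f"listed as an expected failure but passed: {scenario}")

    # The run only stays green when the two sets match exactly.
    return not unexpected_failures and not unexpected_passes
```

With a check like this, the run stays green only while the actual failures match the list exactly, which is why the 200 failing scenarios in this example must be exactly the 200 entries in the file.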
If a developer improves the OCIS code, and more core API acceptance tests are passing, then the developer just deletes those scenario entries from tests/acceptance/expected-failures.txt (a hypothetical excerpt is sketched below). There is no need to find the scenarios in core and unskip them there.

To demonstrate bugs or different behavior in OCIS, the scenarios that demonstrate the "bad behavior" can be put here in the ocis-reva repo. See the 3rd commit. An extra pipeline step has been added to run these local acceptance test scenarios.
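For illustration, a file like tests/acceptance/expected-failures.txt might contain entries along the lines of the excerpt below. The exact format is whatever the core test runner expects; referencing scenarios by feature file path and line number is only an assumption for this sketch, and the entries themselves are made up. When one of these scenarios starts to pass on OCIS, the developer simply deletes its line:

```
# hypothetical entries, for illustration only
apiFeatureX/scenarioOne.feature:12
apiFeatureX/scenarioTwo.feature:34
apiFeatureY/anotherScenario.feature:56
```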
When a developer fixes a problem that has a local acceptance test demonstrating the old behavior, they would delete that test scenario and delete the now-passing core acceptance test from the EXPECTED_FAILURES_FILE, since OCIS will then conform more closely to the core API acceptance tests.

The concept/principle is that the core API acceptance tests are the "defined standard" for the API. They can be improved to provide better coverage, or corrected if they are wrong, but they should not need to be changed often. As OCIS develops, its deviation from this "defined standard" can be managed in the OCIS repo via EXPECTED_FAILURES_FILE and supplementary local acceptance test scenarios.

Related Issues
#282 Split old public API webdav tests from new public webdav tests: we probably will not need to do anything in core; we can just list the known fails in EXPECTED_FAILURES_FILE.

#329 Find untagged tests that need adjusting for OCIS (and the comment at #329 (comment)): we can control all the skip-tagging that currently causes the real day-to-day pain by using EXPECTED_FAILURES_FILE and local acceptance tests in OCIS repos that demonstrate different behavior.