diff --git a/acceptance/README.md b/acceptance/README.md
index f13ef9872a..faad812729 100644
--- a/acceptance/README.md
+++ b/acceptance/README.md
@@ -4,16 +4,16 @@ Currently these tests are run against "fake" HTTP server pretending to be Databr
 To author a test,
 - add a new directory
-- add databricks.yml there
-- add script with commands to run, e.g. "$CLI bundle validate"
+- add `databricks.yml` there
+- add `script` with commands to run, e.g. `$CLI bundle validate`
 
 The test runner will run script and capture output and compare it with output.txt file.
 
-In order to write output.txt for the first time or overwrite it with the current output, set TESTS_OUTPUT=OVERWRITE env var.
+In order to write output.txt for the first time or overwrite it with the current output, set the `TESTS_OUTPUT=OVERWRITE` env var.
 
-The scripts are run with "bash -e" so any errors will be propagated. They are captured in output.txt by appending "Exit code: N" line at the end.
+The scripts are run with `bash -e`, so any error will immediately stop the script. The exit code will be captured in `output.txt` by appending an `Exit code: N` line at the end.
 
 For more complex tests one can also use:
-- errcode helper: prefix your command if it fails with non-zero code to append "Exit code: N" to its output instead of failing the test.
-- trace helper: prefix your command to output the arguments before output
-- custom output files: redirect output to custom file (it must start with out), e.g. "$CLI bundle validate > out.txt 2> out.error.txt"
+- `errcode` helper: prefix your command with `errcode` to append "Exit code: N" to its output instead of failing the test if the command returns a non-zero exit code.
+- `trace` helper: prefix your command with `trace` to print the command and its arguments before its output.
+- custom output files: redirect output to a custom file (its name must start with `out`), e.g. `$CLI bundle validate > out.txt 2> out.error.txt`. These files will also be checked by the test runner.
 
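
The helpers described in the new README text can be combined inside a test's `script` file. A hypothetical sketch (the directory name and commands are illustrative; `$CLI`, `errcode`, and `trace` are provided by the test runner, not defined here):

```shell
# acceptance/my_test/script -- hypothetical example.
# Runs under "bash -e", so an unguarded failing command stops the test immediately.

# Print the command and its arguments before its output:
trace $CLI bundle validate

# Record a non-zero exit as an "Exit code: N" line instead of failing the test:
errcode $CLI bundle deploy

# Redirect into custom files; names starting with "out" are also compared
# against their checked-in counterparts by the test runner:
$CLI bundle validate > out.validate.txt 2> out.validate.error.txt
```

To record the expected output for a new test, run it once with `TESTS_OUTPUT=OVERWRITE` set, then review and commit the generated `output.txt` (and any `out*` files).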