Documentation:
- Standard Homepage/Documentation
- Reference: Functions, objects, variables, configuration options
- Examples and customization tricks
- Restructured Documentation (sometimes better; kind of a secret!)
pytest (docs, wiki) uses Python's assert statement instead of
assertion functions, and uses introspection to display the values that
failed the assertion:
        def test():
            expected = { 1: 'one', 2: 'two', }
            actual = { 1: 'ichi', 3: 'san', }
    >       assert expected == actual, 'optional message'
    E       AssertionError: optional message
    E       assert {1: 'one', 2: 'two'} == {1: 'ichi', 3: 'san'}
    E         Differing items:
    E         {1: 'one'} != {1: 'ichi'}
    E         Left contains more items:
    E         {2: 'two'}
    E         Right contains more items:
    E         {3: 'san'}
    E         Use -v to get the full diff

    test.py:2: AssertionError
See the reporting demo for more examples of how failures are displayed,
and assertions for details of how to define
pytest_assertrepr_compare(config, op, left, right) for custom
comparisons and failure displays.
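For example, a conftest.py hook returning custom explanation lines (a
minimal sketch; the Fruit class is purely illustrative):

    # conftest.py
    class Fruit:
        def __init__(self, name):
            self.name = name
        def __eq__(self, other):
            return self.name == other.name

    def pytest_assertrepr_compare(config, op, left, right):
        # Return a list of lines to display instead of the default
        # explanation, or None to use pytest's default.
        if isinstance(left, Fruit) and isinstance(right, Fruit) and op == '==':
            return [
                'Comparing Fruit instances:',
                f'   names: {left.name!r} != {right.name!r}',
            ]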
Under the hood, pytest modifies the AST of test modules as they are loaded, using PEP 302 import hooks. (Can this cause issues with test frameworks that do their own loading?)
See Test Discovery for details of how to name test functions and files.
Pytest has at least two different kinds of scoping.

Run scoping (my term) is one of session, module, class or function,
and refers to particular start and end points during a test session.
- A run scope can be passed (as scope=...) to @pytest.fixture to determine when a fixture is to be torn down; see the sketch below.
- Various hooks are called at their defined points in the run scope.
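For instance, a module-scoped fixture is set up on first use and torn
down after the last test in the module (a minimal sketch using sqlite3;
names are illustrative):

    import sqlite3
    import pytest

    @pytest.fixture(scope='module')
    def db_connection():
        conn = sqlite3.connect(':memory:')   # set up once per module
        yield conn                           # tests run here
        conn.close()                         # torn down after the module's last test

    def test_create_table(db_connection):
        db_connection.execute('CREATE TABLE t (x INTEGER)')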
Package or directory scoping is used by conftest.py files; fixtures
and hooks in a conftest.py are used only by modules in that directory
and below.
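For example, a sketch (the fixture name is illustrative):

    # tests/conftest.py — visible to all tests in tests/ and its subdirectories.
    import pytest

    @pytest.fixture
    def sample_user():
        return {'name': 'alice', 'admin': False}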
Test Outcomes
Most of the following would normally be done with decorators; see below.
- fail(msg='', pytrace=True): Fail the test with msg; no stack traceback if pytrace is False.
- skip(msg=''): Skip (the rest of) the current test.
- importorskip(modname, minversion=None): Import and return the module, skipping the test if it can't be loaded or its version attribute is below minversion.
- xfail(reason='', strict=False, raises=None, run=True): Allows test failure. Enforces failure if strict. Limit allowed exceptions with raises. Doesn't run the test (e.g., to avoid a segfault) if run is false.
- exit(): Exit the test process.
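A sketch of imperative use inside a test (numpy is just an example
third-party dependency):

    import sys
    import pytest

    def test_array_sum():
        if sys.platform == 'win32':
            pytest.skip('not supported on Windows')   # skips the rest of the test
        np = pytest.importorskip('numpy', minversion='1.20')  # skip if missing or too old
        assert np.arange(3).sum() == 3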
pytest --markers will list available markers. Markers are set with a
@pytest.mark.MARKERNAME decorator on the test function. Unknown markers
are normally ignored; use --strict on the command line to raise an
error instead.
Condition strings for condition parameters below are deprecated; use an expression evaluating to a boolean instead.
These may also be used programmatically within the test function; see 'Test Outcomes' above. See skip for more details.
- skip(*, reason=None): Skip the test.
- skipif(condition, *, reason=None): Skip if condition.
- xfail(condition=None, *, reason=None, raises=None, run=True, strict=False): Expect failure, noting the test as 'xfailed' or 'xpassed'.
  - If condition is not None and is false, run the test as normal.
  - If raises is passed, the test still fails with any exception other than the expected one.
  - If not run, don't run the test at all (in case it crashes the interpreter).
  - If strict, fail the test if it passes instead of noting 'xpass'.
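A sketch of typical decorator use of these markers:

    import os
    import sys
    import pytest

    @pytest.mark.skipif(sys.platform == 'win32', reason='POSIX-only behaviour')
    def test_posix_sep():
        assert os.sep == '/'

    @pytest.mark.xfail(raises=ZeroDivisionError, strict=True, reason='known bug')
    def test_division_bug():
        1 / 0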
You can parametrize test functions; they will be run multiple times (each
as an individual test) with the given parameters. (Parametrization also
occurs with fixtures that have a params argument, once for each value
in the sequence. See fixtures for details.)
parametrize(argnames, argvalues, indirect=False, ids=None, scope=None)
- argnames: comma-separated string of argument names to be bound on each function invocation
- argvalues: iterable of values if there is one argname; iterable of tuples (or lists) of values if there are several
To xfail or otherwise mark a single parametrized value, build a
pytest.param directly with the mark set:

    [(1, 2), pytest.param(3, 4, marks=pytest.mark.xfail(reason='blah')), ...]
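Putting it together, a minimal sketch (test and parameter names are
illustrative):

    import pytest

    @pytest.mark.parametrize('a, expected', [
        (1, 2),
        (2, 3),
        pytest.param(3, 5, marks=pytest.mark.xfail(reason='off by one')),
    ])
    def test_increment(a, expected):
        assert a + 1 == expected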
Parametrized Test Names
Parametrization generates a new test for each set of parameters; the names
(test IDs) will be generated automatically based on the parameter values.
(These names can be shown with the --collect-only option, and are usually
used for selection with the -k option.)

The standard names are test_foo[…-…], with parameter values in brackets
and separated by hyphens. You can use parametrize(ids=['a', 'b', …]) to
replace the content in brackets; the sequence must be the same length as
the sequence specifying the parameters. You can also pass ids=f where f
is a function that will receive each parameter in turn and may return a
string to represent it, or None to let pytest use its default. (The
function cannot tell which parameter it is being passed; checking with
type() may be helpful.)
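A sketch of such an ids function (idfn is an illustrative name):

    import pytest

    def idfn(val):
        # Called once per parameter value; return a string to use in the
        # test ID, or None to fall back to pytest's default representation.
        if isinstance(val, float):
            return f'{val:.2f}'
        return None

    @pytest.mark.parametrize('x, label', [(0.1234, 'a'), (0.25, 'b')], ids=idfn)
    def test_labels(x, label):
        assert isinstance(label, str)

This collects as test_labels[0.12-a] and test_labels[0.25-b].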
For full details, see Working with custom markers. You can add arbitrary
markers by simply making up a marker name, e.g. @pytest.mark.foo. It's
also possible to automatically add markers based on test names.

Adding markers can be done at many levels:
    # Function level, in the usual way
    @pytest.mark.foo
    def test_one(): ...

    # Individual parametrized runs
    @pytest.mark.parametrize('a, b',
        [(1, 2), pytest.param(3, 4, marks=pytest.mark.foo), (5, 6)])
    def test_two(a, b): ...

    # Class level
    @pytest.mark.foo
    class TestStuff:
        def test_three(self): ...
        def test_four(self): ...

    # Module level, by defining a global var
    pytestmark = pytest.mark.foo
If you use the --strict option, markers need to be registered. This is
done by setting markers in pytest.ini or similar to a sequence of lines,
one for each registered marker. The marker name is followed by a colon
and its description:

    markers =
        foo: A description of the foo marker
        bar: A description of the bar marker
Pytest will also run unittest and doctest test cases. Collection is
slightly different from that for standard pytest:
- Files: the same, matching the python_files glob
- Classes: collected if they inherit from unittest.TestCase (python_classes is ignored)
- Methods: collected if in a selected class and matching test* (python_functions is ignored)
@pytest.mark decorators can be used on unittest test methods. This
includes things like the following. (See the documentation linked above
for considerably more detail.)

    @pytest.mark.skip(reason='foo')
    @pytest.mark.usefixtures("my_fixture")
Supported features:
- @unittest.skip-style decorators
- setUp()/tearDown()
- setUpClass()/tearDownClass()

Unsupported features:
- load_tests protocol
- setUpModule(), tearDownModule()
- subtests/subTest() context manager
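A sketch of a unittest-style class run under pytest, using a
class-scoped fixture to inject state onto the class (names are
illustrative):

    import unittest
    import pytest

    @pytest.fixture(scope='class')
    def db(request):
        # Attach state to the class so unittest-style methods can reach it.
        request.cls.db = {'connected': True}

    @pytest.mark.usefixtures('db')
    class TestLegacy(unittest.TestCase):
        def setUp(self):
            self.items = [1, 2, 3]

        def test_sum(self):
            self.assertEqual(sum(self.items), 6)

        def test_db(self):
            self.assertTrue(self.db['connected'])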
Doctest options:
- --doctest-modules: Run doctests in all .py modules.
- --doctest-report: Format of doctest failure output: {none,cdiff,ndiff,udiff,only_first_failure}
- --doctest-glob=PAT: Run doctests in files matching PAT. Default: test*.txt.
- --doctest-ignore-import-errors: Ignore doctest ImportErrors.
- --doctest-continue-on-failure: Continue after the first doctest failure in a file.
Doctest config options:
- doctest_optionflags: Option flags for doctests.
- doctest_encoding: Encoding used for doctest files.
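A sketch of a module carrying a doctest, collected when run as
pytest --doctest-modules (module and function names are illustrative);
flags such as ELLIPSIS set via doctest_optionflags in pytest.ini apply
to these examples:

    # mymod.py
    def add(a, b):
        """Return the sum of a and b.

        >>> add(2, 3)
        5
        """
        return a + b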