To gain experience with Pytest, I am going to write code that works like a calculator.

I will test it using pytest.

Then I’ll automate and streamline the test and delivery process.

You can find more articles in this series under the Pytest tag.


Group test functions

pytest.mark lets testers mark test functions with custom metadata.

One scenario where this is desirable is when splitting tests into different suites.

Via CLI flags you can tell pytest to run only a specific subset of tests.


@pytest.mark.nameofsubset

The following command runs only the tests decorated with the nameofsubset marker.


pytest -v -m nameofsubset

You can also run everything except a specific subset; for example, to deselect the tests marked webtest:


pytest -v -m "not webtest"
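
Putting it together, here is a minimal sketch of a marked test file; the test names and assertions are hypothetical placeholders, not part of pytest itself.


import pytest

@pytest.mark.nameofsubset
def test_addition():
    # Selected by "pytest -v -m nameofsubset"
    assert 1 + 2 == 3

def test_subtraction():
    # Unmarked, so it runs without the -m filter,
    # or with pytest -v -m "not nameofsubset"
    assert 5 - 3 == 2

Note that pytest warns about unknown marks, so custom markers like nameofsubset should be registered under the markers option in pytest.ini.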

Skip tests

The simplest way to skip a test function is to mark it with the skip decorator, which may be passed an optional reason:


@pytest.mark.skip(reason="no way of currently testing this")
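
For example, a minimal sketch (the test name is a hypothetical placeholder):


import pytest

@pytest.mark.skip(reason="no way of currently testing this")
def test_division_by_zero():
    # The body never executes; pytest reports the test as SKIPPED
    # together with the reason.
    assert False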

Conditional skip

If you wish to skip something conditionally, you can use skipif instead. Here is an example of marking a test function to be skipped when run on an interpreter earlier than Python 3.7:


@pytest.mark.skipif(sys.version_info < (3, 7), reason="requires python3.7 or higher")
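
A self-contained sketch, assuming a test that relies on the dataclasses module (added to the standard library in Python 3.7); the test name is a hypothetical placeholder:


import sys
import pytest

@pytest.mark.skipif(sys.version_info < (3, 7), reason="requires python3.7 or higher")
def test_dataclass_support():
    from dataclasses import dataclass  # stdlib only since Python 3.7

    @dataclass
    class Point:
        x: int
        y: int

    assert Point(1, 2).x == 1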

XFail: expected to fail

You can use the xfail marker to indicate that you expect a test to fail.

This test will run, but no traceback will be reported when it fails. Instead, terminal reporting will list it in the “expected to fail” (XFAIL) or “unexpectedly passing” (XPASS) sections.


@pytest.mark.xfail
def test_function():
    # The failure below is reported as XFAIL instead of a failure
    assert False

Alternatively, you can mark a test as XFAIL imperatively, from within the test or its setup function:


import pytest

def test_function2():
    import slow_module

    if slow_module.slow_function():
        # Marks the test as XFAIL and stops executing it immediately
        pytest.xfail("slow_module taking too long")

Condition parameter

If a test is only expected to fail under a certain condition, you can pass that condition as the first parameter:


@pytest.mark.xfail(sys.platform == "win32", reason="bug in a 3rd party library")
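
As a minimal sketch (the test name and body are hypothetical placeholders):


import sys
import pytest

@pytest.mark.xfail(sys.platform == "win32", reason="bug in a 3rd party library")
def test_path_separator():
    # On Windows a failure here is reported as XFAIL; on other
    # platforms the test counts as a normal pass or failure.
    assert "a/b".split("/") == ["a", "b"]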

Reason parameter

You can document the cause of an expected failure with the reason parameter:


@pytest.mark.xfail(reason="known parser issue")

Raises parameter

If you want to be more specific as to why the test is expected to fail, you can specify a single exception, or a tuple of exceptions, with the raises parameter:


@pytest.mark.xfail(raises=RuntimeError)
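
A minimal sketch (the test name and error message are hypothetical): the test is reported as XFAIL only if it raises RuntimeError; any other exception is still reported as a regular failure, so real regressions are not masked.


import pytest

@pytest.mark.xfail(raises=RuntimeError)
def test_backend_call():
    # Raising the expected exception type yields XFAIL; raising, say,
    # ValueError would still fail the test.
    raise RuntimeError("backend unavailable")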