Test Runners (A Simple Introduction)

A typical test automation framework consists of a collection of tests; a code library that handles test action abstractions, along with any supporting utilities; models for interacting with the applications under test; and a test runner.

The core jobs of the test runner are to:

  1. collect the tests to be run;
  2. run the collected tests;
  3. report on the results of test execution.

Let's look at these responsibilities, focusing on test runners that follow the xunit pattern. But first, what's a test?

Note: I will use Python and the test runner pytest in my examples, and I will run pytest from the command line (as opposed to running it within a code editor).

What is a Test?

In the context of test automation, a test is a function or method (depending on the programming language being used) that has an xunit-specified name and structure.

Typically, the name of the method or function will contain the string test; for example, in Python:

def test_carrot_is_a_fruit():
    fruits = ['apple', 'pear', 'tomato']
    assert 'carrot' in fruits

test_carrot_is_a_fruit is a test function, and the test runner (in this example, pytest) knows this because pytest is configured by default to identify every function or method whose name is prefixed with test as a test.
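
Tests can also live as methods on a class. Here is a minimal sketch, assuming pytest's default naming conventions (class names prefixed with Test, method names prefixed with test); the names are illustrative:

class TestFruits:
    def test_apple_is_a_fruit(self):
        fruits = ['apple', 'pear', 'tomato']
        assert 'apple' in fruits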

Test Collection

The test runner takes a "bucket" of tests, and runs them. The test runner puts tests into this bucket through the processes of test discovery and test collection.

The test runner is configured with rules defining where to look for tests, but the default rules can be overridden when the test runner is invoked[1]. By default, for example, the test runner might be configured to look in a tests directory and in any child directory of tests, such as tests/integration. At the command line, when invoking the test runner, you could restrict discovery to a particular file:

$ pytest tests/integration/test_api.py

This would discover only the tests defined in the file test_api.py.
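
You can narrow discovery even further, down to a single test, using pytest's :: node-id syntax; the test name here is just an illustration:

$ pytest tests/integration/test_api.py::test_get_user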

So test discovery adds the found tests to your bucket. Then test collection allows you to remove some tests from that bucket based on further test collection parameters. For example:

$ pytest tests/integration -k api -m smoke

Pytest treats -k <string> as a keyword expression matched against test file names, test class names, and test method names. Pytest also supports test markers, applied as decorators, which can be selected with -m <string>.

Here, we are discovering every test in the directory tests/integration, and removing every test that does NOT match both the keyword "api" AND the marker "smoke".
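
For instance, a test that would survive this collection step might look like the following sketch; the marker, names, and values are illustrative rather than taken from a real suite:

import pytest

@pytest.mark.smoke
def test_api_returns_ok():
    response_status = 200  # stand-in for a call to the application under test
    assert response_status == 200

Its name contains "api", satisfying -k api, and it carries the smoke marker, satisfying -m smoke.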

Different xunit flavors on different language platforms are implemented in different ways, and may handle the discovery and collection steps differently. Pytest handles collection subtractively.

So test discovery and test collection yield a bucket of tests that will be run by the test runner.
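
If you want to inspect the contents of that bucket without running anything, pytest's --collect-only flag lists the discovered and collected tests and then exits:

$ pytest tests/integration -k api -m smoke --collect-only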

Running Tests

The test runner runs the collected tests in your tests "bucket" by calling each test method in order; the specific order depends on the particular implementation of the test runner.

When the test runner is invoked, tests are discovered and collected, with the possible up-front outcomes of:

  1. The test runner chokes on a code syntax error (in your test code or in your supporting framework code) and dies. This probably happens during the initial test discovery and collection, before any tests are run, so it's not actually a test result.
  2. No tests are collected (an example follows this list).
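
The second outcome is easy to provoke with a filter that matches nothing; pytest then reports that no tests ran and exits with a non-zero status (pytest reserves exit code 5 for the case where no tests were collected). The keyword here is deliberately nonsensical:

$ pytest tests/integration -k no_such_keyword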

Assuming that tests are successfully collected, then when each test method is called, that code executes, with three possible results:

  1. The test fails because an assert in the test method fails. Assertions -- evaluations of true-ness -- are central to the design of test methods. In pytest, a failing assert raises an AssertionError, and the test runner reports a test status of FAILED.
  2. The test fails because code called in the test method raises some other kind of exception (a runtime error rather than a syntax error). The test runner reports a test status of FAILED, but the test did not necessarily fail because one of its assertions (its test points) failed.
  3. The test passes because the test method did not raise an exception, whether from an assertion or from any other failure. The test runner reports a test status of PASSED. This means that a test can pass even if it has no assertions at all, despite assertions being central to the design of test cases; this is a prominent source of false positives during testing. (A sketch of all three outcomes follows this list.)
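
The following sketch shows one test for each of these three outcomes; the names and values are hypothetical:

def test_fails_on_assertion():
    fruits = ['apple', 'pear', 'tomato']
    assert 'carrot' in fruits          # AssertionError: reported as FAILED

def test_fails_on_error():
    count = int('three')               # ValueError before any assert: also FAILED
    assert count == 3

def test_passes_without_asserting():
    fruits = ['apple', 'pear', 'tomato']
    fruits.append('plum')              # no assert at all: reported as PASSED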

Reporting on Results

As each collected test completes its run, the test runner records the result; typically that incremental result is also displayed at the command line. When all of the collected tests have completed, the test runner typically generates a test results report and writes that file to an expected or specified location.

Most test runners support customizing both the data included in test result reports and the format of those reports.
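
For example, pytest can write a JUnit-style XML report, a format most CI systems can ingest; the output path here is arbitrary:

$ pytest tests/integration --junitxml=reports/results.xml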

Notes

  1. Unlike some test runners, which require test suite files to which test cases are manually added, pytest relies on test discovery and collection. Pytest configuration defines what a test class is (e.g., the class name starts with Test), what a test method is (e.g., the method name starts with test_), where the tests live in your framework, and what markers are supported; based on this guidance, pytest inspects your framework code to find and collect the test cases defined by your invocation. With pytest, you have tremendous flexibility in discovering and collecting precise sets of test cases to run; you aren't tied to static test suites.
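
As a sketch, that configuration might live in a pytest.ini like the one below; the naming patterns mirror pytest's defaults, while testpaths and the smoke marker are illustrative choices:

[pytest]
testpaths = tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*
markers =
    smoke: quick checks run against every build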