Pytest — part 2

Lalitha · Published in Analytics Vidhya · Dec 9, 2020 · 7 min read

This blog is a continuation of my pytest beginner guide blog.


Topics to be covered here are,

  • Grouping Tests
  • Creating pytest.ini file
  • Custom Markers
  • Built-in Markers
  • Why pytest stands out
  • Parametrizing markers
  • Parallel Testing
  • Stop after ’N’ failures
  • Reporting

Grouping Tests -

In pytest we can group tests using markers on test functions. Here comes the beauty of pytest: we can use in-built markers, or we can create our own markers if needed. The most widely used markers are parametrize, xfail, and skip. The syntax to apply a custom marker is,

@pytest.mark.<markername>

Here, the @ symbol denotes a decorator, and in order to use markers we first have to import the pytest module with the command import pytest.


There are two types of markers in pytest. They are,

  • Built-in markers
  • Custom markers

Marks can only be applied to tests, having no effect on fixtures.

To display all the available markers (both built-in and custom), the command used is,

pytest --markers

Example code,

import pytest

@pytest.mark.concat
def test_strConcat(input_value):
    assert "Hello " + input_value == "Hello python"

Here, concat is a custom marker assigned to a function that performs a string concatenation operation.
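Note that test_strConcat also receives an input_value fixture (fixtures were covered in the beginner guide). Its definition is not shown in this post; a minimal sketch that makes the assertion pass could be,

import pytest

@pytest.fixture
def input_value():
    # assumed fixture from the previous post, returning the string under test
    return "python"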

To execute particular tests with a specific marker, use the following command.

pytest -m <markername> -v

Example,

pytest -m concat -v

This command fetches all the tests that are marked with concat and executes them.
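The -m option also accepts boolean expressions, so marker groups can be combined or excluded in a single run, for example:

pytest -m "concat or add" -v
pytest -m "not concat" -v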

If we have custom markers, they need to be registered. To register them, we have to create a file named pytest.ini.

Creating pytest.ini file

INI stands for initialization. As the name suggests, the file holds configuration information. Its content is plain text, so it is easy to read, which helps a new programmer get into the concept. It follows a fixed structure, under which we have to add our custom marker names.

Here, Test is my package and pytest.ini is the file created inside it.

Sample structure of pytest.ini file can be as follows,

[pytest]
markers =
    add: to add numbers
    sub: to subtract numbers
    concat: to concatenate strings

Here, add, sub, and concat are custom markers. The description after each marker is given for the user's reference, and it is optional.

Pytest provides an excellent feature called strict markers, through which we can block a test run if an unregistered marker is used.

This can be done by adding addopts = --strict-markers.

[pytest]
addopts = --strict-markers
markers =
    add: to add numbers
    sub: to subtract numbers
    concat: to concatenate strings
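For instance, once --strict-markers is active, a test that uses a marker missing from the list above (say a hypothetical multiply) aborts at collection time instead of merely raising a warning:

import pytest

@pytest.mark.multiply  # hypothetical marker, not registered in pytest.ini
def test_intMultiply():
    # with --strict-markers, this test never runs; collection fails first
    assert 2 * 3 == 6

Running this fails with an error along the lines of 'multiply' not found in markers configuration option.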

Built-in Markers

Pytest provides some built-in markers; among them, the most commonly used are skip, xfail, and parametrize.

skip: always skip a test function.

To run only the tests carrying this marker,

pytest -m skip filename.py
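The marker itself is applied as a decorator, and it accepts an optional reason that shows up in the test report. A small sketch (the reason text is just an illustration):

import pytest

@pytest.mark.skip(reason="feature not implemented yet")
def test_notReady():
    assert True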

xfail: produce an "expected failure" outcome if a certain condition is met. To run only the tests carrying this marker,

pytest -m xfail filename.py
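As a decorator, xfail can take that condition plus a reason, so the test is only expected to fail where the condition holds. For example (the platform check here is just an illustration):

import sys

import pytest

@pytest.mark.xfail(sys.platform == "win32", reason="known issue on Windows")
def test_platformSpecific():
    assert True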

parametrize: run a test with multiple sets of inputs/data (covered in detail in its own section below).

Why pytest stands out -

Now let's consider a situation where a test is not relevant for some time. Here pytest provides excellent markers: we have the option to either expect a test to fail (xfail) or skip it altogether.

Here, pytest actually executes the xfailed test, but it is treated as neither a passed nor a failed test; moreover, details of the test are not printed even if the test fails.

We can xfail tests using the following marker:

@pytest.mark.xfail

Example,

import pytest

@pytest.mark.skip
def test_upper(input_value):
    assert input_value.upper() == "PYTHON"

@pytest.mark.xfail
def test_lower(input_value):
    assert input_value.lower() == "python"

Execute the tests using the following command:

pytest test_customMark.py -v

Upon execution, the above command will generate the following result:

========================= test session starts =========================
platform win32 -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- c:\users\pycharmprojects\testing\venv\scripts\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.9.0', 'Platform': 'Windows-10-10.0.18362-SP0', 'Packages': {'pytest': '6.1.2', 'py': '1.9.0', 'pluggy': '0.13.1'}, 'Plugins': {'forked': '1.3.0', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '2.1.0'}}
rootdir: C:\Users\PycharmProjects\Testing\Test, configfile: pytest.ini
plugins: forked-1.3.0, html-2.1.1, metadata-1.10.0, xdist-2.1.0
collected 2 items / 2 deselected
======================= 2 deselected in 0.05s ========================

If we analyze the execution, the two test functions are collected but shown as deselected instead of being reported as passed or failed.

Parametrizing markers

Parametrizing of a test is done to run it with multiple sets of inputs/data. Pytest provides an in-built marker to perform parametrization.

This can be done using the parametrize decorator.

@pytest.mark.parametrize

Creating a parametrized test,

@pytest.mark.parametrize("variable1, variable2",
[(dataset1),(dataset2)]
)
def function_name(variable1, variable2):
** --- assertion here --- **

Here, variable1 and variable2 are two argument names.

dataset1 and dataset2 are two argument values.

Now let's see how this works, using the same string concatenation function as an example,

import pytest

@pytest.mark.parametrize("str1,str2,result",
                         [("Open ", "source", "Open source"),
                          ("Hello ", "World", "Hello World")])
def test_strConcat(str1, str2, result):
    assert str1 + str2 == result

Output -

collected 2 items                                                                                                                                                                                                                         
test_Parameterized.py .. [100%]
==================================== 2 passed in 0.04s =====================================
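parametrize decorators can also be stacked; pytest then runs the test once for every combination of argument values. A small sketch:

import pytest

@pytest.mark.parametrize("x", [1, 2])
@pytest.mark.parametrize("y", [10, 20])
def test_combinations(x, y):
    # runs 4 times: (1, 10), (2, 10), (1, 20), (2, 20)
    assert x * y > 0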

Parallel testing

Parallel testing means we execute tests by splitting them across a number of processes, called workers. We can assign multiple workers and execute the tests concurrently.


Pytest runs all the test files sequentially, in order. While executing a big bunch of tests, this naturally increases execution time. In this scenario, parallel testing helps to run tests in parallel.

For this, we have to install a plugin named,

pytest-xdist

Execute the following command in the command line:

pip install pytest-xdist

The number of workers can be specified as,

pytest -n <num>

Example,

pytest -n 2

Output,

(venv) C:\Users\user1\PycharmProjects\Testing\Test>pytest -n 2
===================== test session starts =============================
platform win32 -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: C:\Users\user1\PycharmProjects\Testing\Test, configfile: pytest.ini
plugins: forked-1.3.0, html-2.1.1, metadata-1.10.0, xdist-2.1.0
gw0 [0] / gw1 [0]
=================== no tests ran in 0.75s ============================

If we analyze the executed test run, the tests are internally grouped and distributed to the workers. As we mention 2, two workers are started, shown as gw0 [0] / gw1 [0] (the number in brackets is how many tests each worker collected, zero here).

gw stands for gateway; each worker is an execnet gateway process spawned by pytest-xdist.

Example 2,

pytest -n 4

Output,

(venv) C:\Users\PycharmProjects\Testing\Test>pytest -n 4
======================= test session starts ===========================
platform win32 -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: C:\Users\PycharmProjects\Testing\Test, configfile: pytest.ini
plugins: forked-1.3.0, html-2.1.1, metadata-1.10.0, xdist-2.1.0
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
======================= no tests ran in 0.96s =========================

Here the tests are distributed across 4 workers, shown as gw0 [0], gw1 [0], gw2 [0], gw3 [0].
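Instead of a fixed number, pytest-xdist can also pick the worker count automatically from the available CPU cores:

pytest -n auto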

If we want to see which tests are executed by which worker, we can add the -v verbose option as

pytest -n 4 -v

Example,

(venv) C:\Users\user1\PycharmProjects\Testing\Test>pytest -n 4 -v
======================== test session starts =========================
platform win32 -- Python 3.9.0, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- c:\users\pycharmprojects\testing\venv\scripts\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.9.0', 'Platform': 'Windows-10-10.0.18362-SP0', 'Packages': {'pytest': '6.1.2', 'py': '1.9.0', 'pluggy': '0.13.1'}, 'Plugins': {'forked': '1.3.0', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '2.1.0'}}
rootdir: C:\Users\PycharmProjects\Testing\Test, configfile: pytest.ini
plugins: forked-1.3.0, html-2.1.1, metadata-1.10.0, xdist-2.1.0
[gw0] win32 Python 3.9.0 cwd: C:\Users\user1\PycharmProjects\Testing\Test
[gw1] win32 Python 3.9.0 cwd: C:\Users\user1\PycharmProjects\Testing\Test
[gw2] win32 Python 3.9.0 cwd: C:\Users\user1\PycharmProjects\Testing\Test
[gw3] win32 Python 3.9.0 cwd: C:\Users\user1\PycharmProjects\Testing\Test
[gw0] Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]
[gw1] Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]
[gw2] Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]
[gw3] Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
scheduling tests via LoadScheduling

This gives a detailed view of which tests are executed by every worker.
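The scheduling tests via LoadScheduling line above shows the default distribution mode, in which each idle worker pulls the next pending test. Other modes are available through the --dist option; for example, to keep all tests from the same file on one worker:

pytest -n 4 --dist loadfile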

Stop after N failures

This functionality allows us to stop the test run after 'n' failures have occurred.

In a real scenario, once a new version of the code is ready to deploy, it is first deployed into a pre-prod/staging environment. Then a test suite runs on it.

The code is qualified for deployment to production only if the test suite passes. If there is a test failure, whether one or many, the code is not production ready.

Therefore, we may want to stop the execution of the test suite as soon as n tests fail. This can be done in pytest using maxfail.

The syntax to stop the execution of the test suite as soon as n tests fail is as follows:

pytest --maxfail=<num>

To stop testing after the first failure,

pytest -x

or, equivalently,

pytest --exitfirst

As an example, create a file test_failure.py in which all the tests fail on execution. Here, we are going to stop the run after two failures:

pytest --maxfail=2
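The file's contents are not shown in the original post; a minimal sketch where every test fails could be,

# test_failure.py: all three tests fail on purpose
def test_one():
    assert 1 == 2

def test_two():
    assert 2 == 3

def test_three():
    assert 3 == 4

With --maxfail=2, the run stops after the second failure, so test_three is never executed.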

Reporting

The first step is to install the pytest-html plugin. Run the following command in the command prompt:

pip install pytest-html

Then execute this command,

pytest --html=report.html
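By default, pytest-html stores CSS and other assets separately from the report; to bundle everything into a single shareable file, the plugin also supports:

pytest --html=report.html --self-contained-html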

Once we execute this command, the report.html file can be seen in our project tree. This is how an HTML report is created for our tests; an XML report can be generated in a similar way with pytest --junitxml=report.xml.

Thanks for reading…
