13.1. Testing#

13.1.1. Pytest on steroids: Parallelize your test with pytest-xdist#

Does your Python test suite take a very long time to run?

With pytest-xdist installed, you can run your tests in parallel.

Specify the number of CPUs you want to use with --numprocesses.

This can significantly speed up your test runs.

!pip install pytest-xdist
!pytest --numprocesses 4
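
As a sketch (file and function names are made up for illustration), tests like the following are good candidates for parallelization because they share no state; pytest-xdist also accepts the shorthand `-n` and the value `auto` to use all available CPUs.

```python
# test_slow.py -- illustrative tests that are independent of each other,
# so pytest-xdist can safely run them on separate workers
import time

def expensive_transform(record):
    time.sleep(0.1)  # stand-in for a slow operation (hypothetical)
    return record * 2

def test_transform_one():
    assert expensive_transform(1) == 2

def test_transform_two():
    assert expensive_transform(2) == 4
```

Running `pytest -n auto` distributes these tests across workers; because they are independent, the result is the same as a serial run, only faster.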

13.1.2. Shuffle the order of your tests with pytest-randomly#

Sometimes it is useful to run your test cases in a random order to check whether any of them depend on the order of execution.

To do that in Python, try pytest-randomly.

pytest-randomly is a pytest plugin to randomly shuffle the order of your tests.

It can be helpful in addition to other test strategies.

  • The output tells you which random seed was used.

  • If you want to use that seed again, use the flag --randomly-seed.

!pip install pytest-randomly
!pytest 
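
As an illustration (the names are made up), the following pair of tests has a hidden order dependency that a shuffled run would expose:

```python
# Module-level state shared between tests -- the root of the problem
cache = {}

def test_writes_cache():
    cache["user"] = "alice"
    assert cache["user"] == "alice"

def test_reads_cache():
    # Passes only if test_writes_cache ran first; once pytest-randomly
    # shuffles this test to the front, the cache is empty and it fails.
    assert cache.get("user") == "alice"
```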

13.1.3. Get Test coverage with pytest-cov#

Do you want to measure the test coverage of your code in Python?

Try pytest-cov.

pytest-cov is a pytest plugin producing test coverage reports for you.

You can see how many statements in your code are covered in a nicely generated report.

Below you can see an example of how to use pytest-cov.

  • With the --cov flag, you set the path to the module or package you want to measure coverage for.

  • You can also specify the minimum required test coverage percentage using the --cov-fail-under flag.

!pip install pytest-cov
!pytest --cov=src --cov-fail-under=90
-------------------- coverage: ... ---------------------
Name                 Stmts   Miss  Cover
----------------------------------------
src/__init__             2      0   100%
src/module1.py         257     13    94%
src/module2.py         100      0   100%
----------------------------------------
TOTAL                  359     13    97%
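
Instead of typing the flags on every run, you can also put them in your pytest configuration; a minimal sketch using pytest's `addopts` option (paths and threshold are illustrative):

```ini
# pytest.ini -- applies these flags to every pytest invocation
[pytest]
addopts = --cov=src --cov-fail-under=90
```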

13.1.4. Test your plots with pytest-mpl#

How do you test your plots in Python?

Nowadays, you can test everything.

Functions, classes, websites, …

But how do you make sure your plots are generated correctly?

Try pytest-mpl!

pytest-mpl is a Pytest plugin for testing plots created using Matplotlib.

It allows you to compare the output of your Matplotlib plots with expected results by automatically saving them as images and comparing them with pre-saved “baseline” images using image diffing techniques.

Below, you can see an example of how to use pytest-mpl.

  • You have to mark the function where you want to compare images with @pytest.mark.mpl_image_compare.

  • Provide the --mpl-generate-path option with the name of the directory where the baseline images should be saved.

  • To test if the images are the same, provide the --mpl option.

!pip install pytest-mpl
# testfile.py
import pytest
import matplotlib.pyplot as plt

@pytest.mark.mpl_image_compare()
def test_plotting_line():
    fig = plt.figure()
    plt.plot([1,2,3,4,5,6,7,8])
    plt.xlabel('X Axis')
    plt.ylabel('Y Axis')
    
    return fig
!pytest -k test_plotting_line --mpl-generate-path=baseline
!pytest --mpl

13.1.5. Instantly show errors in your Test Suite#

When you run your tests with pytest, it will run all test cases and show you the results at the end.

But you don’t want to wait until the end to see if some tests failed. You want to see the failed tests instantly.

pytest-instafail is a plugin which shows failures immediately instead of waiting until the end.

See below for an example of how to install and use it.

!pip install pytest-instafail
!pytest --instafail

13.1.6. Limit pytest’s output to a minimum#

Do you want to reduce pytest’s chatty output?

Try pytest-tldr.

pytest-tldr is a pytest plugin to limit the output to the most important things.

A nice plugin if you don’t want to be annoyed by pytest’s default output.

!pip install pytest-tldr
!pytest -v # -v for detailed but clean output

13.1.7. Property-based Testing with hypothesis#

Looking for a smarter way to test your Python code?

Use hypothesis.

With hypothesis, you define properties your code should uphold, and it generates diverse test cases, uncovering edge cases and unexpected bugs.

I encourage you to look into their documentation, since it can really upgrade your testing game.

!pip install hypothesis
from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_addition_commutative(a, b):
    assert a + b == b + a
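
To see what hypothesis automates, here is a hand-rolled sketch of the same idea using only the standard library; hypothesis goes much further by generating edge cases deliberately and shrinking failing inputs to minimal examples:

```python
import random

def check_property(prop, n_cases=200):
    # Naive property-based check: try many random integer pairs
    # and assert the property holds for each of them.
    for _ in range(n_cases):
        a = random.randint(-10**6, 10**6)
        b = random.randint(-10**6, 10**6)
        assert prop(a, b), f"property failed for a={a}, b={b}"

# Addition is commutative, so this passes for every generated pair
check_property(lambda a, b: a + b == b + a)
```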

13.1.8. Mocking Dependencies with pytest-mock#

Testing is an essential part of software projects, especially unit testing, where you test the smallest piece of code that can be isolated.

Unit tests should be independent, and fast & cheap to execute.

But, what if you have some dependencies like API calls or interactions with databases and systems?

Here’s where mocking comes into play.

Mocking allows you to replace dependencies and real objects with fake ones which mimic the real behavior.

So, you don’t have to rely on the availability of your API, or ask for permission to interact with a database, but you can test your functions isolated and independently.

In Python, you can perform mocking with pytest-mock, a wrapper around the built-in mock functionality of Python.

See the example below, where we mock the file removal functionality. We can test it without deleting a file from the disk.

!pip install pytest-mock
import os

class UnixFS:
    @staticmethod
    def rm(filename):
        os.remove(filename)

def test_unix_fs(mocker):
    mocker.patch('os.remove')
    UnixFS.rm('file')
    os.remove.assert_called_once_with('file')
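
Since pytest-mock wraps Python's built-in `unittest.mock`, the same test can be sketched with the standard library alone; `mocker.patch` is roughly equivalent to this context manager, with the added convenience of undoing the patch automatically at the end of the test:

```python
import os
from unittest import mock

class UnixFS:
    @staticmethod
    def rm(filename):
        os.remove(filename)

def test_unix_fs_stdlib():
    with mock.patch("os.remove") as fake_remove:
        UnixFS.rm("file")  # no file is actually deleted
        fake_remove.assert_called_once_with("file")

test_unix_fs_stdlib()
```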

13.1.9. Freeze Datetime Module For Testing with freezegun#

When you want to test functions that use datetime, consider using freezegun for Python.

freezegun mocks the datetime module, which makes testing deterministic.

See below how we can specify a date and freeze the return value of datetime.datetime.now().

!pip install freezegun
from freezegun import freeze_time
import datetime

@freeze_time("2015-02-20")
def test():
    assert datetime.datetime.now() == datetime.datetime(2015, 2, 20)
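
freezegun handles this transparently; as a standard-library sketch of the same idea (the function under test is illustrative), you can patch `datetime.datetime` with a mock whose `now()` returns a fixed value:

```python
import datetime
from unittest import mock

def is_party_year():
    # code under test that reads the current time (illustrative)
    return datetime.datetime.now().year == 2015

def test_is_party_year():
    frozen = datetime.datetime(2015, 2, 20)
    with mock.patch("datetime.datetime") as fake_datetime:
        fake_datetime.now.return_value = frozen
        assert is_party_year()

test_is_party_year()
```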

13.1.10. Mock AWS Services with moto#

If you have worked with AWS services and Python, you know how difficult testing your code can be.

With moto, you can easily mock out AWS services and write your tests without headaches.

Note: Not all services are covered, so check out the implementation coverage in their repository.

!pip install moto boto3
import boto3
from moto import mock_aws

class MyModel:
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def save(self):
        s3 = boto3.client("s3", region_name="us-east-1")
        s3.put_object(Bucket="mybucket", Key=self.name, Body=self.value)

@mock_aws
def test_my_model_save():
    conn = boto3.resource("s3", region_name="us-east-1")
    conn.create_bucket(Bucket="mybucket")
    model_instance = MyModel("test", "testtest")
    model_instance.save()
    body = conn.Object("mybucket", "test").get()["Body"].read().decode("utf-8")
    assert body == "testtest"

13.1.11. Clean Output Diff with pytest-clarity#

Do you want to improve pytest's output with a single plugin?

Use pytest-clarity.

It brings a coloured diff output which is much cleaner than pytest’s standard output.

!pip install pytest-clarity
!pytest -vv --diff-width=60

13.1.12. Using Environment Variables with pytest-env#

How to use environment variables when testing your code?

It’s important to not mix up your test environment and local environment, especially when you want to define environment variables specifically for your tests.

For this case, use pytest-env.

pytest-env is a pytest plugin to define your environment variables in a pytest.ini file.

Those variables will be isolated from the local environment, perfect for writing and running tests.

!pip install pytest-env
# pytest.ini
[pytest]
env =
    API_KEY=example-key
    API_ENDPOINT=https://example.endpoint.net
    
# test_example.py
import os

def test_load_env_vars():
    assert os.environ["API_KEY"] == "example-key"
    assert os.environ["API_ENDPOINT"] == "https://example.endpoint.net"
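
For one-off cases without a plugin, a single test can also patch the environment itself; a stdlib sketch using `unittest.mock.patch.dict` (the variable name is made up), which restores `os.environ` when the block exits:

```python
import os
from unittest import mock

def test_env_isolated():
    with mock.patch.dict(os.environ, {"EXAMPLE_API_KEY": "example-key"}):
        assert os.environ["EXAMPLE_API_KEY"] == "example-key"
    # outside the block the variable is gone again
    assert "EXAMPLE_API_KEY" not in os.environ

test_env_isolated()
```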