Python Testing with pytest: Complete Guide to Fixtures, Mocks & Best Practices

February 12, 2026 · 20 min read

Testing is the difference between code that works today and code that keeps working tomorrow. A good test suite catches regressions before your users do, documents expected behavior with concrete examples, and gives you the confidence to refactor aggressively. In the Python ecosystem, pytest has become the dominant testing framework thanks to its plain-function syntax, powerful fixture system, rich plugin ecosystem, and expressive failure messages.

This guide covers everything from pytest fundamentals through fixtures, parametrize, markers, mocking, coverage, async testing, database testing, CLI testing, CI/CD integration, and best practices. Every section includes working code examples.

⚙ Related resources: Set up isolated environments with our Python Virtual Environments Guide, review data structures in the Python Data Structures Guide, and automate your tests with the GitHub Actions CI/CD Guide.

Table of Contents

  1. Why Testing Matters and Test Types
  2. pytest Basics: Installation and Running Tests
  3. Writing Test Functions
  4. Fixtures: Setup, Teardown, and conftest.py
  5. Parametrize: Data-Driven Tests
  6. Markers: skip, xfail, and Custom Markers
  7. Mocking with unittest.mock
  8. Testing Exceptions and Temporary Files
  9. Test Doubles: Stubs, Mocks, Fakes, and Spies
  10. Project Structure and Coverage
  11. Testing Async Code
  12. Testing CLI Applications
  13. Testing with Databases
  14. CI/CD Integration
  15. Common pytest Plugins
  16. Best Practices
  17. Frequently Asked Questions

1. Why Testing Matters and Test Types

Automated tests serve as a safety net for your codebase. Without them, every change is a gamble. With a comprehensive test suite, you get immediate feedback on whether your changes are safe.

Tests exist on a spectrum. Unit tests exercise a single function or class in isolation with mocked dependencies; they run in milliseconds. Integration tests verify how components work together (API with database, service-layer coordination); they are slower but catch interface issues. End-to-end tests drive the full application from the user's perspective; they are the slowest but verify the whole system works. The testing pyramid suggests roughly 70% unit, 20% integration, 10% E2E.

2. pytest Basics: Installation and Running Tests

# Install pytest
pip install pytest

# Run all tests in current directory
pytest

# Verbose output
pytest -v

# Run specific file, class, or function
pytest tests/test_auth.py
pytest tests/test_auth.py::TestLogin
pytest tests/test_auth.py::test_login_success

# Keyword expression: run tests matching pattern
pytest -k "login and not slow"

# Show print output (disable capture)
pytest -s

# Stop on first failure
pytest -x

# Rerun only last-failed tests
pytest --lf

pytest discovers tests automatically: files named test_*.py or *_test.py, functions prefixed with test_, and classes prefixed with Test (without __init__). Follow these conventions and pytest finds your tests with zero configuration.
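
The discovery rules above can be sketched as a small predicate. This is an illustrative sketch only, not pytest's actual matcher (which is configurable via the `python_files`, `python_classes`, and `python_functions` options):

```python
import re

# Sketch of pytest's default discovery conventions, as described above.
def is_test_file(filename):
    """Files named test_*.py or *_test.py are collected."""
    return re.fullmatch(r"(test_.*|.*_test)\.py", filename) is not None

def is_test_function(name):
    """Functions prefixed with test_ are collected."""
    return name.startswith("test_")

def is_test_class(name):
    """Classes prefixed with Test (and without __init__) are collected."""
    return name.startswith("Test")
```

If your files follow these patterns, no configuration is needed; if you prefer different conventions, override them in `[tool.pytest.ini_options]`.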

3. Writing Test Functions

pytest uses plain assert statements. No special assertion methods needed — just write natural Python expressions. When an assertion fails, pytest uses introspection to show exactly what went wrong.

# calculator.py
def add(a, b):
    return a + b

def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def is_palindrome(text):
    cleaned = text.lower().replace(" ", "")
    return cleaned == cleaned[::-1]


# test_calculator.py
import pytest
from calculator import add, divide, is_palindrome

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-1, -1) == -2

def test_add_floats():
    assert add(0.1, 0.2) == pytest.approx(0.3)

def test_divide_normal():
    assert divide(10, 2) == 5.0

def test_is_palindrome_true():
    assert is_palindrome("racecar") is True

def test_is_palindrome_with_spaces():
    assert is_palindrome("nurses run") is True

def test_is_palindrome_false():
    assert is_palindrome("hello") is False

4. Fixtures: Setup, Teardown, and conftest.py

Fixtures are pytest's most powerful feature. They replace setup/teardown methods with a flexible dependency injection system.

import os
import pytest

# Basic fixture: returns a value
@pytest.fixture
def sample_user():
    return {"name": "Alice", "email": "alice@example.com", "role": "admin"}

def test_user_has_name(sample_user):
    assert sample_user["name"] == "Alice"

# Fixture with teardown using yield
@pytest.fixture
def db_connection():
    conn = create_database_connection()  # Setup
    yield conn
    conn.close()  # Teardown (runs after test completes)

# Fixture scope: controls how often the fixture runs
@pytest.fixture(scope="session")     # once per entire test session
def app_config():
    return load_config("test_config.yaml")

@pytest.fixture(scope="module")      # once per test module
def api_client(app_config):
    return TestClient(app_config)

@pytest.fixture(scope="function")    # once per test (default)
def fresh_data():
    return {"items": []}

# Autouse: automatically applied to all tests in scope
@pytest.fixture(autouse=True)
def reset_environment():
    os.environ["APP_ENV"] = "test"
    yield
    del os.environ["APP_ENV"]

Place fixtures in conftest.py and they become available to all tests in that directory and subdirectories. No imports needed — pytest discovers conftest.py automatically.

# tests/conftest.py
import pytest

@pytest.fixture
def auth_headers():
    token = generate_test_token(user_id=1)
    return {"Authorization": f"Bearer {token}"}

@pytest.fixture
def sample_products():
    return [
        {"id": 1, "name": "Widget", "price": 9.99},
        {"id": 2, "name": "Gadget", "price": 24.99},
    ]

5. Parametrize: Data-Driven Tests

@pytest.mark.parametrize runs the same test function with different inputs, eliminating copy-paste duplication.

import pytest
from calculator import add, is_palindrome

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (100, 200, 300),
    (0.1, 0.2, pytest.approx(0.3)),
])
def test_add(a, b, expected):
    assert add(a, b) == expected

# With IDs for readable test output
@pytest.mark.parametrize("text, expected", [
    ("racecar", True),
    ("hello", False),
    ("A man a plan a canal Panama", True),
    ("", True),
], ids=["palindrome", "not_palindrome", "with_spaces", "empty"])
def test_is_palindrome(text, expected):
    assert is_palindrome(text) == expected

# Multiple decorators create the cross-product of cases (runs 4 times)
@pytest.mark.parametrize("x", [1, 2])
@pytest.mark.parametrize("y", [10, 20])
def test_multiplication_commutes(x, y):
    assert x * y == y * x

# Per-case markers with pytest.param
@pytest.mark.parametrize("email, valid", [
    pytest.param("user@example.com", True, id="valid"),
    pytest.param("no-at-sign", False, id="missing_at"),
    pytest.param("user@.com", False, marks=pytest.mark.xfail, id="known_bug"),
])
def test_validate_email(email, valid):
    assert validate_email(email) == valid

6. Markers: skip, xfail, and Custom Markers

import pytest
import sys

# Skip: unconditionally skip a test
@pytest.mark.skip(reason="Feature not implemented yet")
def test_future_feature():
    pass

# Skipif: skip based on a condition
@pytest.mark.skipif(sys.platform == "win32", reason="Unix-only test")
def test_unix_permissions():
    assert check_file_permissions("/tmp/test") == 0o755

@pytest.mark.skipif(sys.version_info < (3, 11), reason="Requires Python 3.11+")
def test_exception_groups():
    pass

# Xfail: expect a test to fail (known bug)
@pytest.mark.xfail(reason="Bug #1234: floating-point rounding")
def test_precise_division():
    assert divide(1, 10) + divide(2, 10) == 0.3  # 0.1 + 0.2 != 0.3 in floats

# Custom markers (register in pyproject.toml)
@pytest.mark.slow
def test_large_dataset_processing():
    process_million_rows()

@pytest.mark.integration
def test_api_endpoint():
    response = client.get("/api/users")
    assert response.status_code == 200

Register custom markers in pyproject.toml to avoid warnings, then filter with pytest -m "not slow" or pytest -m integration:

[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow",
    "integration: marks integration tests",
]

7. Mocking with unittest.mock

Mocking replaces real dependencies with controlled substitutes. Python's unittest.mock integrates seamlessly with pytest.

from unittest.mock import Mock, patch, MagicMock
import pytest

# --- Basic Mock ---
mock_db = Mock()
mock_db.query.return_value = [{"id": 1, "name": "Alice"}]
result = mock_db.query("SELECT * FROM users")
mock_db.query.assert_called_once_with("SELECT * FROM users")

# --- patch: replace real objects during testing ---
# service.py
import requests
def get_user_data(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}")
    response.raise_for_status()
    return response.json()

# test_service.py — patch where it is looked up, not where defined
@patch("service.requests.get")
def test_get_user_data(mock_get):
    mock_response = Mock()
    mock_response.json.return_value = {"id": 1, "name": "Alice"}
    mock_response.raise_for_status.return_value = None
    mock_get.return_value = mock_response

    result = get_user_data(1)
    assert result["name"] == "Alice"
    mock_get.assert_called_once_with("https://api.example.com/users/1")

# --- side_effect for exceptions ---
@patch("service.requests.get")
def test_api_failure(mock_get):
    mock_get.side_effect = requests.ConnectionError("Network unreachable")
    with pytest.raises(requests.ConnectionError):
        get_user_data(1)

# --- pytest-mock: cleaner syntax with mocker fixture ---
def test_with_mocker(mocker):
    mock_get = mocker.patch("service.requests.get")
    mock_get.return_value.json.return_value = {"id": 1}
    result = get_user_data(1)
    assert result["id"] == 1

8. Testing Exceptions and Temporary Files

pytest.raises verifies that code raises the expected exception:

import json
import pytest
from calculator import divide

def test_divide_by_zero():
    with pytest.raises(ValueError):
        divide(10, 0)

def test_divide_by_zero_message():
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)

def test_divide_by_zero_details():
    with pytest.raises(ValueError) as exc_info:
        divide(10, 0)
    assert "zero" in str(exc_info.value)

pytest provides tmp_path (function-scoped) and tmp_path_factory (session-scoped) for temporary files:

def test_write_and_read_json(tmp_path):
    data = {"users": [{"name": "Alice"}, {"name": "Bob"}]}
    file_path = tmp_path / "data.json"
    file_path.write_text(json.dumps(data))

    loaded = json.loads(file_path.read_text())
    assert loaded["users"][0]["name"] == "Alice"

def test_project_structure(tmp_path):
    src = tmp_path / "src"
    src.mkdir()
    (src / "main.py").write_text("print('hello')")
    assert (src / "main.py").read_text() == "print('hello')"

@pytest.fixture(scope="session")
def shared_data(tmp_path_factory):
    d = tmp_path_factory.mktemp("data")
    (d / "config.json").write_text('{"env": "test"}')
    return d

9. Test Doubles: Stubs, Mocks, Fakes, and Spies

The term "mock" is used loosely, but there are distinct types of test doubles:

import pytest
from unittest.mock import Mock

# Stub: just returns data
def test_with_stub():
    price_service = Mock()
    price_service.get_price.return_value = 29.99
    total = calculate_order_total(price_service, quantity=3)
    assert total == pytest.approx(89.97)  # approx avoids float-equality pitfalls

# Mock: verifies interactions
def test_notification_sent():
    notifier = Mock()
    process_order(order_id=42, notifier=notifier)
    notifier.send_email.assert_called_once_with(
        to="customer@example.com", subject="Order #42 confirmed")

# Fake: in-memory implementation
class FakeUserRepo:
    def __init__(self):
        self.users = {}
    def save(self, user):
        self.users[user.id] = user
    def find(self, user_id):
        return self.users.get(user_id)

def test_with_fake():
    repo = FakeUserRepo()
    service = UserService(repo)
    service.register(User(id=1, name="Alice"))
    assert repo.find(1).name == "Alice"

# Spy: wraps real object (pytest-mock)
def test_with_spy(mocker):
    real_service = EmailService()
    spy = mocker.spy(real_service, "send")
    process_order(order_id=42, notifier=real_service)
    spy.assert_called_once()  # real send was also invoked

10. Project Structure and Coverage

# Recommended layout
my_project/
    src/my_package/
        __init__.py
        auth.py
        services/
            user_service.py
    tests/
        conftest.py                  # shared fixtures
        test_auth.py
        services/
            conftest.py              # service-specific fixtures
            test_user_service.py
        integration/
            test_api_endpoints.py
    pyproject.toml

Group tests by module, use conftest.py at each level for shared fixtures, separate unit and integration tests, and keep test filenames matching source files.
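
With the layout above, a minimal pytest configuration keeps collection predictable. Both options shown are standard `[tool.pytest.ini_options]` settings:

```toml
# pyproject.toml
[tool.pytest.ini_options]
testpaths = ["tests"]   # limit collection to the tests/ directory
addopts = "-ra"         # show a short summary of skips and xfails after each run
```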

Coverage Reporting with pytest-cov

pip install pytest-cov

# Terminal report with missed lines
pytest --cov=src/my_package --cov-report=term-missing

# HTML report
pytest --cov=src/my_package --cov-report=html

# Fail if coverage drops below threshold
pytest --cov=src/my_package --cov-fail-under=80

# Multiple formats for CI
pytest --cov=src/my_package --cov-report=term-missing --cov-report=xml

# pyproject.toml
[tool.coverage.run]
source = ["src/my_package"]
omit = ["*/migrations/*", "*/tests/*"]

[tool.coverage.report]
exclude_lines = ["pragma: no cover", "if __name__ == .__main__.", "if TYPE_CHECKING:"]

Aim for 80-90% coverage on core business logic. Do not chase 100% — focus coverage on code with complex logic and business rules.
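
The exclude_lines entries above mean any line tagged `pragma: no cover` is left out of the report. A hypothetical helper showing where that is appropriate:

```python
def parse_port(value, default=8080):
    """Parse a port number, falling back to a default on bad input."""
    try:
        return int(value)
    except (TypeError, ValueError):  # pragma: no cover  (defensive branch)
        return default
```

Use the pragma sparingly: it belongs on genuinely defensive or unreachable branches, not on logic you simply have not tested yet.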

11. Testing Async Code

# pip install pytest-asyncio
import pytest
from unittest.mock import AsyncMock

@pytest.mark.asyncio
async def test_fetch_user():
    mock_client = AsyncMock()
    mock_client.get.return_value.json.return_value = {"id": 1, "name": "Alice"}

    result = await fetch_user(mock_client, 1)
    assert result["name"] == "Alice"
    mock_client.get.assert_awaited_once_with("/users/1")

@pytest.mark.asyncio
async def test_process_batch():
    results = await process_batch([1, 2, 3])
    assert len(results) == 3

# Async fixtures (in pytest-asyncio strict mode, decorate with
# @pytest_asyncio.fixture instead of @pytest.fixture)
@pytest.fixture
async def async_db():
    db = await create_async_connection()
    yield db
    await db.close()

Set asyncio_mode = "auto" in pyproject.toml to avoid marking every async test individually.
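
The auto-mode setting looks like this:

```toml
# pyproject.toml
[tool.pytest.ini_options]
asyncio_mode = "auto"   # every async def test_* is collected without a marker
```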

12. Testing CLI Applications

# cli.py (using Click)
import click

@click.command()
@click.argument("name")
@click.option("--greeting", default="Hello")
def greet(name, greeting):
    click.echo(f"{greeting}, {name}!")

# test_cli.py
from click.testing import CliRunner
from cli import greet

def test_greet_default():
    runner = CliRunner()
    result = runner.invoke(greet, ["Alice"])
    assert result.exit_code == 0
    assert "Hello, Alice!" in result.output

def test_greet_custom():
    result = CliRunner().invoke(greet, ["Bob", "--greeting", "Hi"])
    assert result.exit_code == 0
    assert "Hi, Bob!" in result.output

def test_greet_missing_argument():
    result = CliRunner().invoke(greet, [])
    assert result.exit_code != 0

def test_cli_with_file():
    # assumes cli.py also defines a `process_file` command
    runner = CliRunner()
    with runner.isolated_filesystem():
        with open("data.txt", "w") as f:
            f.write("test data\n")
        result = runner.invoke(process_file, ["data.txt"])
        assert result.exit_code == 0

13. Testing with Databases

Database tests need careful fixture management for isolation. The key pattern: begin a transaction before each test, yield the session, roll back afterward.

import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

@pytest.fixture(scope="session")
def engine():
    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)  # Base: your SQLAlchemy declarative base
    yield engine
    engine.dispose()

@pytest.fixture
def db_session(engine):
    """Transactional session that rolls back after each test."""
    connection = engine.connect()
    transaction = connection.begin()
    session = sessionmaker(bind=connection)()
    yield session
    session.close()
    transaction.rollback()
    connection.close()

@pytest.fixture
def seed_users(db_session):
    users = [User(name="Alice", email="alice@example.com"),
             User(name="Bob", email="bob@example.com")]
    db_session.add_all(users)
    db_session.flush()
    return users

def test_find_user(db_session, seed_users):
    repo = UserRepository(db_session)
    user = repo.find_by_email("alice@example.com")
    assert user.name == "Alice"

def test_create_user(db_session):
    repo = UserRepository(db_session)
    repo.create(name="Charlie", email="charlie@example.com")
    assert db_session.query(User).count() == 1

14. CI/CD Integration

GitHub Actions

# .github/workflows/test.yml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e ".[test]"
      - run: pytest --cov=src --cov-report=xml -v
      - uses: codecov/codecov-action@v4
        with:
          files: coverage.xml

GitLab CI

# .gitlab-ci.yml
test:
  image: python:3.12
  script:
    - pip install -e ".[test]"
    - pytest --cov=src --cov-report=xml --junitxml=report.xml -v
  artifacts:
    reports:
      junit: report.xml

For a deeper guide on CI/CD pipelines, see our GitHub Actions CI/CD Complete Guide.

15. Common pytest Plugins

# Parallel execution across CPU cores
pip install pytest-xdist
pytest -n auto              # auto-detect cores

# Cleaner mocking with mocker fixture
pip install pytest-mock

# Timeout for hanging tests
pip install pytest-timeout
pytest --timeout=30         # 30 seconds per test

# Randomize test order (finds order-dependent bugs)
pip install pytest-randomly

# Repeat tests to find flaky failures
pip install pytest-repeat
pytest --count=5            # run each test 5 times

# Better diffs for large data structures
pip install pytest-clarity

# Environment variable management
pip install pytest-env

pytest-xdist is particularly valuable for large suites. It distributes tests across CPU cores, often cutting run time by 50-80%. Tests must be isolated (no shared mutable state) for parallel execution to work.
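
What "isolated" means in practice: a test must not depend on state that another test mutated. A minimal sketch of the anti-pattern and the fix (test names are hypothetical):

```python
# Anti-pattern: module-level mutable state is shared across tests, so the
# outcome depends on execution order -- pytest-xdist and pytest-randomly
# expose this as flaky failures.
shared_cache = {}

def test_login_bad():
    shared_cache["user"] = "alice"             # leaks into every later test
    assert shared_cache == {"user": "alice"}   # fails if anything wrote first

# Fix: build fresh state inside each test (or receive it from a
# function-scoped fixture, which pytest recreates per test).
def make_cache():
    return {}

def test_login_good():
    cache = make_cache()                  # private to this test
    cache["user"] = "alice"
    assert cache == {"user": "alice"}     # independent of run order
```

The fixture-based version of `make_cache` is exactly the `fresh_data` pattern from section 4.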

16. Best Practices

Follow the AAA Pattern

Structure every test: Arrange (set up preconditions), Act (perform the action), Assert (verify the result).

def test_apply_discount():
    # Arrange
    order = Order(items=[Item("Widget", 100), Item("Gadget", 50)])
    discount = PercentageDiscount(10)

    # Act
    result = discount.apply(order)

    # Assert
    assert result.total == 135.0
    assert result.discount_applied is True

Use Descriptive Test Names

# Bad
def test_user():
def test_login1():

# Good: describe scenario and expected outcome
def test_login_with_valid_credentials_returns_token():
def test_login_with_wrong_password_returns_401():
def test_expired_token_is_rejected():
def test_empty_cart_has_zero_total():

Key Principles

Keep tests fast, isolated, and deterministic: no shared mutable state, no dependence on execution order, no real network calls in unit tests. Test one behavior per test so a failure points directly at the cause. Test behavior through public interfaces rather than implementation details, so refactoring does not break the suite. Treat test code like production code: name things well, factor shared setup into fixtures, and use parametrize instead of copy-paste.

Frequently Asked Questions

What is the difference between pytest and unittest?

unittest is Python's built-in framework using class-based xUnit patterns with self.assertEqual(). pytest uses plain functions and assert, making tests shorter and more readable. pytest provides fixtures with dependency injection, parametrize for data-driven tests, and a rich plugin ecosystem. pytest runs unittest-style tests natively, so migration is incremental. Most teams prefer pytest for new projects.
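
The stylistic difference in one self-contained snippet; pytest collects and runs both forms side by side:

```python
import unittest

# unittest style: class-based, special assertion methods
class TestAddUnittest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(2 + 3, 5)

# pytest style: plain function, plain assert -- pytest also runs the
# unittest.TestCase above unchanged, so migration can be incremental
def test_add():
    assert 2 + 3 == 5
```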

How do pytest fixtures work?

Decorate a function with @pytest.fixture. When a test declares a parameter with the same name, pytest calls the fixture and passes its return value. Use yield for teardown. Fixtures have scope (function, class, module, session). Place shared fixtures in conftest.py for automatic discovery. Fixtures can depend on other fixtures, and pytest resolves the graph automatically.

How do I mock external dependencies in pytest?

Use unittest.mock.patch or pytest-mock. Patch where the object is looked up, not where defined: if module_a imports requests, patch 'module_a.requests.get'. Set return values with mock.return_value and errors with mock.side_effect. The pytest-mock plugin provides a mocker fixture with automatic cleanup after each test.
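
A self-contained sketch of the same mechanism using `patch.object` as a context manager (`Clock` and `greeting` are made-up names for illustration):

```python
from unittest.mock import patch

class Clock:
    def now(self):
        return "2026-02-12T10:00:00"   # stands in for a real time source

def greeting(clock):
    return f"It is {clock.now()}"

def test_greeting_with_patched_clock():
    clock = Clock()
    # patch.object swaps the attribute only inside the with-block,
    # then restores the original automatically on exit
    with patch.object(clock, "now", return_value="noon"):
        assert greeting(clock) == "It is noon"
    assert greeting(clock) == "It is 2026-02-12T10:00:00"  # restored
```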

How do I run only specific tests in pytest?

By file: pytest tests/test_auth.py. By function: pytest tests/test_auth.py::test_login. By keyword: pytest -k "login and not slow". By marker: pytest -m "not integration". Rerun failures: pytest --lf. These options combine for precise control.

What is the AAA pattern in testing?

Arrange-Act-Assert. Arrange sets up preconditions. Act performs the action. Assert verifies the outcome. Clear AAA sections make tests readable and debuggable. pytest fixtures handle cleanup automatically, so a fourth "cleanup" step is rarely needed.

How do I measure test coverage in Python?

Install pytest-cov and run pytest --cov=your_package. Add --cov-report=html for interactive reports, --cov-report=term-missing for terminal output. Set --cov-fail-under=80 to enforce minimums in CI. Aim for 80-90% on core logic. Coverage measures line execution, not assertion quality.

Conclusion

A well-structured pytest test suite is one of the highest-leverage investments you can make in a Python project. The patterns covered here — fixtures for clean setup, parametrize for data-driven tests, markers for selection, mocking for isolation, coverage for visibility, and the AAA pattern for readability — give you a complete toolkit for testing applications of any size.

Start small: write tests for the next function you build, using plain assert statements and a fixture or two. Once that feels natural, add parametrize for edge cases, integrate coverage into your CI pipeline, and gradually mock external dependencies. The most important thing is the habit of writing tests alongside your code and running them before every commit.

⚙ Keep building: Set up virtual environments with our Python Virtual Environments Guide, automate your test pipeline with the GitHub Actions CI/CD Guide, and explore data with the Python Pandas Complete Guide.

Related Resources


Python Virtual Environments Guide
Isolate dependencies with venv, pip, and virtualenv
Python Data Structures Guide
Lists, dicts, sets, tuples, and when to use each
Python Pandas Complete Guide
Data analysis with DataFrames, groupby, merging, and more
GitHub Actions CI/CD Guide
Automate testing, building, and deploying with workflows
Python Cheat Sheet
Quick reference for Python syntax and built-in methods
JSON Formatter
Format, validate, and beautify JSON test data