# Testing Guide

## Overview

This document provides a comprehensive guide to testing the Flask application with pytest. The testing infrastructure includes unit tests, integration tests, a CI/CD pipeline, and pre-commit hooks.
## Table of Contents
- Installation
- Running Tests
- Test Structure
- Writing Tests
- Fixtures
- Coverage
- CI/CD Pipeline
- Pre-commit Hooks
- Best Practices
## Installation

### Install Test Dependencies

```bash
cd backend
pip install -r requirements/base.txt
```

The base requirements include:

- `pytest==7.4.3` - Testing framework
- `pytest-flask==1.3.0` - Flask integration
- `pytest-cov==4.1.0` - Coverage reporting
- `pytest-mock==3.12.0` - Mocking utilities
- `factory-boy==3.3.0` - Test data factories
- `faker==20.1.0` - Fake data generation
## Running Tests

### Run All Tests

```bash
cd backend
pytest
```

### Run with Verbose Output

```bash
pytest -v
```

### Run with Coverage Report

```bash
pytest --cov=app --cov-report=html --cov-report=term
```

### Run Specific Test Files

```bash
# Run all model tests
pytest tests/test_models.py

# Run all route tests
pytest tests/test_routes.py

# Run all schema tests
pytest tests/test_schemas.py
```

### Run by Test Name

```bash
pytest -k "test_user_creation"
pytest -k "test_login"
```

### Run by Markers

```bash
# Run only unit tests
pytest -m unit

# Run only integration tests
pytest -m integration

# Run only authentication tests
pytest -m auth

# Run only product tests
pytest -m product

# Run only order tests
pytest -m order
```
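Custom markers such as `unit` and `integration` have to be registered in `pytest.ini`, otherwise pytest warns about unknown marks. A plausible registration fragment (the marker descriptions here are illustrative, not copied from this project's config):

```ini
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: tests that touch the database or other services
    auth: authentication and authorization tests
    product: product endpoint tests
    order: order endpoint tests
    slow: long-running tests, excluded from the default run
```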
### Run Tests in Parallel (faster)

Install pytest-xdist:

```bash
pip install pytest-xdist

pytest -n auto  # Use all available CPUs
```
## Test Structure

```
backend/
├── tests/
│   ├── __init__.py
│   ├── conftest.py        # Global fixtures and configuration
│   ├── test_models.py     # Model tests
│   ├── test_routes.py     # Route/API tests
│   └── test_schemas.py    # Pydantic schema tests
├── pytest.ini             # Pytest configuration
├── .coveragerc            # Coverage configuration
└── app/
    ├── __init__.py
    ├── models/            # Database models
    ├── routes/            # API routes
    ├── schemas/           # Pydantic schemas
    └── ...
```
## Writing Tests

### Test File Structure

```python
import pytest

from app import db
from app.models import User


class TestUserModel:
    """Test User model"""

    @pytest.mark.unit
    def test_user_creation(self, db_session):
        """Test creating a user"""
        user = User(
            email='test@example.com',
            username='testuser'
        )
        user.set_password('password123')
        db_session.add(user)
        db_session.commit()

        assert user.id is not None
        assert user.email == 'test@example.com'
```
### Test API Routes

```python
def test_get_products(client, products):
    """Test getting all products"""
    response = client.get('/api/products')

    assert response.status_code == 200
    data = response.get_json()
    assert len(data) == 5


def test_create_product(client, admin_headers):
    """Test creating a product"""
    response = client.post('/api/products',
                           headers=admin_headers,
                           json={
                               'name': 'New Product',
                               'price': 29.99
                           })

    assert response.status_code == 201
    data = response.get_json()
    assert data['name'] == 'New Product'
```
### Parameterized Tests

```python
@pytest.mark.parametrize("email,password,expected_status", [
    ("user@example.com", "correct123", 200),
    ("wrong@email.com", "correct123", 401),
    ("user@example.com", "wrongpass", 401),
])
def test_login_validation(client, email, password, expected_status):
    """Test login with various inputs"""
    response = client.post('/api/auth/login', json={
        'email': email,
        'password': password
    })

    assert response.status_code == expected_status
```
## Fixtures

### Available Fixtures

#### Application Fixtures

- `app`: Flask application instance with test configuration
- `client`: Test client for making HTTP requests
- `runner`: CLI runner for testing Flask CLI commands
- `db_session`: Database session for database operations

#### User Fixtures

- `admin_user`: Creates an admin user
- `regular_user`: Creates a regular user
- `inactive_user`: Creates an inactive user

#### Product Fixtures

- `product`: Creates a single product
- `products`: Creates 5 products

#### Authentication Fixtures

- `auth_headers`: JWT headers for regular user
- `admin_headers`: JWT headers for admin user

#### Order Fixtures

- `order`: Creates an order with items
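The `auth_headers` and `admin_headers` fixtures presumably wrap a JWT in a standard `Authorization` header. A minimal sketch of that header-building step, where `bearer_headers` is a hypothetical helper name and the token is just an opaque string (a real fixture would obtain it from the app's token issuer):

```python
def bearer_headers(token):
    """Build the header dict a JWT-protected API endpoint expects."""
    return {
        'Authorization': f'Bearer {token}',
        'Content-Type': 'application/json',
    }

# A fixture would call this with a freshly issued token:
headers = bearer_headers('dummy-token')
```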
### Creating Custom Fixtures

```python
# In conftest.py or a test file
@pytest.fixture
def custom_product(db_session):
    """Create a custom product"""
    product = Product(
        name='Custom Product',
        price=99.99,
        stock=50
    )
    db_session.add(product)
    db_session.commit()
    return product


# Use in tests
def test_custom_fixture(custom_product):
    assert custom_product.name == 'Custom Product'
```
## Coverage

### Coverage Configuration

Coverage is configured in `.coveragerc`:

```ini
[run]
source = app
omit =
    */tests/*
    */migrations/*
    */__pycache__/*

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise NotImplementedError
```
### Coverage Thresholds

The CI/CD pipeline enforces a minimum of 80% code coverage.
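To enforce the same 80% floor locally, the threshold can be baked into `pytest.ini` via pytest-cov's `--cov-fail-under` option (a sketch; your config may already define `addopts`, in which case the flags are appended):

```ini
[pytest]
addopts = --cov=app --cov-fail-under=80
```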
### Generate Coverage Report

```bash
# Terminal report
pytest --cov=app --cov-report=term

# HTML report
pytest --cov=app --cov-report=html
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```
### Coverage Report Example

```
Name                 Stmts   Miss  Cover   Missing
--------------------------------------------------
app/__init__.py         10      2    80%   15-16
app/models/user.py      45      5    89%   23, 45
app/routes/api.py      120     20    83%   78-85
--------------------------------------------------
TOTAL                  175     27    85%
```
## CI/CD Pipeline

### GitHub Actions Workflow

The backend has automated testing via GitHub Actions.

File: `.github/workflows/backend-tests.yml`

### Pipeline Stages
- Test Matrix: Runs tests on Python 3.10, 3.11, and 3.12
- Services: Sets up PostgreSQL and Redis
- Linting: Runs flake8 for code quality
- Testing: Executes pytest with coverage
- Coverage Upload: Sends coverage to Codecov
- Security Scan: Runs bandit and safety
### Triggering the Pipeline

The pipeline runs automatically on:

- Push to `main` or `develop` branches
- Pull requests to `main` or `develop` branches
- Changes to `backend/**` or workflow files

### Viewing Results
- Go to the Actions tab in your GitHub repository
- Click on the latest workflow run
- View test results, coverage, and artifacts
## Pre-commit Hooks

### Setup Pre-commit Hooks

```bash
# Install pre-commit
pip install pre-commit

# Install hooks
pre-commit install

# Run hooks manually
pre-commit run --all-files
```
### Available Hooks

The `.pre-commit-config.yaml` includes:
- Black: Code formatting
- isort: Import sorting
- flake8: Linting
- pytest: Run tests before committing
- mypy: Type checking
- bandit: Security checks
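A minimal `.pre-commit-config.yaml` covering the first three hooks might look like the following sketch; the repository URLs are the hooks' official homes, but the `rev` pins are illustrative rather than taken from this project:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0        # illustrative pin
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2        # illustrative pin
    hooks:
      - id: isort
  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0         # illustrative pin
    hooks:
      - id: flake8
```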
### Hook Behavior

Hooks run automatically on `git commit` and can be skipped with:

```bash
git commit --no-verify
```
## Best Practices

### ✅ DO

- **Use descriptive test names**

  ```python
  def test_user_creation_with_valid_data():  # Good
  def test_user():                           # Bad
  ```

- **Test both success and failure cases**

  ```python
  def test_login_success(): ...
  def test_login_invalid_credentials(): ...
  def test_login_missing_fields(): ...
  ```

- **Use fixtures for common setup**

  ```python
  def test_something(client, admin_user, products): ...
  ```

- **Mock external services**

  ```python
  def test_external_api(mocker):
      mock_response = {'data': 'mocked'}
      mocker.patch('requests.get', return_value=mock_response)
  ```

- **Keep tests independent**

  - Each test should be able to run alone
  - Don't rely on test execution order

- **Use markers appropriately**

  ```python
  @pytest.mark.slow
  def test_expensive_operation(): ...
  ```
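Since `mocker` from pytest-mock is a thin wrapper around the standard library's `unittest.mock`, the mocking pattern can be shown self-contained with a plain `Mock`. Here `fetch_exchange_rate` and `price_in` are invented stand-ins for an external call and the code under test, not names from this codebase:

```python
from unittest import mock


def fetch_exchange_rate(currency):
    """Stand-in for a real HTTP call; raising makes unmocked use obvious."""
    raise RuntimeError('network call attempted in a test')


def price_in(currency, amount_usd, fetch=fetch_exchange_rate):
    """Code under test: converts a USD amount via an external rate lookup."""
    return round(amount_usd * fetch(currency), 2)


# In a test, swap the external call for a canned value:
fake_fetch = mock.Mock(return_value=0.92)
result = price_in('EUR', 100, fetch=fake_fetch)
fake_fetch.assert_called_once_with('EUR')
assert result == 92.0
```

Injecting the collaborator as a parameter (or patching it with `mocker.patch`) keeps the test fast and deterministic, and the failing default makes any accidental real network call fail loudly.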
### ❌ DON'T

- **Don't share state between tests**

  ```python
  # Bad - shared state
  global_user = User(...)

  # Good - use fixtures
  @pytest.fixture
  def user():
      return User(...)
  ```

- **Don't hardcode sensitive data**

  ```python
  # Bad
  password = 'real_password_123'

  # Good
  password = fake.password()
  ```

- **Don't use the production database**

  - Always use a test database (SQLite)
  - Fixtures automatically create isolated databases

- **Don't skip error cases**

  ```python
  # Bad - only tests success
  def test_create_product(): ...

  # Good - tests both
  def test_create_product_success(): ...
  def test_create_product_validation_error(): ...
  ```

- **Don't ignore slow tests in CI**

  - Mark slow tests with `@pytest.mark.slow`
  - Run them separately if needed
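The isolated-database point usually comes down to a dedicated test configuration that the `app` fixture loads instead of production settings. A typical Flask/SQLAlchemy sketch; the config keys are the standard Flask and Flask-SQLAlchemy ones, while the class name is an assumption:

```python
class TestConfig:
    """Settings the test `app` fixture would load in place of production config."""
    TESTING = True                                  # enables Flask test mode
    SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:'  # throwaway in-memory database
    WTF_CSRF_ENABLED = False                        # simplify form posts in tests
```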
## Test Coverage Requirements
| Module | Line Coverage | Branch Coverage |
|---|---|---|
| routes.py | >90% | >85% |
| models.py | >85% | >80% |
| schemas.py | >90% | >85% |
| services/ | >80% | >75% |
| utils/ | >70% | >65% |
## Troubleshooting

### Tests Fail with Database Errors

```bash
# Clean up test databases
rm -f backend/*.db
```

### Coverage Not Showing

```bash
# Install coverage separately
pip install coverage

# Clean previous coverage data
coverage erase

# Run tests again
pytest --cov=app
```

### Import Errors

```bash
# Ensure you're in the backend directory
cd backend

# Install in development mode
pip install -e .
```

### Slow Tests

```bash
# Run only specific tests
pytest tests/test_routes.py::TestProductRoutes::test_get_products

# Run in parallel
pytest -n auto
```