# Testing Guide

## Overview

This document provides a comprehensive guide to testing the Flask application using pytest. The testing infrastructure includes unit tests, integration tests, CI/CD pipelines, and pre-commit hooks.

## Table of Contents

1. [Installation](#installation)
2. [Running Tests](#running-tests)
3. [Test Structure](#test-structure)
4. [Writing Tests](#writing-tests)
5. [Fixtures](#fixtures)
6. [Coverage](#coverage)
7. [CI/CD Pipeline](#cicd-pipeline)
8. [Pre-commit Hooks](#pre-commit-hooks)
9. [Best Practices](#best-practices)
## Installation

### Install Test Dependencies

```bash
cd backend
pip install -r requirements/base.txt
```

The base requirements include:

- `pytest==7.4.3` - Testing framework
- `pytest-flask==1.3.0` - Flask integration
- `pytest-cov==4.1.0` - Coverage reporting
- `pytest-mock==3.12.0` - Mocking utilities
- `factory-boy==3.3.0` - Test data factories
- `faker==20.1.0` - Fake data generation
## Running Tests

### Run All Tests

```bash
cd backend
pytest
```

### Run with Verbose Output

```bash
pytest -v
```

### Run with Coverage Report

```bash
pytest --cov=app --cov-report=html --cov-report=term
```

### Run Specific Test Files

```bash
# Run all model tests
pytest tests/test_models.py

# Run all route tests
pytest tests/test_routes.py

# Run all schema tests
pytest tests/test_schemas.py
```

### Run by Test Name

```bash
pytest -k "test_user_creation"
pytest -k "test_login"
```

### Run by Markers

```bash
# Run only unit tests
pytest -m unit

# Run only integration tests
pytest -m integration

# Run only authentication tests
pytest -m auth

# Run only product tests
pytest -m product

# Run only order tests
pytest -m order
```
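
Custom markers like these should be registered, or recent pytest versions emit `PytestUnknownMarkWarning` (and fail under `--strict-markers`). A sketch of the relevant section of `pytest.ini`; the project's actual file may differ:

```ini
[pytest]
testpaths = tests
markers =
    unit: fast, isolated unit tests
    integration: tests spanning multiple components
    auth: authentication tests
    product: product tests
    order: order tests
    slow: long-running tests
```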

### Run Tests in Parallel (faster)

Install pytest-xdist:

```bash
pip install pytest-xdist
pytest -n auto  # Use all available CPUs
```
## Test Structure

```
backend/
├── tests/
│   ├── __init__.py
│   ├── conftest.py        # Global fixtures and configuration
│   ├── test_models.py     # Model tests
│   ├── test_routes.py     # Route/API tests
│   └── test_schemas.py    # Pydantic schema tests
├── pytest.ini             # Pytest configuration
├── .coveragerc            # Coverage configuration
└── app/
    ├── __init__.py
    ├── models/            # Database models
    ├── routes/            # API routes
    ├── schemas/           # Pydantic schemas
    └── ...
```
## Writing Tests

### Test File Structure

```python
import pytest

from app import db
from app.models import User


class TestUserModel:
    """Test User model"""

    @pytest.mark.unit
    def test_user_creation(self, db_session):
        """Test creating a user"""
        user = User(
            email='test@example.com',
            username='testuser'
        )
        user.set_password('password123')
        db_session.add(user)
        db_session.commit()

        assert user.id is not None
        assert user.email == 'test@example.com'
```
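
The `set_password` call above stores a hash, never the raw password, which is why the test can only assert on other fields. The real model most likely delegates to `werkzeug.security`; purely to illustrate the hash-and-verify round trip, here is a stdlib-only sketch (the salt size, iteration count, and function names are assumptions, not the app's actual code):

```python
import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes = b'') -> bytes:
    """Derive a salted PBKDF2 hash; the salt is stored alongside the digest."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt + digest


def check_password(stored: bytes, candidate: str) -> bool:
    """Re-derive with the stored salt and compare digests in constant time."""
    salt, digest = stored[:16], stored[16:]
    expected = hashlib.pbkdf2_hmac('sha256', candidate.encode(), salt, 100_000)
    return hmac.compare_digest(digest, expected)
```

A matching `check_password` assertion in a test would pass for the original password and fail for any other input.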

### Test API Routes

```python
def test_get_products(client, products):
    """Test getting all products"""
    response = client.get('/api/products')

    assert response.status_code == 200
    data = response.get_json()
    assert len(data) == 5


def test_create_product(client, admin_headers):
    """Test creating a product"""
    response = client.post('/api/products',
                           headers=admin_headers,
                           json={
                               'name': 'New Product',
                               'price': 29.99
                           })

    assert response.status_code == 201
    data = response.get_json()
    assert data['name'] == 'New Product'
```

### Parameterized Tests

```python
@pytest.mark.parametrize("email,password,expected_status", [
    ("user@example.com", "correct123", 200),
    ("wrong@email.com", "correct123", 401),
    ("user@example.com", "wrongpass", 401),
])
def test_login_validation(client, email, password, expected_status):
    """Test login with various inputs"""
    response = client.post('/api/auth/login', json={
        'email': email,
        'password': password
    })
    assert response.status_code == expected_status
```
## Fixtures

### Available Fixtures

#### Application Fixtures

- **`app`**: Flask application instance with test configuration
- **`client`**: Test client for making HTTP requests
- **`runner`**: CLI runner for testing Flask CLI commands
- **`db_session`**: Database session for database operations

#### User Fixtures

- **`admin_user`**: Creates an admin user
- **`regular_user`**: Creates a regular user
- **`inactive_user`**: Creates an inactive user

#### Product Fixtures

- **`product`**: Creates a single product
- **`products`**: Creates 5 products

#### Authentication Fixtures

- **`auth_headers`**: JWT headers for regular user
- **`admin_headers`**: JWT headers for admin user

#### Order Fixtures

- **`order`**: Creates an order with items

### Creating Custom Fixtures

```python
# In conftest.py or a test file
@pytest.fixture
def custom_product(db_session):
    """Create a custom product"""
    product = Product(
        name='Custom Product',
        price=99.99,
        stock=50
    )
    db_session.add(product)
    db_session.commit()
    return product


# Use in tests
def test_custom_fixture(custom_product):
    assert custom_product.name == 'Custom Product'
```
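
Fixtures that need teardown (closing a session, dropping tables) typically use pytest's `yield` form, which has the same setup/yield/cleanup shape as a context manager. A stdlib-only sketch of that shape, where the `session` list is a stand-in for a real database session, not the app's actual object:

```python
import contextlib


@contextlib.contextmanager
def db_session_sketch():
    session = []            # setup: stand-in for opening a real DB session
    try:
        yield session       # the test body runs here
    finally:
        session.clear()     # teardown: runs even if the test raised
```

In a real fixture, the body before `yield` creates the session and the code after it rolls back, so every test starts from a clean state.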

## Coverage

### Coverage Configuration

Coverage is configured in `.coveragerc`:

```ini
[run]
source = app
omit =
    */tests/*
    */migrations/*
    */__pycache__/*

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise NotImplementedError
```

### Coverage Thresholds

The CI/CD pipeline enforces 80% minimum code coverage.

### Generate Coverage Report

```bash
# Terminal report
pytest --cov=app --cov-report=term

# HTML report
pytest --cov=app --cov-report=html
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```

### Coverage Report Example

```
Name                 Stmts   Miss  Cover   Missing
--------------------------------------------------
app/__init__.py         10      2    80%   15-16
app/models/user.py      45      5    89%   23, 45
app/routes/api.py      120     20    83%   78-85
--------------------------------------------------
TOTAL                  175     27    85%
```
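
The `Cover` column is simply executed statements over total statements. A quick sketch of the arithmetic behind the example report above:

```python
def cover_pct(stmts: int, miss: int) -> int:
    """Percentage of statements executed, rounded as in the report."""
    return round(100 * (stmts - miss) / stmts)


# Reproduce the example report's figures
assert cover_pct(10, 2) == 80    # app/__init__.py
assert cover_pct(120, 20) == 83  # app/routes/api.py
assert cover_pct(175, 27) == 85  # TOTAL
```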

## CI/CD Pipeline

### GitHub Actions Workflow

The backend has automated testing via GitHub Actions:

**File**: `.github/workflows/backend-tests.yml`

### Pipeline Stages

1. **Test Matrix**: Runs tests on Python 3.10, 3.11, and 3.12
2. **Services**: Sets up PostgreSQL and Redis
3. **Linting**: Runs flake8 for code quality
4. **Testing**: Executes pytest with coverage
5. **Coverage Upload**: Sends coverage to Codecov
6. **Security Scan**: Runs bandit and safety
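
Condensed, the stages above map onto a workflow shaped roughly like this. This is an illustrative sketch, not the repository's actual file; action versions, service image tags, and step details are assumptions:

```yaml
name: Backend Tests

on:
  push:
    branches: [main, develop]
    paths: ['backend/**']
  pull_request:
    branches: [main, develop]
    paths: ['backend/**']

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.10', '3.11', '3.12']
    services:
      postgres:
        image: postgres:16
      redis:
        image: redis:7
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements/base.txt
        working-directory: backend
      - run: flake8 .
        working-directory: backend
      - run: pytest --cov=app --cov-fail-under=80
        working-directory: backend
```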

### Triggering the Pipeline

The pipeline runs automatically on:

- Push to `main` or `develop` branches
- Pull requests to `main` or `develop` branches
- Changes to `backend/**` or workflow files

### Viewing Results

1. Go to the Actions tab in your GitHub repository
2. Click on the latest workflow run
3. View test results, coverage, and artifacts
## Pre-commit Hooks

### Setup Pre-commit Hooks

```bash
# Install pre-commit
pip install pre-commit

# Install hooks
pre-commit install

# Run hooks manually
pre-commit run --all-files
```

### Available Hooks

The `.pre-commit-config.yaml` includes:

1. **Black**: Code formatting
2. **isort**: Import sorting
3. **flake8**: Linting
4. **pytest**: Run tests before committing
5. **mypy**: Type checking
6. **bandit**: Security checks
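
For orientation, a `.pre-commit-config.yaml` covering these hooks might look roughly like this. The repository URLs are the hooks' canonical homes, but the pinned `rev` values and the local pytest hook are illustrative assumptions, not the project's actual configuration:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/pycqa/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0
    hooks:
      - id: mypy
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.8
    hooks:
      - id: bandit
  - repo: local
    hooks:
      - id: pytest
        name: pytest
        entry: pytest
        language: system
        pass_filenames: false
```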

### Hook Behavior

Hooks run automatically on every `git commit`. To bypass them in an emergency, use `git commit --no-verify`.
## Best Practices

### ✅ DO

1. **Use descriptive test names**

   ```python
   def test_user_creation_with_valid_data():  # Good
   def test_user():                           # Bad
   ```

2. **Test both success and failure cases**

   ```python
   def test_login_success(): ...
   def test_login_invalid_credentials(): ...
   def test_login_missing_fields(): ...
   ```

3. **Use fixtures for common setup**

   ```python
   def test_something(client, admin_user, products): ...
   ```

4. **Mock external services**

   ```python
   def test_external_api(mocker):
       mock_response = mocker.Mock()
       mock_response.json.return_value = {'data': 'mocked'}
       mocker.patch('requests.get', return_value=mock_response)
   ```

5. **Keep tests independent**
   - Each test should be able to run alone
   - Don't rely on test execution order

6. **Use markers appropriately**

   ```python
   @pytest.mark.slow
   def test_expensive_operation(): ...
   ```

### ❌ DON'T

1. **Don't share state between tests**

   ```python
   # Bad - shared state
   global_user = User(...)

   # Good - use fixtures
   @pytest.fixture
   def user():
       return User(...)
   ```

2. **Don't hardcode sensitive data**

   ```python
   # Bad
   password = 'real_password_123'

   # Good
   password = fake.password()
   ```

3. **Don't use the production database**
   - Always use the test database (SQLite)
   - Fixtures automatically create isolated databases

4. **Don't skip error cases**

   ```python
   # Bad - only tests success
   def test_create_product(): ...

   # Good - tests both
   def test_create_product_success(): ...
   def test_create_product_validation_error(): ...
   ```

5. **Don't ignore slow tests in CI**
   - Mark slow tests with `@pytest.mark.slow`
   - Run them separately if needed
## Test Coverage Requirements

| Module     | Line Coverage | Branch Coverage |
|------------|---------------|-----------------|
| routes.py  | >90%          | >85%            |
| models.py  | >85%          | >80%            |
| schemas.py | >90%          | >85%            |
| services/  | >80%          | >75%            |
| utils/     | >70%          | >65%            |
## Troubleshooting

### Tests Fail with Database Errors

```bash
# Clean up test databases
rm -f backend/*.db
```

### Coverage Not Showing

```bash
# Install coverage separately
pip install coverage

# Clean previous coverage data
coverage erase

# Run tests again
pytest --cov=app
```

### Import Errors

```bash
# Ensure you're in the backend directory
cd backend

# Install in development mode
pip install -e .
```

### Slow Tests

```bash
# Run only specific tests
pytest tests/test_routes.py::TestProductRoutes::test_get_products

# Run in parallel
pytest -n auto
```
## Additional Resources

- [Pytest Documentation](https://docs.pytest.org/)
- [Pytest-Flask Documentation](https://pytest-flask.readthedocs.io/)
- [Pydantic Documentation](https://docs.pydantic.dev/)
- [Flask Testing Documentation](https://flask.palletsprojects.com/en/latest/testing/)