Testing GitHub APIs with Pytest - Practice

Introduction

In the previous post, we covered the theoretical foundations of Pytest.

Now we will apply everything in practice using a real example that consumes the GitHub public API.

This guide is intentionally very detailed. Every important line will be explained so that even beginners can follow.


Project Structure

TEXT
github_api_tests/
├── app.py                     # Flask web application
├── services/
│   └── github_service.py      # GitHub API service layer
├── tests/
│   ├── conftest.py            # Shared fixtures
│   ├── test_users_unit.py     # Unit tests with mocks
│   ├── test_users_integration.py  # Integration tests (real API)
│   ├── test_parametrize.py    # Parameterized tests
│   ├── test_skip.py           # Skip marker examples
│   ├── test_fail.py           # xfail marker examples
│   ├── test_errors.py         # Error handling tests
│   ├── test_functional.py     # Functional tests (Flask routes)
│   ├── test_performace.py     # Performance benchmarks
│   └── test_regression.py     # Regression tests
├── pytest.ini                 # Pytest configuration
└── requirements.txt           # Project dependencies

We organize the project following best practices: production code (app.py and the services/ layer) is kept separate from the tests, and each test category lives in its own file.

This architecture provides clear separation of concerns, fast selective test runs (for example, unit tests only), and a layout that new contributors can navigate at a glance.


Project Setup

Before running the tests, we need to prepare our local development environment properly.


Creating a Virtual Environment

A virtual environment allows us to isolate project dependencies from the global Python installation.

This prevents version conflicts between different projects and ensures reproducibility.

BASH
python -m venv .venv

What this command does:

python → Runs the Python interpreter.

-m venv → Executes the built-in venv module.

.venv → Creates a new virtual environment folder named .venv.

After running this command, a directory called .venv/ will be created containing a copy (or symlink) of the Python interpreter, a site-packages/ folder for installed libraries, and the activation scripts.

This ensures that any package installed inside this environment will not affect other projects.


Activating the Virtual Environment

Once created, the virtual environment must be activated:

BASH
source .venv/bin/activate

What this does:

source → Executes the script in the current shell.

.venv/bin/activate → Activates the virtual environment.

After activation:

Your terminal prompt usually changes (e.g., (.venv) appears).

python and pip now point to the virtual environment versions.

All installed packages will be isolated inside .venv.
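As a quick sanity check (not part of the project), you can ask Python itself whether it is running inside the virtual environment:

```python
import sys

# inside an activated virtual environment, sys.prefix points into .venv,
# while sys.base_prefix still points at the system installation
print("prefix:     ", sys.prefix)
print("base prefix:", sys.base_prefix)
print("virtualenv active:", sys.prefix != sys.base_prefix)
```

If the last line prints False, the environment was not activated in the current shell.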


Dependencies

All project dependencies are listed in a requirements.txt file:

PLAINTEXT
flask
requests
pytest
pytest-mock
pytest-cov
pytest-benchmark

What each dependency does:

flask → The web framework that serves the /users route.

requests → The HTTP client the service layer uses to call the GitHub API.

pytest → The test framework itself.

pytest-mock → Provides the mocker fixture for patching.

pytest-cov → Adds coverage measurement (the --cov options).

pytest-benchmark → Adds the benchmark fixture for performance tests.


Installing Dependencies

To install all required packages:

BASH
pip install -r requirements.txt

What this command does: pip reads requirements.txt line by line and installs each listed package (plus its transitive dependencies) into the active virtual environment.


Pytest Configuration

The project includes a pytest.ini file to configure pytest behavior globally.

pytest.ini

INI
[pytest]
addopts = -ra -q --cov=app --cov-report=term-missing
markers =
    unit: marks tests as unit tests
    integration: marks tests as integration tests
    slow: marks tests as slow tests
    regression: marks tests as regression tests

Line-by-line explanation

[pytest]

Declares that this file contains configuration for pytest.

addopts = -ra -q --cov=app --cov-report=term-missing

Defines default command-line options that will always be applied when running pytest.

Breaking it down:

-ra → Shows a short summary of all non-passing outcomes (failed, skipped, xfailed, ...) at the end of the run.

-q → Quiet mode; less verbose output.

--cov=app → Measures code coverage for the app module.

--cov-report=term-missing → Prints the coverage table in the terminal, including the line numbers that were never executed.

This ensures that every test run automatically includes coverage analysis.

markers =

Registers custom test markers to avoid warnings like:

PLAINTEXT
PytestUnknownMarkWarning: Unknown pytest.mark.integration

Each marker must be declared explicitly.


unit

INI
unit: marks tests as unit tests

Marks tests as unit tests.

These tests:

Run in complete isolation, with all external calls mocked.

Are fast and deterministic.

Exercise business logic only.

You can run them using:

BASH
pytest -m unit

integration

INI
integration: marks tests as integration tests

Marks integration tests.

These tests:

Call the real GitHub API over the network.

Require an internet connection.

Can be slow and may fail due to rate limiting.

Run only integration tests:

BASH
pytest -m integration

slow

INI
slow: marks tests as slow tests

Marks slower tests.

You can exclude them:

BASH
pytest -m "not slow"

Production Code

Flask Application (app.py)

PYTHON
# app.py

from flask import Flask, jsonify
from services.github_service import fetch_users

app = Flask(__name__)

@app.route('/users')
def github_users():
    try:
        users = fetch_users()
        return jsonify(users)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

if __name__ == '__main__':
    app.run(debug=True)

Service Layer (services/github_service.py)

PYTHON
# services/github_service.py

import requests

GITHUB_API_URL = 'https://api.github.com/users'

def fetch_users(per_page=10):
    """
    Fetches users from the public GitHub API.
    """
    response = requests.get(
        GITHUB_API_URL, params={'per_page': per_page}, timeout=5
    )
    response.raise_for_status()
    return response.json()

Production Code (Line-by-Line Explanation)

Flask Application (app.py)

PYTHON
# app.py

from flask import Flask, jsonify
from services.github_service import fetch_users

We import:

Flask and jsonify from flask, to create the app and serialize responses.

fetch_users from our service layer, which performs the actual GitHub call.


PYTHON
app = Flask(__name__)

Creates the Flask application instance.


PYTHON
@app.route('/users')
def github_users():
    try:
        users = fetch_users()
        return jsonify(users)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

This is a Flask route with error handling:

@app.route('/users') → Registers the function for requests to /users.

The try block calls the service and returns its result as JSON.

The except block catches any failure (network error, HTTP error, ...) and returns a JSON error payload with status 500, so the route never crashes.


PYTHON
if __name__ == '__main__':
    app.run(debug=True)

Allows running the app directly with python app.py.

Service Layer (services/github_service.py)

PYTHON
# services/github_service.py

import requests

GITHUB_API_URL = 'https://api.github.com/users'

PYTHON
def fetch_users(per_page=10):
    """
    Fetches users from the public GitHub API.
    """
    response = requests.get(
        GITHUB_API_URL, params={'per_page': per_page}, timeout=5
    )
    response.raise_for_status()
    return response.json()

The function sends a GET request with a per_page query parameter and a 5-second timeout, raises on HTTP error statuses, and returns the parsed JSON.

Fixtures (tests/conftest.py)

PYTHON
# tests/conftest.py

import pytest
from app import app

@pytest.fixture
def client():
    """
    Fixture that creates an HTTP test client for Flask.
    """
    with app.test_client() as client:
        yield client

@pytest.fixture
def sample_username():
    return 'octocat'

These fixtures provide a reusable test client and reusable test data.

Fixtures (Explanation)

PYTHON
# tests/conftest.py

import pytest
from app import app

We import pytest and the Flask app.


Flask Test Client Fixture

PYTHON
@pytest.fixture
def client():
    """
    Fixture that creates HTTP test client for Flask.
    """
    with app.test_client() as client:
        yield client

Line-by-line breakdown:

@pytest.fixture → Registers client as a fixture that pytest can inject.

app.test_client() → Creates Flask's built-in test client, used inside a with block so it is cleaned up afterwards.

yield client → Hands the client to the test; any code after the yield would run as teardown.

This fixture allows testing Flask routes without starting a real server.


Sample Data Fixture

PYTHON
@pytest.fixture
def sample_username():
    return 'octocat'

Line-by-line:

@pytest.fixture → Registers the fixture.

return 'octocat' → Provides a fixed, well-known GitHub username as sample data.

When tests include these parameters:

PYTHON
def test_example(client, sample_username):

Pytest automatically injects both fixtures. This is dependency injection.
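To see the injection mechanism in isolation, here is a minimal, self-contained sketch (hypothetical fixtures, not from the project) in which one fixture even depends on another:

```python
import pytest

@pytest.fixture
def base_url():
    return "https://api.github.com"

@pytest.fixture
def users_url(base_url):
    # a fixture can request another fixture; pytest resolves the chain
    return f"{base_url}/users"

def test_users_url(users_url):
    # pytest built users_url, which in turn received base_url
    assert users_url == "https://api.github.com/users"
```

Running pytest on this file passes without the test ever constructing its own dependencies.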


Unit Test with Mock

PYTHON
# tests/test_users_unit.py

from services.github_service import fetch_users

def test_fetch_users_with_pytest_mock(mocker):
    """
    Same unit test, using pytest-mock.
    """
    # Arrange
    fake_users = [{'login': 'pytest-mock'}]

    mocker.patch(
        'services.github_service.requests.get',
        return_value=mocker.Mock(
            json=lambda: fake_users, raise_for_status=lambda: None
        ),
    )

    # Act
    users = fetch_users()

    # Assert
    assert users == fake_users

Unit Test (Line-by-Line Explanation)

PYTHON
# tests/test_users_unit.py

from services.github_service import fetch_users

We import the service function to test it in isolation.


PYTHON
def test_fetch_users_with_pytest_mock(mocker):
The mocker fixture is injected automatically by the pytest-mock plugin; it wraps unittest.mock and undoes all patches when the test finishes.

Arrange Section

PYTHON
fake_users = [{'login': 'pytest-mock'}]

Creates expected test data.


PYTHON
mocker.patch(
    'services.github_service.requests.get',
    return_value=mocker.Mock(
        json=lambda: fake_users, 
        raise_for_status=lambda: None
    ),
)

This is the mocking setup:

mocker.patch(...) → Replaces requests.get as seen from inside services.github_service.

return_value=mocker.Mock(...) → The fake response object returned by the patched call.

json=lambda: fake_users → Calling .json() on the fake response returns our test data.

raise_for_status=lambda: None → Calling .raise_for_status() does nothing, simulating a successful HTTP status.

Key point: No real HTTP call is made!


Act Section

PYTHON
users = fetch_users()

Calls the service function, which uses the mocked requests.get.


Assert Section

PYTHON
assert users == fake_users

Verifies the service returns exactly what our mock provided.
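Beyond checking the return value, you can also assert how the dependency was called. The sketch below is self-contained: it inlines a copy of fetch_users and uses unittest.mock directly (mocker.patch works the same way inside a pytest test):

```python
import requests
from unittest.mock import Mock, patch

GITHUB_API_URL = 'https://api.github.com/users'

def fetch_users(per_page=10):
    # inline stand-in for services.github_service.fetch_users,
    # copied here so the example runs on its own
    response = requests.get(GITHUB_API_URL, params={'per_page': per_page}, timeout=5)
    response.raise_for_status()
    return response.json()

def test_fetch_users_forwards_params():
    fake_users = [{'login': 'octocat'}]
    with patch.object(requests, 'get') as mock_get:
        mock_get.return_value = Mock(
            json=lambda: fake_users, raise_for_status=lambda: None
        )
        assert fetch_users(per_page=3) == fake_users
        # verify the pagination parameter was forwarded to the API call
        mock_get.assert_called_once_with(
            GITHUB_API_URL, params={'per_page': 3}, timeout=5
        )

test_fetch_users_forwards_params()
```

assert_called_once_with catches a whole class of bugs that checking the return value alone would miss, such as dropping the per_page parameter.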


Integration Test (Real API Call)

PYTHON
# tests/test_users_integration.py

import pytest
from services.github_service import fetch_users

@pytest.mark.integration
@pytest.mark.skip(reason="GitHub API rate limit exceeded - skip for now")
def test_fetch_users_integration():
    """
    Calls REAL GitHub API
    """
    # Act
    users = fetch_users(5)

    # Assert
    assert isinstance(users, list)
    assert len(users) > 0
    assert 'login' in users[0]

This test makes a real HTTP call to GitHub.

Integration Test (Line-by-Line Explanation)

PYTHON
@pytest.mark.integration
def test_fetch_users_integration():

PYTHON
users = fetch_users(5)

This performs a real request to GitHub API, asking for 5 users.


PYTHON
assert isinstance(users, list)
assert len(users) > 0
assert 'login' in users[0]

Multiple assertions validate:

The result is a list.

The list is not empty.

The first user object contains a login field.

Important: This test requires internet connection and can be slow.
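A common way to soften that requirement (an illustration, not part of the project) is a skipif marker that probes for connectivity before the test runs:

```python
import socket

import pytest

def has_network(host="api.github.com", port=443, timeout=1.0):
    # try to open a TCP connection; any OSError means "treat as offline"
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

requires_network = pytest.mark.skipif(
    not has_network(), reason="no network connection available"
)

@requires_network
def test_fetch_users_live():
    # placeholder body; the real integration test would call fetch_users(5)
    assert True
```

Offline runs then report the test as skipped with a clear reason instead of failing.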


Parametrize

PYTHON
# tests/test_parametrize.py

import pytest
from services.github_service import fetch_users

@pytest.mark.parametrize('qty', [1, 3, 5])
def test_fetch_users_parametrize(qty, mocker):
    """
    Same test with multiple values.
    Using mocked requests to avoid rate limiting.
    """
    # Arrange - mock response to avoid rate limiting
    fake_users = [{'login': f'user{i}'} for i in range(qty)]
    
    mocker.patch(
        'services.github_service.requests.get',
        return_value=mocker.Mock(
            json=lambda: fake_users, 
            raise_for_status=lambda: None
        ),
    )
    
    # Act
    users = fetch_users(qty)
    
    # Assert
    assert len(users) == qty  # Must match the mocked data exactly

Pytest runs this test three times with different quantities.

Parametrize (Detailed)

PYTHON
@pytest.mark.parametrize('qty', [1, 3, 5])

Pytest will:

Run the test once for each value in the list (1, 3, 5).

Inject the current value as the qty argument.

Report each run as a separate test (e.g., test_fetch_users_parametrize[3]).


PYTHON
def test_fetch_users_parametrize(qty, mocker):
    fake_users = [{'login': f'user{i}'} for i in range(qty)]
    mocker.patch('services.github_service.requests.get', ...)
    users = fetch_users(qty)
    assert len(users) == qty

The parameters are automatically injected into the test function.

This approach:

Eliminates duplicated test code.

Makes it trivial to add new cases.

Reports each value as its own pass or fail.

Pro tip: You can also combine multiple parameters:

PYTHON
@pytest.mark.parametrize('qty,expected', [(1, 1), (5, 5), (100, 100)])
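A fuller, runnable sketch of multi-parameter parametrization (hypothetical values, with ids added for readable report names):

```python
import pytest

@pytest.mark.parametrize(
    'per_page, expected_len',
    [(1, 1), (5, 5), (100, 100)],
    ids=['one', 'five', 'hundred'],  # shown in the test report
)
def test_page_lengths(per_page, expected_len):
    # stand-in for a mocked fetch_users call
    users = [{'login': f'user{i}'} for i in range(per_page)]
    assert len(users) == expected_len
```

Each tuple becomes one test case, and both values are injected as arguments.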

Testing Errors

PYTHON
# tests/test_errors.py

import pytest
import requests
from services.github_service import fetch_users

def test_timeout_error(mocker):
    """
    Tests how service handles timeouts.
    """
    # Arrange
    mocker.patch(
        'services.github_service.requests.get',
        side_effect=requests.Timeout("Request timed out")
    )

    # Act & Assert
    with pytest.raises(requests.Timeout):
        fetch_users()

def test_http_error_handling(mocker):
    """
    Tests HTTP error handling.
    """
    # Arrange
    mocker.patch(
        'services.github_service.requests.get',
        side_effect=requests.HTTPError("404 Not Found")
    )

    # Act & Assert
    with pytest.raises(requests.HTTPError):
        fetch_users()

This ensures errors are handled correctly.

Testing Errors (Line-by-Line Explanation)

Timeout Error Test

PYTHON
mocker.patch(
    'services.github_service.requests.get',
    side_effect=requests.Timeout("Request timed out")
)
side_effect makes the patched requests.get raise requests.Timeout instead of returning a response, simulating a request that timed out.

PYTHON
with pytest.raises(requests.Timeout):
    fetch_users()
pytest.raises asserts that fetch_users lets the Timeout propagate; the test fails if no Timeout is raised.

HTTP Error Test

PYTHON
side_effect=requests.HTTPError("404 Not Found")

Simulates HTTP errors like 404, 500, etc.
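In the real service the HTTPError actually originates from response.raise_for_status(), so a slightly more faithful simulation (a sketch using unittest.mock) builds a response-like object whose raise_for_status fails:

```python
import requests
from unittest.mock import Mock

def make_error_response(status_code):
    # minimal stand-in for requests.Response with a failing status
    response = Mock()
    response.status_code = status_code
    response.raise_for_status.side_effect = requests.HTTPError(
        f"{status_code} Error"
    )
    return response

resp = make_error_response(404)
try:
    resp.raise_for_status()
except requests.HTTPError as exc:
    print("caught:", exc)  # prints: caught: 404 Error
```

Passing such an object as the mocked return_value of requests.get exercises the same code path that a real 404 would.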

Key benefits:

Failure paths are tested without a slow or flaky real network.

Specific error types (Timeout, HTTPError) can be triggered on demand.

Tests stay fast and deterministic.


Skip vs Xfail

PYTHON
# tests/test_skip.py

import pytest

@pytest.mark.skip(reason='Feature under development')
def test_future_feature():
    assert True
PYTHON
# tests/test_fail.py

import pytest
from services.github_service import fetch_users

@pytest.mark.xfail(reason='Known bug')
def test_known_bug():
    assert False

@pytest.mark.xfail(reason='Known bug when per_page > 100')
def test_expected_failure():
    fetch_users(200)

Skip vs Xfail (Detailed)

Skip Marker

PYTHON
@pytest.mark.skip(reason='Feature under development')
The test is not executed at all; pytest reports it as skipped, together with the reason.

Xfail Marker

PYTHON
@pytest.mark.xfail(reason='Known bug')
The test is executed, but a failure is reported as xfail (expected failure); if it unexpectedly passes, pytest reports it as xpass.

When to use each:

skip → The test should not run at all (feature not implemented, wrong platform, missing dependency).

xfail → The test should run, but a failure is expected (known bug, pending fix).

Conditional skipping:

PYTHON
@pytest.mark.skipif(sys.version_info < (3, 8), reason="Requires Python 3.8+")
def test_python_38_feature():
    pass

Functional Tests (Flask Routes)

PYTHON
# tests/test_functional.py

import json
from http import HTTPStatus

def test_users_page(client):
    """
    Simulates access to Flask route
    """
    # Act
    response = client.get('/users')

    # Assert - check for either successful response or rate limit error
    assert response.status_code in [HTTPStatus.OK, HTTPStatus.INTERNAL_SERVER_ERROR]
    
    # Check if response is valid JSON
    try:
        data = json.loads(response.data)
        if response.status_code == HTTPStatus.OK:
            assert isinstance(data, list)
        else:
            # Should be an error response
            assert isinstance(data, dict)
            assert 'error' in data
    except json.JSONDecodeError:
        assert False, "Response is not valid JSON"

Functional Tests (Line-by-Line Explanation)

PYTHON
def test_users_page(client):
The client fixture defined in conftest.py is injected automatically.

PYTHON
response = client.get('/users')
Sends a GET request to the /users route through the Flask test client, without starting a real server.

PYTHON
assert response.status_code in [HTTPStatus.OK, HTTPStatus.INTERNAL_SERVER_ERROR]
Accepts either 200 or 500, because the route may hit GitHub's rate limit and return its error payload.

PYTHON
data = json.loads(response.data)
if response.status_code == HTTPStatus.OK:
    assert isinstance(data, list)
else:
    assert isinstance(data, dict)
    assert 'error' in data
A successful response must be a JSON list of users; an error response must be a JSON object containing an error key.

Performance Tests

PYTHON
# tests/test_performace.py

def test_users_endpoint_performance(benchmark, client):
    """
    Measures response time of /users route.
    """
    benchmark(lambda: client.get('/users'))

Performance Testing (Line-by-Line Explanation)

PYTHON
def test_users_endpoint_performance(benchmark, client):
The benchmark fixture is provided by pytest-benchmark; client is our Flask test client fixture.

PYTHON
benchmark(lambda: client.get('/users'))
benchmark(...) calls the given function repeatedly, measures each run, and reports timing statistics.

Current benchmark results from the project:

PLAINTEXT
------------------------------------------------------- benchmark: 1 tests ------------------------------------------------------
Name (time in ms)                        Min       Max      Mean  StdDev    Median      IQR  Outliers     OPS  Rounds  Iterations
---------------------------------------------------------------------------------------------------------------------------------
test_users_endpoint_performance     246.0832  264.0037  254.0807  8.1997  252.4519  15.2874       1;0  3.9358       5           1
---------------------------------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean

Benchmark metrics explained:

Min / Max / Mean → Fastest, slowest, and average run time (here in milliseconds).

StdDev → How much run times vary around the mean.

Median / IQR → Robust central value and spread, less sensitive to outliers.

OPS → Operations per second, computed as 1 / Mean.

Rounds → Number of measured runs.

Run with: pytest --benchmark-only

Regression Tests

PYTHON
# tests/test_regression.py

from http import HTTPStatus
import pytest

@pytest.mark.regression
def test_users_endpoint_handles_gracefully(client):
    """
    Test that route handles errors gracefully.
    Even if external API fails, route should not crash.
    """
    # Act
    response = client.get('/users')

    # Should return proper JSON response even when GitHub API fails
    assert response.status_code in [HTTPStatus.OK, HTTPStatus.INTERNAL_SERVER_ERROR]
    
    # Response should always be valid JSON
    import json
    try:
        data = json.loads(response.data)
        assert isinstance(data, (list, dict))
    except json.JSONDecodeError:
        assert False, "Response is not valid JSON"
Click to expand and view more

Regression Testing (Line-by-Line Explanation)

PYTHON
@pytest.mark.regression
Registers this test under the regression marker declared in pytest.ini, so it can be selected or excluded by marker.

PYTHON
assert response.status_code in [HTTPStatus.OK, HTTPStatus.INTERNAL_SERVER_ERROR]
The endpoint must always answer with a controlled status code; a regression here would be any change that makes it crash or return something unexpected.

Run regression tests only: pytest -m regression

Coverage

BASH
pytest --cov=app --cov-report=term-missing

This:

Runs the full test suite.

Measures which lines of app.py were executed.

Prints a coverage table including the line numbers that were never run.

Current coverage report from the project:

PLAINTEXT
==================== tests coverage ====================
Name     Stmts   Miss  Cover   Missing
--------------------------------------
app.py      12      2    83%   10, 15
--------------------------------------
TOTAL       12      2    83%

Coverage metrics explained:

Stmts → Total executable statements in the file.

Miss → Statements never executed by any test.

Cover → Percentage of statements executed.

Missing → The exact line numbers that were not covered (here, lines 10 and 15).

⚠️ Important: High coverage does not mean high-quality tests. Focus on testing behavior, not just lines.
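A quick self-contained illustration of that warning: both tests below give apply_discount (a hypothetical function) 100% line coverage, but only the second would catch a bug in the formula:

```python
def apply_discount(price, percent):
    discount = price * percent / 100
    return price - discount

def test_covers_lines_but_checks_nothing():
    apply_discount(100, 10)   # executes every line...
    assert True               # ...yet verifies no behavior

def test_verifies_behavior():
    assert apply_discount(100, 10) == 90.0   # would fail if the formula broke

test_covers_lines_but_checks_nothing()
test_verifies_behavior()
```

Coverage tells you which code ran, not whether its behavior was actually checked.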


Running Different Test Types

BASH
# Run only unit tests
pytest -m unit

# Run only integration tests
pytest -m integration

# Run regression tests
pytest -m regression

# Skip slow tests
pytest -m "not slow"

# Run performance benchmarks
pytest --benchmark-only

# Run with coverage (configured in pytest.ini)
pytest

# Run with verbose output
pytest -v

# Run specific test file
pytest tests/test_users_unit.py

Best Practices Summary

  1. Test Organization

    • Separate production code from tests
    • Use descriptive test names
    • Group related tests in files
  2. AAA Pattern

    • Arrange: Prepare test data and mocks
    • Act: Execute the function under test
    • Assert: Verify the outcome
  3. Mocking Strategy

    • Mock external dependencies in unit tests
    • Use real calls in integration tests
    • Mock only what you need (specific methods)
  4. Fixtures Usage

    • Reuse common test setup
    • Keep fixtures focused and simple
    • Use yield for cleanup operations
  5. Test Categories

    • Unit: Fast, isolated, business logic
    • Integration: Real external calls
    • Functional: Full user scenarios
    • Performance: Benchmark critical paths
    • Regression: Prevent bug recurrence
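The yield-for-cleanup advice from item 4 can be sketched as follows (a generic example, not tied to this project):

```python
import os
import tempfile

import pytest

@pytest.fixture
def temp_file():
    # setup: create an empty temporary file
    fd, path = tempfile.mkstemp()
    os.close(fd)
    yield path          # the test body runs at this point
    os.remove(path)     # teardown: runs after the test, pass or fail

def test_can_write_to_temp_file(temp_file):
    with open(temp_file, "w") as fh:
        fh.write("data")
    assert os.path.getsize(temp_file) == 4
```

Everything before the yield is setup, everything after it is teardown; pytest runs the teardown even when the test fails.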

Advanced Tips

Custom Markers

INI
# pytest.ini
markers =
    unit: marks tests as unit tests
    integration: marks tests as integration tests
    regression: marks tests as regression tests
    slow: marks tests as slow tests
    network: marks tests requiring internet

Test Configuration

PYTHON
# conftest.py
@pytest.fixture(scope="session")
def api_client():
    """Client shared across all tests"""
    return SomeApiClient()

@pytest.fixture(autouse=True)
def setup_test_environment():
    """Auto-used fixture for all tests"""
    # Setup code here
    yield
    # Cleanup code here

Conclusion

You now understand:

How to structure a project with a service layer, a Flask app, and a full test suite.

How to write unit, integration, functional, performance, and regression tests.

How to use fixtures, mocks, parametrization, markers, and coverage.

This is professional-level test architecture that will scale with your project.

Full project on GitHub
