Have you ever found yourself digging through log files, trying to figure out why your application isn’t working as expected? Or spent hours manually testing a workflow after what seemed like a minor change? If so, you’re not alone. In this article, I’ll share my journey with test-driven development (TDD) and how it transformed my approach to building software.
I used to be skeptical about TDD. The idea of writing tests before writing any actual code seemed counterintuitive to me. How could I test something that didn’t even exist yet? However, working on complex projects changed my perspective entirely.
When I navigated unfamiliar code bases, TDD became my compass. Writing tests first forced me to think deeply about what I wanted my code to accomplish. Rather than diving straight into implementation, I had to clearly articulate my expectations – a process that often revealed gaps in my understanding and led to better designs.
The real value of test-driven development
TDD truly shines when you need to modify existing code. Instead of trying to mentally track all the intricate ways your changes might affect the system, you can rely on your test suite to detect any regressions. This safety net allows you to focus on the task at hand without worrying about unforeseen consequences.
As requirements evolve, my tests ensure that existing functionality remains intact while I add new features.
The turning point in my relationship with TDD came when I joined a large project with an unfamiliar code base. Making changes became a high-stakes game, as each modification risked having unintended consequences elsewhere in the system.
Writing tests first forced me to articulate exactly what I wanted to achieve. This clarity proved invaluable. Instead of jumping into the implementation details, I first had to answer some fundamental questions: What should this function return? How should it handle edge cases? What dependencies does it have?
Before writing a single line of implementation code, writing a test forces you to clearly define (see the sketch after this list):
- What inputs the function should accept
- What dependencies need to be mocked (if you’re not familiar with mocking, see the brief overview later in this blog)
- What the expected output structure looks like
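For example, here is a minimal test-first sketch for a hypothetical apply_discount() function (the module, the name, and the expected behavior are illustrative assumptions, not code from a real project):

# test_discount.py - written before apply_discount() exists
from discount import apply_discount

def test_apply_discount_returns_reduced_price():
    # Input: a price and a percentage discount; output: the reduced price
    assert apply_discount(price=100.0, percent=20) == 80.0

def test_apply_discount_never_goes_below_zero():
    # Edge case: an oversized discount must not produce a negative price
    assert apply_discount(price=10.0, percent=150) == 0.0

Only once these expectations are written down do you implement apply_discount() and iterate until the tests pass.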
Integration testing: Finding the right balance
Integration testing examines how components interact. Effective integration testing isn’t about testing everything at once – it’s about testing the right combination of components.
With integration testing, the goal isn’t to verify the output of a single function; it’s to determine whether a workflow still succeeds when one of its components is modified. That modified component is the one your test should focus on.
First, decide on the scope of your test. Then define:
- What input is needed for that workflow
- What output it should generate
- Which helper functions are called during the workflow
While testing the code that modifies a shopping cart on a website, for example, you don’t need to worry about how the items got into the shopping basket or how payment will work. What matters is whether you can change the quantity or remove items.
In an integration test, you don’t focus on dependencies or third-party tools. Just mock their outputs where necessary. The same applies to your helper functions that aren’t the focus of the test.
TDD is also helpful for existing code bases. Imagine you have a large function that spans hundreds of lines and has what feels like an insurmountable number of dependencies. Rather than jumping straight into adding your new feature, start by writing an integration test to check the current functionality of the code. As you go through the error messages in your test case, mock the dependencies in the code. After you complete your test, you’ll have a template showing what the actual test case for your new feature will look like. Any dependencies not relevant to your test have already been mocked.
Now you can easily write your code and adjust it based on the results of your test. To make your new feature more stable, use your test case to simulate various scenarios and check your code for weaknesses.
For example:
You have a function make_cake() that calls three sub-functions: get_ingredients(), make_batter() and bake(). You want to modify the make_batter() function and see how the changes affect the execution of the entire make_cake() function.
# cake_maker.py
def get_ingredients():
    """Get cake ingredients from some external source"""
    # In real life, this might call an API or database
    return {"flour": 200, "sugar": 150, "eggs": 2, "butter": 100}

def make_batter(ingredients):
    """Mix ingredients to create a batter"""
    # This is the function we want to test for real
    if not ingredients:
        return None
    return {
        "mixed": True,
        "quality": sum(ingredients.values()) / 10
    }

def bake(batter):
    """Bake the batter into a cake"""
    # Another external function we'll mock
    if not batter:
        return {"success": False}
    return {"success": True, "taste_score": batter["quality"] * 2}

def make_cake():
    """Main workflow function"""
    ingredients = get_ingredients()
    batter = make_batter(ingredients)
    cake = bake(batter)
    return cake
In your integration test, you mock the responses of get_ingredients() and bake(), because those two functions are not the focus of the test. What you want to understand is how changes to make_batter() affect make_cake(), given that the other two functions perform as expected.
# test_cake_maker.py
from cake_maker import make_cake

def test_make_cake_real_batter_only(mocker):
    """Test make_cake() but only test the make_batter function for real"""
    # Mock the ingredients function
    mock_ingredients = {"flour": 200, "sugar": 150, "eggs": 2, "butter": 100}
    mock_ingr = mocker.patch('cake_maker.get_ingredients', return_value=mock_ingredients)

    # Mock the bake function
    mock_cake = {"success": True, "taste_score": 90}
    mock_bake = mocker.patch('cake_maker.bake', return_value=mock_cake)

    # Call the function - this will use our mocked functions but the real make_batter
    result = make_cake()

    # Verify the result
    assert result == mock_cake

    # Verify the bake function was called with the correct batter
    expected_batter = {
        "mixed": True,
        "quality": sum(mock_ingredients.values()) / 10
    }
    actual_batter = mock_bake.call_args[0][0]
    assert actual_batter["mixed"] == expected_batter["mixed"]
    assert actual_batter["quality"] == expected_batter["quality"]

    # Verify our mocks were called
    mock_ingr.assert_called_once()
    mock_bake.assert_called_once()
Key points
- Mock the functions you’re not testing: We patch get_ingredients() and bake() to isolate make_batter().
- Let the real function run: We don’t patch make_batter() since it’s the function under test.
- Verify inputs and outputs: Check that the mocked functions receive the expected arguments from the real function.
- Keep the test logic simple: Focus on ensuring that the target function integrates properly within the workflow.
This approach allows you to test the behavior of a specific component within a larger system without worrying about external dependencies. This is, of course, a very simple example. In real-world scenarios, you’ll often deal with complex functions in the background that might call external services. You don’t want to call them every time for a variety of reasons, such as speed, authentication, or the desire to keep your test case simple.
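To simulate various scenarios with little extra code, as suggested above, you can combine such tests with pytest’s parametrize marker, which runs the same test once per input set. Here is a sketch against the make_batter() function from the example:

import pytest
from cake_maker import make_batter

@pytest.mark.parametrize("ingredients, expected_quality", [
    ({"flour": 200, "sugar": 150, "eggs": 2, "butter": 100}, 45.2),  # full recipe
    ({"flour": 100}, 10.0),  # a sparse recipe should still produce a batter
])
def test_make_batter_scenarios(ingredients, expected_quality):
    batter = make_batter(ingredients)
    assert batter["mixed"] is True
    assert batter["quality"] == expected_quality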
Short introduction to mocking
I’ve talked a lot about mocking in this blog. If you are not familiar with it, here’s a brief overview.
Mocks and integration testing for TDD
Mocking is essential for isolating the code under test. In short, it prevents your code from calling an existing subfunction and instead returns a dummy – often with a fixed return value. This is particularly helpful when, for example, you’re dealing with calls to AWS or any other external service that you’re obviously not responsible for maintaining. In such cases, it doesn’t make sense to test code that you can’t change.
The pytest-mock plugin makes this process more convenient by providing the mocker fixture. Let’s explore some powerful mocking techniques:
Basic mocking
def test_user_service(mocker):
    # Mock a database call
    mock_db_query = mocker.patch('services.db.query_user', return_value={"id": 1, "name": "John"})

    from services import get_user_details
    result = get_user_details(1)

    assert result["name"] == "John"
    mock_db_query.assert_called_once_with(1)
In this example, we have a function called get_user_details() that we want to test. This function calls another function, query_user(). Since we don’t want to test query_user(), we mock it to prevent an actual database call. By patching a function, we instruct the test suite not to call the existing function, but instead to return a fixed value whenever the patched function is called.
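The same technique keeps real external services such as AWS out of your tests. As a sketch (the storage module and its list_bucket_names() helper are assumptions for illustration), you can patch boto3.client so that no real AWS request is made:

# storage.py (assumed):
# import boto3
# def list_bucket_names():
#     s3 = boto3.client("s3")
#     return [b["Name"] for b in s3.list_buckets()["Buckets"]]

def test_list_bucket_names(mocker):
    # Patch boto3.client inside the storage module - no real AWS call is made
    mock_client = mocker.patch("storage.boto3.client")
    mock_client.return_value.list_buckets.return_value = {
        "Buckets": [{"Name": "logs"}, {"Name": "backups"}]
    }

    from storage import list_bucket_names
    assert list_bucket_names() == ["logs", "backups"]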
Controlling mock behavior with side_effect
The side_effect parameter offers more dynamic control than return_value:
def test_retry_mechanism(mocker):
    # Mock that raises an exception on the first call and succeeds on the second
    mocker.patch('services.external_api.call', side_effect=[ConnectionError("Timeout"), {"data": "success"}])

    from services import fetch_with_retry
    result = fetch_with_retry("endpoint")

    assert result == {"data": "success"}
side_effect can also take a function instead of a list, which is helpful when the sequence of responses you need to simulate is too complex to spell out as a simple list.
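A minimal sketch of the callable form, reusing the assumed services module from above:

def test_dynamic_responses(mocker):
    # side_effect can be a function; the mock returns whatever it returns
    def fake_api_call(endpoint):
        if endpoint == "unstable":
            raise ConnectionError("Timeout")
        return {"data": f"response for {endpoint}"}

    mocker.patch('services.external_api.call', side_effect=fake_api_call)

    from services import fetch_with_retry
    result = fetch_with_retry("stable")
    assert result == {"data": "response for stable"}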
Mocking attributes vs return values
Sometimes you need to mock an attribute rather than a return value. In this example, assume that we have a function, initialize(), which instantiates an app object by calling a subfunction, get_config(), which provides certain attributes for our app object. Here, we will mock some of the attributes in our config.
def test_configuration(mocker):
    from unittest.mock import MagicMock

    # Build a fake config object and set only the attributes we need
    config = MagicMock()
    config.DEBUG = True
    config.API_KEY = "test_key"

    mocker.patch('app.get_config', return_value=config)

    from app import initialize
    app = initialize()
    assert app.debug_mode is True
Mocking class methods
When you need to mock a method in a class, use patch.object:
def test_class_method(mocker):
    # If you have a class like:
    # class MyClass:
    #     def my_class_func(self):
    #         return "real result"
    from my_module import MyClass

    # Mock the class method
    mocker.patch.object(MyClass, "my_class_func", return_value="mocked result")

    # Now any call to MyClass().my_class_func() will return "mocked result"
    instance = MyClass()
    assert instance.my_class_func() == "mocked result"
Understanding where to patch
One of the trickiest parts of mocking is determining the correct location to patch. The general rule is to patch the function where it’s imported, not where it’s defined.
Consider this example:
modules/my_func.py:
def meine_func(var):
    # Function definition here
    return "real result"

scripts/other_func.py:

from modules.my_func import meine_func

def random_func():
    var = 2
    return meine_func(var)
If you want to mock meine_func when it’s called from within random_func, you need to patch it at the import location:
def test_meine_func(mocker):
    # Patch where the function is imported
    mock_func = mocker.patch("scripts.other_func.meine_func", return_value="mocked result")

    from scripts.other_func import random_func
    result = random_func()

    assert result == "mocked result"
    mock_func.assert_called_once_with(2)
Inspecting mock calls
To check how your mocks were called, you can inspect the mock_calls attribute:
def test_check_calls(mocker):
    mock_service = mocker.patch("my_module.service_client.update")

    # Run the function that should call the service
    from my_module import update_user
    update_user(user_id=123, name="Alice")

    # Check that the mock was called correctly
    assert mock_service.call_count == 1
    mock_service.assert_called_once_with(user_id=123, name="Alice")

    # For more detailed inspection
    print(mock_service.mock_calls)  # Shows all calls with arguments
The mock_calls attribute provides you with a list of all calls made to your mock, including their arguments. This is invaluable for verifying that your code interacts correctly with its dependencies.
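Because mock_calls is a plain list, you can also compare it directly against expected call objects – a sketch, reusing the assumed my_module from above:

from unittest.mock import call

def test_check_multiple_calls(mocker):
    mock_service = mocker.patch("my_module.service_client.update")

    from my_module import update_user
    update_user(user_id=123, name="Alice")
    update_user(user_id=456, name="Bob")

    # Each entry in mock_calls records one call together with its arguments
    assert mock_service.mock_calls == [
        call(user_id=123, name="Alice"),
        call(user_id=456, name="Bob"),
    ]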
Conclusion: The test-driven mindset
Test-driven development is not just a technique – it’s a shift in mindset that changes how you approach software development. By focusing on expected outcomes first, you gain clarity about what you’re building and why.
The initial investment of time spent writing tests pays dividends in the form of faster debugging, more confident refactoring, and a deeper understanding of your system. When combined with strategic mocking and well-scoped integration tests, TDD creates a development workflow that’s not only more reliable, but often more enjoyable as well.
So the next time you’re about to dive into coding, consider taking a step back and asking: “What test would prove that my solution works?” Your future self will thank you when the test suite catches an issue before it ever reaches production.