This document outlines the testing conventions for the Torrust Tracker Deployer project.
Test code should be held to the same quality standards as production code. Tests are not second-class citizens in the codebase.
- Maintainability: Tests should be easy to update when requirements change
- Readability: Tests should be clear and understandable at first glance
- Reliability: Tests should be deterministic and not flaky
- Isolation: Each test should be independent and not affect other tests
- Documentation: Tests serve as living documentation of the system's behavior
Just like production code, tests should follow:
- DRY (Don't Repeat Yourself): Extract common setup logic into helpers and builders
- Single Responsibility: Each test should verify one behavior
- Clear Intent: Test names and structure should make the purpose obvious
- Clean Code: Apply the same refactoring and quality standards as production code
Remember: If the test code is hard to read or maintain, it will become a burden rather than an asset.
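The helper-and-builder idea above can be sketched as follows. This is a hypothetical illustration, assuming an `Environment` type and an `EnvironmentBuilder`; the real project types may differ. The builder starts from known-good defaults so each test overrides only the fields it cares about:

```rust
// Hypothetical example: a test data builder for DRY test setup.
// `Environment` and `EnvironmentBuilder` are illustrative, not real project types.
#[derive(Debug, PartialEq)]
struct Environment {
    name: String,
    ssh_port: u16,
}

struct EnvironmentBuilder {
    name: String,
    ssh_port: u16,
}

impl EnvironmentBuilder {
    // Known-good defaults: tests only override what they care about.
    fn new() -> Self {
        Self {
            name: "e2e-full".to_string(),
            ssh_port: 22,
        }
    }

    fn with_ssh_port(mut self, port: u16) -> Self {
        self.ssh_port = port;
        self
    }

    fn build(self) -> Environment {
        Environment {
            name: self.name,
            ssh_port: self.ssh_port,
        }
    }
}

#[test]
fn it_should_override_only_the_ssh_port() {
    let env = EnvironmentBuilder::new().with_ssh_port(2222).build();
    assert_eq!(env.ssh_port, 2222);
    assert_eq!(env.name, "e2e-full");
}
```

When a default changes, only the builder needs updating, not every test that uses it.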
All tests should follow the AAA (Arrange-Act-Assert) pattern, also known as Given-When-Then:
- Arrange (Given): Set up the test data and preconditions
- Act (When): Execute the behavior being tested
- Assert (Then): Verify the expected outcome
This pattern makes tests:
- Easy to read and understand
- Clear about what is being tested
- Simple to maintain and modify
```rust
#[test]
fn it_should_create_ansible_host_with_valid_ipv4() {
    // Arrange: Set up test data
    let ip = IpAddr::V4(Ipv4Addr::new(192, 168, 1, 1));

    // Act: Execute the behavior
    let host = AnsibleHost::new(ip);

    // Assert: Verify the outcome
    assert_eq!(host.as_ip_addr(), &ip);
}
```

This structure provides:

- Clarity: Each section has a clear purpose
- Structure: Consistent test organization across the codebase
- Debugging: Easy to identify which phase is failing
- Maintenance: Simple to modify specific parts of the test
When testing the same behavior with different inputs and expected outputs, prefer parameterized tests over loops in the test body.
Why? Parameterized tests provide:
- Better Test Isolation: Each parameter combination runs as a separate test case
- Clearer Test Output: Individual test cases show up separately in test results
- Parallel Execution: Test framework can run each case in parallel
- Easier Debugging: When a test fails, you know exactly which parameter combination caused it
- Better IDE Support: Modern IDEs can run individual parameterized test cases
How? Use the rstest crate for parameterized testing.
```rust
#[test]
fn it_should_create_state_file_in_environment_specific_subdirectory() {
    let test_cases = vec![
        ("e2e-config", "e2e-config/state.json"),
        ("e2e-full", "e2e-full/state.json"),
        ("e2e-provision", "e2e-provision/state.json"),
    ];

    for (env_name, expected_path) in test_cases {
        // Test logic here...
        // If one case fails, you don't know which one without debugging
    }
}
```

Problem: If the second iteration fails, the test output only shows the test name, not which specific case failed.
```rust
use rstest::rstest;

#[rstest]
#[case("e2e-config", "e2e-config/state.json")]
#[case("e2e-full", "e2e-full/state.json")]
#[case("e2e-provision", "e2e-provision/state.json")]
fn it_should_create_state_file_in_environment_specific_subdirectory(
    #[case] env_name: &str,
    #[case] expected_path: &str,
) {
    // Test logic here...
    // Each case runs as a separate test with clear identification
}
```

Benefits: Test output shows individual cases:

```text
it_should_create_state_file_in_environment_specific_subdirectory::case_1 ✅
it_should_create_state_file_in_environment_specific_subdirectory::case_2 ✅
it_should_create_state_file_in_environment_specific_subdirectory::case_3 ✅
```
Use parameterized tests when:
- ✅ Testing the same behavior with multiple input/output combinations
- ✅ Validating edge cases with different values
- ✅ Testing configuration variations
- ✅ Verifying data transformation with various inputs
Don't use parameterized tests when:
- ❌ Each case tests fundamentally different behavior (use separate tests)
- ❌ The test logic differs significantly between cases
- ❌ You only have one or two cases (just write separate tests)
Add rstest to your Cargo.toml:
```toml
[dev-dependencies]
rstest = "0.23"
```

Then import it in your test module:

```rust
#[cfg(test)]
mod tests {
    use rstest::rstest;
    // ... other imports
}
```

Principle: Test output should be clean and focused on test results, not cluttered with user-facing messages.
User-facing progress messages (emojis, status indicators, formatting) should never appear in test output as they:
- Make test output noisy and difficult to read
- Obscure actual test failures and important information
- Create inconsistent output between test runs
- Interfere with CI/CD log parsing and analysis
The project enforces clean test output through:
- Silent Verbosity by Default: `TestContext` uses `VerbosityLevel::Silent` to suppress all user-facing messages
- Test-Specific Output Utilities: Use `TestUserOutput::wrapped_silent()` for clean test output
- No User Messages in Tests: Tests should focus on verifying behavior, not producing user output
```rust
// ✅ Good: Uses silent verbosity by default
let context = TestContext::new();

// ✅ Good: Explicit silent output for API tests
let output = TestUserOutput::wrapped_silent();

// ❌ Bad: Allows user-facing progress messages
let output = TestUserOutput::wrapped(VerbosityLevel::Normal);

// ❌ Bad: User output appears in test stderr
user_output.progress("⏳ Processing..."); // This will show in test output!
```

When testing user output functionality itself:
```rust
#[test]
fn it_should_format_progress_message_correctly() {
    // Capture output in test buffers, don't let it reach stderr
    let test_output = TestUserOutput::new(VerbosityLevel::Normal);
    test_output.output.progress("⏳ Processing...");

    // Verify the message format in buffers
    let stderr = test_output.stderr();
    assert!(stderr.contains("⏳ Processing..."));
}
```

Clean test output ensures:
- Readable Results: Developers can quickly identify failing tests
- Reliable CI/CD: Automated systems can parse test output correctly
- Professional Appearance: Test output looks polished and focused
- Debugging Efficiency: Important error messages aren't buried in noise
Remember: If you see user-facing messages (emojis, progress indicators) in test output, it indicates a testing infrastructure issue that should be fixed.
When writing new tests:
- Always use the `it_should` prefix and describe the specific behavior being validated
- Use `MockClock` for any time-dependent tests instead of `Utc::now()`
- Follow the AAA pattern for clear test structure
- Ensure tests are isolated and don't interfere with each other
- Keep test output clean - Use `TestContext::new()` or `TestUserOutput::wrapped_silent()` to avoid user-facing messages
- Use test builders for command testing to simplify setup
- Test commands at multiple levels: unit, integration, and E2E
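The mock-clock guideline above can be illustrated with a minimal sketch. This is an assumption-laden example: the project's real `MockClock` API (documented in the Clock Service guide linked below) may differ, and the `Clock` trait and Unix-seconds representation here are purely illustrative:

```rust
// Hypothetical sketch of a clock abstraction for deterministic time tests.
// `Clock` and `MockClock` are illustrative, not the project's real types.
use std::cell::Cell;

trait Clock {
    fn now_unix(&self) -> u64;
}

// A mock clock holds a fixed time that tests can advance explicitly,
// so assertions never depend on the real wall clock.
struct MockClock {
    now: Cell<u64>,
}

impl MockClock {
    fn new(start: u64) -> Self {
        Self { now: Cell::new(start) }
    }

    fn advance(&self, secs: u64) {
        self.now.set(self.now.get() + secs);
    }
}

impl Clock for MockClock {
    fn now_unix(&self) -> u64 {
        self.now.get()
    }
}

#[test]
fn it_should_report_the_mocked_time() {
    let clock = MockClock::new(1_000);
    clock.advance(60);
    assert_eq!(clock.now_unix(), 1_060);
}
```

Code under test that accepts a `&dyn Clock` (or a generic `C: Clock`) can then run against the mock in tests and a real system clock in production.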
This section provides links to specialized testing documentation organized by topic:
- Unit Testing - Naming conventions, behavior-driven testing
- Resource Management - TempDir usage, test isolation, cleanup
- Testing Commands - Command test patterns, builders, mocks, E2E
- Clock Service - MockClock usage for deterministic time tests
- Coverage - Code coverage targets, tools, CI/CD workflow, and PR guidelines
- E2E Testing Guide - End-to-end testing setup and usage
- Error Handling - Testing error scenarios
- Module Organization - How to organize test code