This guide explains our code coverage practices, expectations, and how to work with coverage reports in the Torrust Tracker Deployer project.
Code coverage is a metric that measures which lines of code are executed during tests. It helps us:
- Identify Untested Code: Find areas that lack test coverage
- Maintain Quality: Ensure new features include adequate tests
- Track Progress: Monitor testing improvements over time
- Support Refactoring: Give confidence when changing code
Important: Coverage is a tool, not a goal. High coverage doesn't guarantee bug-free code, but it does indicate that code has been exercised by tests. We use coverage as one of many indicators of code quality.
- Overall Coverage Target: ≥ 70% (lines)
- Critical Business Logic: ≥ 90% (domain layer, commands, steps)
- Shared Utilities: ≥ 95% (clock, username, command executor)
These are targets, not strict requirements. PRs may be merged below these thresholds with proper justification.
The following modules are intentionally excluded from strict coverage requirements:
- Location: `src/bin/`, `src/main.rs`
- Reason: These are executables tested through actual execution
- Coverage: Not measured
- Testing: Validated through E2E tests and manual execution
- Location: `src/testing/e2e/tasks/`
- Reason: Testing utilities that support E2E tests
- Coverage: Not required
- Testing: Validated through E2E test execution
When mocking adds no value or requires real infrastructure:
- Location:
  - `src/adapters/lxd/` - Requires real LXD
  - `src/adapters/tofu/` - Requires real OpenTofu
  - `src/infrastructure/remote_actions/` - Requires real remote infrastructure
- Reason: These interact with external systems that cannot be easily mocked
- Coverage: Tested via E2E tests
- Location: `packages/linting/`
- Reason: Primarily executed as a binary, wraps external tools
- Coverage: 30-40% is acceptable
- Testing: Validated through actual execution
- Reason: Some error variants only occur in real infrastructure failures
- Coverage: Partial coverage is acceptable
- Testing: Critical error paths should be tested; rare edge cases may remain uncovered
Install cargo-llvm-cov:
```bash
cargo install cargo-llvm-cov
```

Validate that coverage meets the threshold:

```bash
cargo cov-check
```

This command:
- Runs tests with coverage instrumentation
- Calculates line coverage percentage
- Fails if coverage is below the threshold
- Shows a summary of coverage by file
Example Output (Passing):
```text
    Finished test [unoptimized + debuginfo] target(s) in 34.56s
     Running unittests src/lib.rs (target/llvm-cov-target/debug/deps/torrust_tracker_deployer_lib-abc123)
...
Filename                      Regions  Missed Regions  Cover  Functions  Missed Functions  Executed  Lines  Missed Lines  Cover  Branches  Missed Branches  Cover
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
src/application/commands/...  85.67%   ...  87.23%  ...
...
TOTAL  ...  ...  87.23%  ...  ...  ...  ...  ...  87.23%  ...  ...  ...
```
Example Output (Failing):
```text
...
TOTAL  ...  ...  67.45%  ...  ...  ...  ...  ...  67.45%  ...  ...  ...
error: coverage is below 70%
```
Useful for integration with coverage tools and IDEs:
```bash
cargo cov-lcov
```

Output: `.coverage/lcov.info`
Use this format with:
- IDE plugins (VS Code, IntelliJ)
- Coverage visualization tools
- CI/CD integrations
For Codecov service integration:
```bash
cargo cov-codecov
```

Output: `.coverage/codecov.json`
For human-readable, detailed coverage analysis:
```bash
cargo cov-html
```

Output: `target/llvm-cov/html/index.html`

Open in browser:

```bash
open target/llvm-cov/html/index.html      # macOS
xdg-open target/llvm-cov/html/index.html  # Linux
```

The HTML report provides:
- Line-by-line coverage: See exactly which lines are covered
- Function coverage: Identify untested functions
- Branch coverage: Understand conditional logic coverage
- Color coding: Green (covered), red (not covered), yellow (partially covered)
For a quick terminal-based summary:
```bash
cargo cov
```

This shows coverage statistics in the terminal without generating files.
All coverage commands use cargo aliases defined in `.cargo/config.toml`:

| Alias | Full Command | Purpose |
|---|---|---|
| `cargo cov` | `cargo llvm-cov` | Basic coverage report in terminal |
| `cargo cov-check` | `cargo llvm-cov --all-features --workspace --fail-under-lines 70` | Validate coverage threshold |
| `cargo cov-lcov` | `cargo llvm-cov --lcov --output-path=./.coverage/lcov.info` | Generate LCOV format |
| `cargo cov-codecov` | `cargo llvm-cov --codecov --output-path=./.coverage/codecov.json` | Generate Codecov JSON |
| `cargo cov-html` | `cargo llvm-cov --html` | Generate HTML report |
Code coverage is not checked in pre-commit to keep local development fast and focused on core quality checks.
Coverage is excluded from pre-commit because:
- Speed: Coverage analysis is slow (1-2 minutes) and would slow down local commits
- Reliability: Coverage tools can fail due to missing binaries or tool issues
- Developer Experience: Fast feedback loop for core quality checks (linting, tests)
- CI Enforcement: Coverage threshold is enforced where it matters most - in CI
Developers can still check coverage locally when needed:
```bash
# Check current coverage (fast)
cargo cov-check

# Generate detailed HTML report
cargo cov-html
```

Use cases for local coverage:
- Before submitting a PR with new features
- Investigating coverage gaps in specific modules
- Understanding which code paths need more testing
Code coverage is automatically generated in GitHub Actions for every push and pull request.
File: `.github/workflows/coverage.yml`
The workflow generates coverage in multiple formats:
- Text Summary - Terminal output for quick review
- HTML Report - Detailed, browsable coverage report
- Coverage Artifacts - Uploaded for download and review
- Generate text coverage summary (cargo cov)
- Generate HTML coverage report (cargo cov-html)
- Upload HTML report as GitHub Actions artifact

To download the report:

- Navigate to your PR or commit
- Click on "Checks" tab
- Select "Coverage Report" workflow
- Scroll to "Artifacts" section
- Download "coverage-html-report"
- Extract and open `index.html` in a browser
The HTML report includes:
- Overall coverage percentages
- Per-file coverage breakdown
- Line-by-line coverage visualization
- Function and branch coverage details
The coverage workflow:
- Does NOT block merges if coverage is low
- Provides visibility into coverage changes
- Helps reviewers assess test quality
- Generates artifacts for detailed analysis
Why? Same reasons as pre-commit: security patches, refactoring, and WIP commits should not be blocked by coverage metrics.
When adding new features, aim for:
- New domain logic: ≥ 90% coverage
- New commands/steps: ≥ 70% coverage
- New utilities: ≥ 95% coverage
- Infrastructure adapters: E2E tests + reasonable unit tests
Note: These are targets, not blockers. PRs may be merged below these thresholds with proper justification.
When fixing bugs:
- Add a test that reproduces the bug
- Verify the test fails before the fix
- Ensure the test passes after the fix
- Maintain or improve existing coverage
This ensures the bug won't regress in the future.
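As a sketch of this workflow (the `parse_port` helper and the bug it fixes are hypothetical, not code from this project):

```rust
/// Hypothetical helper used to illustrate the bug-fix workflow.
/// Suppose the reported bug was that port 0 passed validation.
fn parse_port(input: &str) -> Result<u16, String> {
    let port: u16 = input
        .parse()
        .map_err(|_| format!("invalid port: {input}"))?;
    if port == 0 {
        // The fix: the buggy version was missing this check.
        return Err("port 0 is not allowed".to_string());
    }
    Ok(port)
}

#[cfg(test)]
mod tests {
    use super::*;

    /// Regression test written first: it fails against the buggy
    /// version (which returned Ok(0)) and passes after the fix.
    #[test]
    fn it_rejects_port_zero() {
        assert!(parse_port("0").is_err());
    }

    #[test]
    fn it_accepts_a_valid_port() {
        assert_eq!(parse_port("8080"), Ok(8080));
    }
}
```

Keeping the regression test in the suite is what prevents the bug from silently returning later.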
When refactoring code:
- Maintain or improve existing coverage
- Prefer adding tests over decreasing project coverage
- Avoid decreasing overall project coverage below 70%
- Document any intentional coverage reductions
- Update tests to reflect new structure
Documentation-only changes:
- No coverage requirements - tests are not needed
- Coverage is only checked in CI - no local coverage overhead
- Focus on markdown linting and link validation
If your PR reduces coverage:
- Explain why in the PR description
- Justify the change (e.g., "Removed dead code", "Refactored untestable adapter")
- Plan when/how coverage will be restored (if applicable)
- Reviewers will evaluate on a case-by-case basis
Acceptable reasons for coverage drops:
- Removing untested legacy code
- Refactoring to move code to E2E-only adapters
- Adding infrastructure code that requires real systems
- Moving code to excluded modules (binaries, linting package)
Coverage types:
- Line Coverage: Percentage of lines executed
- Function Coverage: Percentage of functions called
- Branch Coverage: Percentage of conditional branches taken
We primarily track line coverage with the 70% target.
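The difference between line and branch coverage shows up in conditionals. In this illustrative (hypothetical) function, a test suite that only calls `clamp_percent(50)` can report the conditional's lines as executed while exercising only one of its two branches:

```rust
/// Hypothetical example: few lines, two branches.
/// A single call marks the `if` as executed for line coverage,
/// but branch coverage only reaches 100% when tests hit both outcomes.
fn clamp_percent(value: i32) -> i32 {
    if value > 100 {
        100
    } else {
        value
    }
}
```

This is why a file can show high line coverage while still hiding untested conditional paths.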
Color Coding:
- Green: Line was executed by tests ✅
- Red: Line was never executed ❌
- Yellow: Partial coverage (e.g., one branch of an `if` statement) ⚠️
Focus Areas:
- Domain entities/value objects: Should be near 100%
- Commands/Steps: Should be mostly green (70%+)
- Utilities: Should be almost all green (95%+)
- Adapters: May have more red (E2E tested)
If coverage is low:
- Identify which modules have low coverage
- Determine if those modules are excluded (see "What We DON'T Require Coverage For")
- For non-excluded modules, assess:
- Are there missing unit tests?
- Are there untested error paths?
- Are there unused functions that can be removed?
- Prioritize coverage improvements for:
- Business-critical logic
- Complex algorithms
- Error handling paths
Error Handling:
- Error paths are often undertested
- Test both the `Ok` and `Err` cases of `Result`-returning functions
- Test error propagation and recovery
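A minimal sketch of covering both variants, using a hypothetical `parse_threshold` function rather than real project code:

```rust
use std::num::ParseIntError;

/// Hypothetical fallible function whose Err branch is easy to forget.
fn parse_threshold(raw: &str) -> Result<u8, ParseIntError> {
    raw.trim().parse::<u8>()
}

#[cfg(test)]
mod tests {
    use super::*;

    // Happy path: the Ok variant.
    #[test]
    fn it_parses_a_valid_threshold() {
        assert_eq!(parse_threshold(" 70 "), Ok(70));
    }

    // Error path: without this test, the Err branch stays red
    // in the coverage report.
    #[test]
    fn it_fails_on_non_numeric_input() {
        assert!(parse_threshold("seventy").is_err());
    }

    // Boundary error path: 300 overflows u8.
    #[test]
    fn it_fails_on_out_of_range_input() {
        assert!(parse_threshold("300").is_err());
    }
}
```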
Edge Cases:
- Boundary conditions
- Empty collections
- Null/None values
- Maximum/minimum values
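For example (a hypothetical `average` helper, not project code), each edge case above maps to its own test input:

```rust
/// Hypothetical helper: integer average of a slice.
/// The empty-collection edge case is modeled explicitly with Option.
/// Note: the sum may overflow for values near u64::MAX — a
/// maximum-value boundary worth its own test in real code.
fn average(values: &[u64]) -> Option<u64> {
    if values.is_empty() {
        // Edge case: no elements means no average.
        return None;
    }
    Some(values.iter().sum::<u64>() / values.len() as u64)
}
```

Tests for `&[]` (empty), `&[7]` (single item), and `&[10, 20]` (multiple items) each exercise a different path.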
Conditional Logic:
- Both branches of `if`/`else`
- All cases in `match` statements
- Loop conditions (empty, single item, multiple items)
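As a sketch with a hypothetical state enum (not the project's actual types): each `match` arm is a distinct branch in the coverage report, so a suite that only exercises one variant leaves the others red:

```rust
/// Hypothetical state enum for illustration only.
#[derive(Debug, PartialEq)]
enum DeployState {
    Created,
    Provisioned,
    Destroyed,
}

/// Every arm below is a separate branch; full branch coverage
/// requires a test input for each variant.
fn describe(state: &DeployState) -> &'static str {
    match state {
        DeployState::Created => "created",
        DeployState::Provisioned => "provisioned",
        DeployState::Destroyed => "destroyed",
    }
}
```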
When reviewing PRs:
- Check coverage change: Did overall coverage increase, decrease, or stay the same?
- Assess new code coverage: Are new features adequately tested?
- Verify test quality: Do tests actually validate behavior, or just exercise code?
- Review excluded modules: Is any code moved to excluded areas justified?
- Evaluate coverage drops: If coverage decreased, is the reason acceptable?
Request additional tests when:
- ✅ New domain logic has <90% coverage
- ✅ New commands/steps have <70% coverage
- ✅ Critical business logic is untested
- ✅ Error paths are completely untested
- ✅ Tests exist but don't validate actual behavior (dummy tests)
Accept lower coverage when:
- ✅ Code is in an excluded module (binaries, E2E infrastructure, adapters)
- ✅ Error conditions require real infrastructure failures
- ✅ Code is being removed/deprecated
- ✅ Refactoring temporarily reduces coverage with a plan to restore it
- ✅ Security patch needs immediate merge
- Download the HTML coverage artifact from GitHub Actions
- Open `index.html` in a browser
- Navigate to changed files
- Verify that:
- New code is covered
- Critical paths are tested
- Error handling is reasonable
- ✅ Run coverage locally before submitting PRs
- ✅ Focus on meaningful tests that validate behavior
- ✅ Test error paths not just happy paths
- ✅ Use coverage to find gaps in test suites
- ✅ Document intentional exclusions in code comments when appropriate
- ✅ Prioritize domain logic coverage over infrastructure code
- ✅ Write tests that will catch bugs, not just increase percentages
- ❌ Don't write tests just for coverage without validating behavior
- ❌ Don't obsess over 100% coverage - it's not realistic or valuable
- ❌ Don't delay security patches for coverage
- ❌ Don't block refactoring due to temporary coverage drops
- ❌ Don't test implementation details - test behavior
- ❌ Don't ignore coverage warnings - investigate before dismissing
- ❌ Don't remove tests to avoid fixing them - fix or document why
Problem: `cargo cov-check` reports coverage below 70%
Solutions:
- Run `cargo cov-html` to see a detailed report
- Identify which modules have low coverage
- Check if they're in excluded categories
- Add tests for critical uncovered code
- If justified, proceed with PR and explain in description
Problem: Coverage seems incorrect for tested code
Possible Causes:
- Test is not running: Verify the test is not `#[ignore]`d
- Feature flags: Check if the code requires `--all-features`
- Conditional compilation: Code may be platform-specific
- Dead code: Code may be unreachable
Solutions:
- Run `cargo test` and verify all tests pass
- Check that `cargo cov-check` uses `--all-features`
- Review conditional compilation attributes
Problem: Coverage workflow fails in GitHub Actions
Common Causes:
- Tests failing: Coverage requires tests to pass
- Missing dependencies: `cargo-llvm-cov` installation failed
- Timeout: Tests taking too long
Solutions:
- Check test output in workflow logs
- Verify tests pass locally: `cargo test`
- Review workflow step outputs
- Testing Conventions - Main testing documentation and principles
- Unit Testing - Unit test naming conventions and patterns
- Testing Commands - Command testing strategies
- Pre-commit Integration - Pre-commit checks and enforcement
- Development Principles - Quality standards and principles
- Error Handling - Error handling patterns and testing
- cargo-llvm-cov Documentation
- Conventional Commits - Commit message format
- LLVM Coverage Mapping Format - Technical details