GitHub Issue: #1 - Roadmap
This document outlines the development roadmap for the Torrust Tracker Deployer project. Each task is marked with:
- `[ ]` - Not completed
- `[x]` - Completed
When starting work on a new feature:
- Create the feature documentation in the `docs/features/` folder and commit it
- Open an issue on GitHub linking to the feature folder in the repository
- Add the new issue as a child issue of the main EPIC issue
Note: See `docs/features/README.md` for detailed conventions and a process guide for creating new features.
Epic Issue: #2 - Scaffolding for main app
- 1.1 Setup logging - Issue #3 ✅ Completed
- 1.2 Create command `torrust-tracker-deployer destroy` to destroy an environment ✅ Completed
- Parent EPIC: Implement `destroy` Command - GitHub Issue #8
- Split into two child EPICs for incremental delivery:
- Child EPIC #9: App Layer Destroy Command - Core business logic
- Child EPIC #10: UI Layer Destroy Command - CLI interface
- 1.3 Refactor extract shared code between testing and production for app bootstrapping ✅ Completed
- 1.4 Improve command to use better abstraction to handle presentation layer ✅ Completed
- User output architecture improvements implemented
- Epic #102 completed
- Message trait system, sink abstraction, and theme support added
- Folder module structure with focused submodules
- 1.5 Create command `torrust-tracker-deployer create` to create a new environment ✅ Completed
- EPIC: Implement Create Environment Command - GitHub Issue #34
- 1.6 Create command `torrust-tracker-deployer provision` to provision VM infrastructure (UI layer only) ✅ Completed - Issue #174
- Note: The App layer ProvisionCommand is already implemented; this task focuses on the console subcommand interface
- Implementation should call the existing ProvisionCommand business logic
- Handle user input, validation, and output presentation
- 1.7 Create command `torrust-tracker-deployer configure` to configure provisioned infrastructure (UI layer only) ✅ Completed - Issue #180
- Note: The App layer ConfigureCommand is already implemented; this task focuses on the console subcommand interface
- Implementation should call the existing ConfigureCommandHandler business logic
- Handle user input, validation, and output presentation
- Enables transition from "provisioned" to "configured" state via CLI
- 1.8 Create command `torrust-tracker-deployer test` to verify deployment infrastructure (UI layer only) ✅ Completed - Issue #188
- Note: The App layer TestCommandHandler is already implemented; this task focuses on the console subcommand interface
- Implementation should call the existing TestCommandHandler business logic
- Handle user input, validation, and output presentation
- Enables verification of deployment state via CLI (cloud-init, Docker, Docker Compose)
Note: See `docs/research/UX/` for detailed UX research that will be useful when implementing the features in this section.
Future Enhancement: The `torrust-tracker-deployer deploy` porcelain command (intelligent orchestration of plumbing commands) will be implemented after the core plumbing commands are stable. See `docs/features/hybrid-command-architecture/` for the complete specification.
Epic Issue: #205 - Add Hetzner Provider Support ✅ Completed
- 2.1 Add Hetzner provider support (Phase 1: Make LXD Explicit) ✅ Completed
- 2.1.1 Add Provider enum and ProviderConfig types - Issue #206 ✅ Completed
- 2.1.2 Update UserInputs to use ProviderConfig - Issue #207 ✅ Completed
- 2.1.3 Update EnvironmentCreationConfig DTO - Issue #208 ✅ Completed
- 2.1.4 Parameterize TofuTemplateRenderer by provider - Issue #212 ✅ Completed
- 2.1.5 Update environment JSON files and E2E tests ✅ Completed (part of #212)
- 2.1.6 Update user documentation - Issue #214 ✅ Completed
- 2.2 Add Hetzner provider support (Phase 2: Add Hetzner) ✅ Completed
- Hetzner OpenTofu templates implemented
- Full deployment workflow tested with Hetzner Cloud
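The Phase 1 refactor above (adding a Provider enum and ProviderConfig types) could be sketched roughly as follows. This is a minimal illustration, not the project's actual API: the `Provider` and `ProviderConfig` names come from issues #206-#207, but the variants' fields here are assumptions.

```rust
// Hypothetical sketch of the Provider/ProviderConfig split (issues #206-#207).
// Field names are illustrative assumptions, not the real implementation.
#[derive(Debug, Clone, PartialEq)]
enum Provider {
    Lxd,
    Hetzner,
}

#[derive(Debug, Clone, PartialEq)]
enum ProviderConfig {
    // LXD is local, so only a profile name is assumed here.
    Lxd { profile: String },
    // Hetzner Cloud needs remote credentials and a server type (assumption).
    Hetzner { api_token: String, server_type: String },
}

impl ProviderConfig {
    // Each configuration variant knows which provider it belongs to,
    // which lets e.g. a template renderer be parameterized by provider.
    fn provider(&self) -> Provider {
        match self {
            ProviderConfig::Lxd { .. } => Provider::Lxd,
            ProviderConfig::Hetzner { .. } => Provider::Hetzner,
        }
    }
}

fn main() {
    let config = ProviderConfig::Lxd { profile: "default".to_string() };
    println!("{:?}", config.provider()); // prints "Lxd"
}
```

Making the provider an explicit enum (rather than an implicit LXD default) is what allows templates and user inputs to branch per provider without scattering string checks.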
Note: These are internal app layer commands (like ProvisionCommand or ConfigureCommand), not console commands. The approach is to slice by functional services rather than deployment stages - we fully deploy a working stack from the beginning and incrementally add new services.
- 3.1 Finish ConfigureCommand ✅ Completed - Epic #16
- 3.2 Implement ReleaseCommand and RunCommand with vertical slices - Epic #216
Strategy: Build incrementally with working deployments at each step. Each slice adds a new service to the docker-compose stack.
- 3.2.1 Hello World slice (scaffolding) - Issue #217 ✅ Completed
- Create `release` and `run` commands with a minimal docker-compose template
- Deploy and run a simple hello-world container to validate the full pipeline
- 3.2.2 Torrust Tracker slice - Issue #220 ✅ Completed
- Replace hello-world with Torrust Tracker service
- Add tracker configuration template (start with hardcoded defaults, then progressively expose configuration options)
- 3.2.3 MySQL slice - Issue #232 ✅ Completed
- Add MySQL service to docker-compose stack
- Allow user to choose between SQLite and MySQL in environment config
- 3.2.4 Prometheus slice - Issue #238 ✅ Completed
- Add Prometheus service for metrics collection
- 3.2.5 Grafana slice - Issue #246 ✅ Completed
- Add Grafana service for metrics visualization
Notes:
- Each slice delivers a working deployment
- Configuration complexity grows incrementally (hardcoded → environment config → full flexibility)
- Detailed implementation tasks will be defined in EPIC issues
- 4.1 Create docker image for the deployer to use it without needing to install the dependencies (OpenTofu, Ansible, etc) - Issue #264 ✅ Completed
- Docker image published to Docker Hub
- CI/CD workflow for automated builds
- Security scanning with Trivy
- 5.1 `torrust-tracker-deployer show` - Display environment information and current state - Issue #241 ✅ Completed
- 5.2 `torrust-tracker-deployer test` - Run application tests ✅ Completed
- 5.3 `torrust-tracker-deployer list` - List environments or deployments - Issue #260 ✅ Completed
Note: The `test` console subcommand is already partially implemented. The `show` command displays stored environment data (read-only, no remote verification). A future `status` command may be added for service health checks.
- 6.1 Add HTTPS support with Caddy for all HTTP services - Issue #272 ✅ Completed
- Implemented Caddy TLS termination proxy
- Added HTTPS support for HTTP tracker
- Added HTTPS support for tracker API
- Added HTTPS support for Grafana
- Research Complete: Issue #270 - Caddy evaluation successful, production deployment verified
Epic Issue: #309 - Add backup support
- 7.1 Research database backup strategies - Issue #310 ✅ Completed
- Investigated SQLite and MySQL backup approaches
- Recommended maintenance-window hybrid approach (container + crontab)
- Built and tested POC with 58 unit tests
- Documented findings in `docs/research/backup-strategies/`
- 7.2 Implement backup support - Issue #315 (see spec)
- Add backup container templates (Dockerfile, backup.sh)
- Add backup service to Docker Compose template
- Extend environment configuration schema with backup settings
- Deploy backup artifacts via Ansible playbooks
- Install crontab for scheduled maintenance-window backups
- Supports: MySQL dumps, SQLite file copy, config archives
- Note: Volume management is out of scope - user provides a mounted location
- 8.1 Add levels of verbosity as described in the UX research
- Implement `-v`, `-vv`, `-vvv` flags for user-facing output
- See `docs/research/UX/` for detailed UX research
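The repeated-flag pattern above can be sketched as a simple mapping from command-line arguments to a verbosity level. This is only an illustration of the idea; the real implementation would more likely use an argument parser such as clap (e.g. a counted `-v` flag) rather than the hand-rolled helper below.

```rust
// Hypothetical sketch: map -v / -vv / -vvv flags to a verbosity level.
// The actual deployer would use its CLI parser for this; names are illustrative.
fn verbosity_from_args(args: &[&str]) -> u8 {
    args.iter()
        .map(|arg| match *arg {
            "-v" => 1,   // basic progress output
            "-vv" => 2,  // detailed operational output
            "-vvv" => 3, // full debug-level output
            _ => 0,
        })
        .max()
        .unwrap_or(0)
}

fn main() {
    // e.g. `torrust-tracker-deployer provision -vv` → level 2
    println!("verbosity: {}", verbosity_from_args(&["provision", "-vv"]));
}
```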
Add new commands to allow users to take advantage of the deployer even if they do not want to use all functionalities. This enables partial adoption of the tool.
These commands complete a trilogy of "lightweight" entry points:
- `register` - For users with pre-provisioned instances
- `validate` - For users who only want to validate a deployment configuration
- `render` - For users who only want to build artifacts and handle deployment manually
This makes the deployer more versatile for different scenarios and more AI-agent friendly (dry-run commands provide feedback without side effects).
- 9.1 Implement `validate` command
- Validate deployment configuration without executing any deployment steps
- See feature specification: `docs/features/config-validation-command/`
- 9.2 Implement artifact generation command
- Command name TBD - candidates: `render`, `generate`, `export`, `prepare`, `scaffold`
- Generate all build artifacts (docker-compose, tracker config, Ansible playbooks, etc.) to a `build/` directory
- Users can copy these files to their own servers and deploy manually
- Target audience: System administrators who only need the final configuration files
- Generate all artifacts and let users decide which ones to use
- TODO: Create feature specification in `docs/features/`
Minor changes to improve the output of some commands and overall user experience.
- 10.1 Add DNS setup reminder in `provision` command output
- Display reminder when any service has a domain configured
- See draft: `docs/issues/drafts/dns-setup-reminder-in-provision-command.md`
- 10.2 Improve `run` command output with service URLs
- Show service URLs immediately after services start
- Include a hint about the `show` command for full details
- See draft: `docs/issues/drafts/improve-run-command-output-with-service-urls.md`
- 10.3 Add DNS resolution check to `test` command
- Verify configured domains resolve to the expected instance IP
- Advisory warning only (doesn't fail tests) - DNS is decoupled from service tests
- See draft: `docs/issues/drafts/add-dns-resolution-check-to-test-command.md`
- 10.4 Add `purge` command to remove local environment data
- Removes `data/{env}/` and `build/{env}/` for destroyed environments
- Allows reusing environment names after destruction
- Users don't need to know internal storage details
- See draft: `docs/issues/drafts/add-purge-command-to-remove-local-data.md`
Add features and documentation that make the use of AI agents to operate the deployer easier, more efficient, more reliable, and less prone to hallucinations.
Context: We assume users will increasingly interact with the deployer indirectly via AI agents (GitHub Copilot, Cursor, etc.) rather than running commands directly. This section ensures AI agents have the best possible experience when working with the deployer.
- 11.1 Consider using agentskills.io for AI agent capabilities
- Agent Skills is an open format for extending AI agent capabilities with specialized knowledge and workflows
- Developed by Anthropic, adopted by Claude Code, OpenAI Codex, Amp, and others
- Provides progressive disclosure: metadata at startup, instructions on activation, resources on demand
- Skills can bundle scripts, templates, and reference materials
- Evaluate compatibility with the current `AGENTS.md` approach
- See issue: #274
- See spec: `docs/issues/274-consider-using-agentskills-io.md`
- 11.2 Add AI-discoverable documentation headers to template files
- Templates generate production config files (docker-compose, tracker.toml, Caddyfile, etc.)
- Documentation is moving from templates to Rust wrapper types (published on docs.rs)
- Problem: AI agents in production only see rendered output, not the source repo
- Solution: Add standardized header to templates with links to repo, wrapper path, and docs.rs
- Enables AI agents to find documentation even when working with deployed configs
- See draft: `docs/issues/drafts/add-ai-discoverable-documentation-headers-to-templates.md`
- 11.3 Provide configuration examples and questionnaire for AI agent guidance
- Problem: AI agents struggle with the many valid configuration combinations
- Questionnaire template: structured decision tree to gather all required user information
- Example dataset: real-world scenarios mapping requirements to validated configs
- Covers: provider selection, database type, tracker protocols, HTTPS, monitoring, etc.
- Benefits: few-shot learning for agents, reduced hallucination, training/RAG dataset
- Can integrate with the `create-environment-config` skill from task 11.1
- See draft: `docs/issues/drafts/provide-config-examples-and-questionnaire-for-ai-agents.md`
- 11.4 Add dry-run mode for all commands
- Allow AI agents (and users) to preview what will happen before executing operations
- Particularly valuable for destructive commands (`destroy`, `stop`)
- Flag: `--dry-run` shows planned actions without executing
- Reduces risk when AI agents operate autonomously
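The dry-run idea above boils down to planning actions before executing them, and stopping after the planning step when the flag is set. A minimal sketch, with hypothetical names (the real flow lives in the App layer command handlers):

```rust
// Hypothetical sketch of a --dry-run guard for a destructive command.
// Function and action names are illustrative, not the project's API.
fn destroy(env_name: &str, dry_run: bool) -> Vec<String> {
    // Plan the actions first, without side effects.
    let planned = vec![
        format!("run infrastructure destroy for '{env_name}'"),
        format!("remove stored state for '{env_name}'"),
    ];
    if dry_run {
        // Report what would happen and return without executing anything.
        return planned
            .iter()
            .map(|action| format!("[dry-run] {action}"))
            .collect();
    }
    // ... real execution of each planned action would happen here ...
    planned
}

fn main() {
    // An agent (or user) previews the destroy before committing to it.
    for line in destroy("e2e-full", true) {
        println!("{line}");
    }
}
```

Separating "plan" from "execute" like this is also what makes the planned-action list reusable for structured output (e.g. a future `--json` flag).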
Features considered valuable but out of scope for v1. We want to release the first version and wait for user acceptance before investing more time. These can be revisited based on user feedback.
| Feature | Rationale | Notes |
|---|---|---|
| Machine-readable JSON output (`--json` flag) | Easy to add thanks to the MVC pattern, but not critical for v1 | Structured output helps AI agents parse results reliably |
| MCP (Model Context Protocol) server | Native AI integration without shell commands | Would let AI agents call the deployer as MCP tools directly |
| Structured error format for AI agents | Errors are already being improved in section 10 | Could formalize with error codes and fix suggestions in JSON |
- This roadmap will be linked to an EPIC issue on GitHub for tracking progress
- Each major feature should have corresponding documentation in `docs/features/` before implementation begins