This directory contains all application-related components for the Torrust Tracker Demo project - everything needed to deploy, configure, and manage the Torrust Tracker application itself.
```text
application/
├── docs/                         # Application documentation
│   ├── production-setup.md       # Production deployment guide
│   ├── deployment.md             # Deployment procedures
│   ├── backups.md                # Application backup procedures
│   ├── rollbacks.md              # Application rollback procedures
│   ├── useful-commands.md        # Common application commands
│   ├── firewall-requirements.md  # Network access requirements
│   └── media/                    # Application-specific images and diagrams
│       ├── torrust-tracker-grafana-dashboard.png
│       └── do-firewall-configuration.png
├── share/                        # Application resources
│   ├── bin/                      # Utility scripts
│   │   ├── ssl_renew.sh
│   │   ├── time-running.sh
│   │   ├── tracker-db-backup.sh
│   │   └── tracker-filtered-logs.sh
│   ├── dev/home/                 # Development configurations
│   └── grafana/dashboards/       # Grafana dashboard configurations
│       ├── metrics.json
│       ├── stats.json
│       └── README.md
├── compose.yaml                  # Docker Compose configuration
├── .env.production               # Production environment variables
└── README.md                     # This file
```
This directory handles:

- Service Deployment: Torrust Tracker, Nginx, Prometheus, Grafana
- Application Configuration: Tracker settings, database connections
- Service Orchestration: Docker Compose service management
- Application Data: Database, logs, metrics, dashboards
- Application Security: SSL certificates, service authentication
- Application Monitoring: Metrics collection, alerting, dashboards
The application stack includes:

- Docker & Docker Compose: Container orchestration
- Torrust Tracker: The main BitTorrent tracker application
- Nginx: Reverse proxy and SSL termination
- Prometheus: Metrics collection and storage
- Grafana: Metrics visualization and dashboards
- MySQL: Default database backend for production
- Certbot: SSL certificate management
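The stack above maps onto Docker Compose roughly as follows. This is an illustrative sketch, not the repository's actual `compose.yaml`; image tags, service names, and port mappings are assumptions (6969/udp, 7070, and 1212 are the tracker's default UDP, HTTP, and API ports):

```yaml
services:
  tracker:
    image: torrust/tracker:latest # assumed image name
    ports:
      - "6969:6969/udp" # UDP tracker
      - "7070:7070" # HTTP tracker (behind Nginx in production)
      - "1212:1212" # API and metrics
    depends_on:
      - mysql
  mysql:
    image: mysql:8.0
  proxy:
    image: nginx:latest # reverse proxy and SSL termination
  prometheus:
    image: prom/prometheus:latest
  grafana:
    image: grafana/grafana:latest
```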
The Torrust Tracker Demo uses MySQL as the default database backend for production deployments. This provides:
- Reliability: Production-grade database with ACID compliance
- Scalability: Support for high-throughput tracking operations
- Data Integrity: Consistent data storage and retrieval
- Performance: Optimized for concurrent tracker operations
Database Service: The MySQL service is automatically configured with:
- Database initialization scripts
- Proper networking and security
- Data persistence across container restarts
- Health checks and monitoring
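A sketch of what such a service definition typically looks like (illustrative only; the environment variable names, volume name, and health-check parameters are assumptions, not the repository's actual `compose.yaml`):

```yaml
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    volumes:
      - mysql_data:/var/lib/mysql # persists data across container restarts
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      retries: 5

volumes:
  mysql_data:
```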
For development and testing environments, you can optionally configure SQLite by modifying the tracker configuration, though MySQL is recommended for all production use cases.
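Switching backends comes down to changing the database driver in the tracker configuration. The exact keys vary between tracker versions, so treat the following TOML as an illustrative sketch (connection credentials and paths are placeholders):

```toml
# MySQL (production default)
[core.database]
driver = "mysql"
path = "mysql://tracker_user:password@mysql:3306/torrust_tracker"

# SQLite alternative for development and testing
# [core.database]
# driver = "sqlite3"
# path = "./storage/tracker/lib/database/sqlite3.db"
```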
```bash
# Deploy application services
docker compose -f application/compose.yaml up -d

# Check service status
docker compose -f application/compose.yaml ps

# View logs
docker compose -f application/compose.yaml logs -f
```

```bash
# 1. Setup infrastructure (from repository root)
make dev-setup
# Log out and log back in for permissions

# 2. Configure SSH key
make setup-ssh-key
# Edit infrastructure/terraform/local.tfvars with your SSH public key

# 3. Deploy VM and application
make apply                                        # Deploy VM
make ssh                                          # Access VM
docker compose -f application/compose.yaml up -d  # Deploy application

make destroy                                      # Clean up
```

Once deployed, the tracker is available at:
- HTTP Tracker: https://tracker.torrust-demo.com/announce
- UDP Tracker: udp://tracker.torrust-demo.com:6969/announce
For detailed information about all tracker ports and their purposes, see Port Documentation.
The demo includes comprehensive monitoring with Grafana dashboards:
- Torrust Tracker: BitTorrent tracker with HTTP and UDP support
- Web Interface: Management and monitoring interface
- API Endpoints: REST API for tracker management
- Metrics Collection: Prometheus metrics for monitoring
- Visualization: Grafana dashboards for analytics
- Reverse Proxy: Nginx for SSL termination and routing
- SSL Certificates: Automated certificate management
- Log Management: Centralized logging and filtering
- Backup System: Database and configuration backups
- Health Monitoring: Service health checks and alerting
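Prometheus pulls tracker metrics on a schedule; a minimal scrape configuration could look like the following (the job name, target, and metrics path are assumptions — the tracker's API typically listens on port 1212 and may require an admin token):

```yaml
scrape_configs:
  - job_name: "torrust-tracker"
    scrape_interval: 15s
    metrics_path: /api/v1/metrics # assumed path; check the tracker API docs
    static_configs:
      - targets: ["tracker:1212"] # tracker API port on the compose network
```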
This directory focuses on application concerns. For infrastructure concerns (VMs, networking, system setup), see the ../infrastructure/ directory.
Application = "What runs and how it's configured"
Infrastructure = "Where and how the application runs"
- Infrastructure: Use `make apply` to provision the VM
- Application: Deploy services with Docker Compose
- Testing: Run integration tests
- Iteration: Make changes and repeat
- Infrastructure: Provision Hetzner servers
- Application: Deploy using production configuration
- Monitoring: Enable metrics and alerting
- Maintenance: Automated backups and updates
- Metrics: Prometheus scrapes application metrics
- Dashboards: Grafana provides visualization
- Logs: Centralized logging with filtering
- Health Checks: Service availability monitoring
- Alerts: Notification system for issues
- SSL/TLS: Automatic certificate management
- Service Isolation: Container-based security
- Access Control: Authentication and authorization
- Data Protection: Encrypted data at rest and in transit
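Automated certificate management usually reduces to a scheduled renewal job. A sketch of the idea as a crontab entry (the install path, schedule, and log location are assumptions; the repository ships its own renewal script at `application/share/bin/ssl_renew.sh`):

```text
# Illustrative crontab entry: attempt certificate renewal daily at 03:00
0 3 * * * /home/torrust/application/share/bin/ssl_renew.sh >> /var/log/ssl_renew.log 2>&1
```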
- [Production Setup](docs/production-setup.md) - Production deployment guide
- [Deployment Procedures](docs/deployment.md) - Step-by-step deployment
- [Backup Procedures](docs/backups.md) - Data backup and recovery
- [Rollback Procedures](docs/rollbacks.md) - Application rollback procedures
- [Useful Commands](docs/useful-commands.md) - Common operations and commands
- [Firewall Requirements](docs/firewall-requirements.md) - Network access needs
When adding application documentation:
- Application docs: Docker, services, deployment, operations, configuration
- Keep it practical: Focus on deployment, configuration, and operations
- Include examples: Provide working command examples
- Test procedures: Document testing and validation steps
- Cross-reference: Link to related application documentation
Application = "What runs and how it's configured"
Application documentation should cover:
- Docker Compose service configuration
- Application deployment procedures
- Service-level monitoring and logging
- Application backup and recovery
- SSL certificate management
- Application-specific troubleshooting
See ../infrastructure/ for infrastructure-specific documentation.
This demo repository uses Docker containers for all services, including the Torrust Tracker UDP component, even though this may not provide optimal performance for high-throughput UDP tracking operations.
The decision to use Docker for all services, including the performance-critical UDP tracker, prioritizes:
- Simplicity: Single orchestration method (Docker Compose) for all services
- Consistency: Identical deployment process across environments
- Maintainability: Easier updates and dependency management
- Documentation: Clear, reusable examples for users
- Demo Focus: Emphasizes functionality demonstration over peak performance
While Docker networking may introduce some overhead for UDP operations compared to running the tracker binary directly on the host, this trade-off aligns with the repository's primary goals:
- Demo Environment: Showcasing Torrust Tracker functionality
- Frequent Updates: Easy deployment of new tracker versions
- User-Friendly: Simple setup process for evaluation and testing
For production deployments requiring maximum UDP performance, consider:
- Running the tracker binary directly on the host
- Using host networking mode for containers
- Implementing kernel-level network optimizations
- Disabling connection tracking for UDP traffic
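For example, host networking removes Docker's bridge and NAT layer from the UDP path. A hedged sketch of what that change could look like in a compose file (the service and image names are assumptions):

```yaml
services:
  tracker:
    image: torrust/tracker:latest # assumed image name
    network_mode: host # bypasses the bridge network and NAT for UDP
    # Note: `ports:` mappings are ignored with host networking; the tracker
    # binds its configured ports (e.g. udp/6969) directly on the host.
```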
These optimizations will be covered in dedicated performance documentation outside this demo repository.
Reference: See ADR-002 for the complete rationale behind this design decision.