Provide Configuration Examples and Questionnaire for AI Agent Guidance #339

@josecelano

Description

Overview

Create structured resources to help AI agents guide users through environment configuration creation. This includes a decision-tree questionnaire template and a curated dataset of example configurations covering common deployment scenarios (minimal development, production with HTTPS, monitoring-enabled, etc.).

AI agents currently have tools (template creation, validation, the JSON schema, documentation) but lack structured guidance for gathering user requirements and mapping them to valid configurations. The result is trial-and-error interactions and avoidable configuration errors.

Specification

See detailed specification: docs/issues/339-provide-config-examples-and-questionnaire-for-ai-agents.md

Implementation Plan

Phase 1: Add Description Field to Schema (1 hour)

  • Add optional description field to schemas/environment-config.json
  • Update Rust DTO in src/application/command_handlers/create/config/dto.rs
  • Field type: Option<String>, free-text, no length constraints at schema level
  • Update validation tests to accept configs with description field
  • Run tests to ensure backward compatibility (existing configs without description still work)
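The DTO change in Phase 1 can be sketched as follows. This is an illustrative sketch, not the actual `dto.rs` contents: the struct and field names other than `description` are placeholders, and the real DTO presumably derives serde traits so that configs omitting the field still deserialize.

```rust
// Hypothetical sketch of the Phase 1 DTO change: `description` is
// Option<String>, so existing configs without the field keep working.
#[derive(Debug, Default)]
struct EnvironmentConfigDto {
    name: String,
    // New optional, free-text field; no length constraint at schema level.
    description: Option<String>,
}

fn main() {
    // A legacy config with no description still constructs fine.
    let legacy = EnvironmentConfigDto {
        name: "dev".into(),
        ..Default::default()
    };
    assert!(legacy.description.is_none());

    // A new-style config carries a short use-case description.
    let described = EnvironmentConfigDto {
        name: "prod".into(),
        description: Some("Production Hetzner deployment with HTTPS.".into()),
    };
    println!("{:?}", legacy.description);
    println!("{:?}", described.description);
}
```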

Phase 2: Questionnaire Template (1 hour)

  • Create docs/ai-training/questionnaire.md with full decision tree
  • Include validation rules and constraints for each question
  • Add conditional logic notes (if X then ask Y)
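The "if X then ask Y" conditional logic can be sketched in code. The question wording and branching below are hypothetical illustrations, not the actual questionnaire content; they only show the shape of the decision tree (provider choice and HTTPS usage gating follow-up questions).

```rust
// Hypothetical sketch of the questionnaire's conditional logic:
// earlier answers determine which follow-up questions are asked.
fn next_questions(provider: &str, wants_https: bool) -> Vec<&'static str> {
    let mut questions = vec!["Which database backend? (sqlite | mysql)"];
    if provider == "hetzner" {
        // Cloud deployments need provider-specific details.
        questions.push("Which Hetzner server type and location?");
    }
    if wants_https {
        // Staging certificates are the safe default for testing.
        questions.push("Use staging certificates (use_staging: true)?");
    }
    questions
}

fn main() {
    let qs = next_questions("hetzner", true);
    assert_eq!(qs.len(), 3);
    for q in &qs {
        println!("{q}");
    }
}
```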

Phase 3: Core Example Configurations (2-3 hours)

  • Create 6 core scenario JSON files with description field
  • Scenarios: 01-minimal LXD, 02-full-stack LXD (staging), 03-minimal Hetzner, 04-full-stack Hetzner (production), 05-MySQL development, 09-monitoring stack
  • Each description includes use case + key decisions (2-3 sentences)
  • Validate each config with cargo run -- validate --env-file <path>
  • Use fixture keys only (no real credentials)
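An example config for a scenario like 01-minimal LXD might look roughly like the fragment below. Only the description field is defined by this issue; every other key here is an illustrative placeholder, not the actual schema from schemas/environment-config.json.

```json
{
  "description": "Minimal LXD environment for local development. Uses SQLite and HTTP only to keep setup fast and dependency-free.",
  "provider": "lxd",
  "database": { "driver": "sqlite" },
  "ssh_key": "fixtures/testing_rsa"
}
```

Note the fixture key path (fixtures/testing_rsa) rather than a real credential, matching the acceptance criteria.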

Phase 4: Extended Example Configurations (2-3 hours)

  • Add 9 more scenario JSON files covering specific use cases
  • Cover: 06-production HTTPS (staging), 07-UDP-only, 08-HTTP-only with HTTPS (staging), 10-multi-domain (staging), 11-private tracker, 12-high-availability (staging), 13-backup-focused (staging), 14-lightweight production (staging), 15-sqlite-monitoring
  • Validate all configs
  • Document common mistakes in README

Phase 5: Documentation and Index (1 hour)

  • Create docs/ai-training/README.md with overview and scenarios table
  • Include usage instructions for AI agents and human users
  • Add table mapping scenario IDs to files
  • Include guidance on when to use each scenario type

Phase 6: Integration Test for Examples (30 minutes)

  • Create integration test at tests/validate_examples.rs
  • Test iterates over all JSON files in docs/ai-training/examples/
  • For each example: run validate command and assert success
  • Similar to tests/e2e/validate_command.rs but for multiple files
  • Test ensures examples remain valid as schema evolves
  • Run test as part of CI to catch regressions early
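The integration test in Phase 6 could be sketched as below. The directory (docs/ai-training/examples/) and the validate --env-file invocation come from this issue; the exact test harness shape in tests/validate_examples.rs is assumed, and the sketch skips gracefully when the examples directory does not exist yet.

```rust
use std::{fs, path::Path, process::Command};

// Hypothetical sketch of tests/validate_examples.rs: iterate over every
// JSON example and assert that `validate --env-file <path>` succeeds.
fn main() {
    let dir = Path::new("docs/ai-training/examples");
    if !dir.exists() {
        println!("no examples directory yet; nothing to validate");
        return;
    }
    for entry in fs::read_dir(dir).expect("readable examples dir") {
        let path = entry.expect("dir entry").path();
        // Only JSON files are example configurations.
        if path.extension().and_then(|e| e.to_str()) != Some("json") {
            continue;
        }
        let status = Command::new("cargo")
            .args(["run", "--quiet", "--", "validate", "--env-file"])
            .arg(&path)
            .status()
            .expect("validate command runs");
        assert!(
            status.success(),
            "example failed validation: {}",
            path.display()
        );
        println!("validated {}", path.display());
    }
}
```

Because the test shells out to the real validate command, it keeps the examples honest as the schema evolves: any breaking schema change fails CI until the examples are updated.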

Acceptance Criteria

Quality Checks:

  • Pre-commit checks pass: ./scripts/pre-commit.sh
  • All linters pass (markdown, cspell)
  • All example JSON configurations validated with cargo run -- validate --env-file <path>
  • Integration test passes: cargo test validate_examples (validates all examples automatically)

Task-Specific Criteria:

  • Optional description field added to schema and validated
  • Backward compatibility maintained (configs without description still work)
  • Questionnaire template created at docs/ai-training/questionnaire.md
  • All 15 example JSON configurations created with description field
  • Each example validated with cargo run -- validate --env-file <path>
  • All descriptions are 2-3 sentences covering use case + key decisions
  • Example configurations use fixture keys only (e.g., fixtures/testing_rsa)
  • README created with scenarios table, usage instructions, and guidance
  • Directory structure matches specification (JSON files only, no markdown per scenario)
  • Full-stack scenarios (02 and 04) include all features: MySQL, Prometheus, Grafana, backup, domains
  • Scenarios 01-02 (LXD minimal and full-stack) and 03-04 (Hetzner minimal and full-stack) demonstrate the complete spectrum
  • All scenarios except 03-04 use LXD for local testing consistency
  • All LXD scenarios with HTTPS (02, 06, 08, 10, 12, 13, 14) use staging certificates (use_staging: true)
  • Hetzner production scenario (04) uses production certificates (no use_staging or use_staging: false)
  • Scenario 13 demonstrates backup without monitoring overhead
  • Scenario 14 demonstrates lightweight production (SQLite + HTTPS + backup)
  • Scenario 15 demonstrates monitoring stack with SQLite simplicity
  • Integration test tests/validate_examples.rs created and passing
  • All examples validated automatically by integration test (prevents regressions)
