Move documentation to botbook, update PROMPT to enforce botbook-only docs

- Removed E2E_TESTING_PLAN.md, README_E2E_TESTING.md, TEMP_STACK_SETUP.md from root
- All documentation now in gb/botbook/src/17-testing/
- Updated PROMPT.md v7.0 to forbid root .md files
- PROMPT.md is now the source of truth for testing patterns
- Developers refer to botbook and PROMPT.md, not scattered .md files
Rodrigo Rodriguez (Pragmatismo) 2025-12-06 11:23:57 -03:00
parent eb5c642559
commit 7d5a73e0d2
4 changed files with 37 additions and 1178 deletions


@@ -1,244 +0,0 @@
# E2E Testing Plan: Temporary Stack Architecture
## Overview
This document outlines the architecture for comprehensive E2E testing in the General Bots platform using a temporary, isolated stack that can be spawned for testing and automatically cleaned up.
## Problem Statement
Current challenges:
- E2E tests require a pre-configured environment
- Testing can interfere with the main development stack
- No easy way to test the complete flow: platform loading → botserver startup → login → chat → logout
- Integration tests are difficult to automate and reproduce
## Proposed Solution
### 1. Temporary Stack Option in BotServer
Add a new `--temp-stack` CLI flag to BotServer:
```bash
cargo run -- --temp-stack
# or with custom timeout
cargo run -- --temp-stack --temp-stack-timeout 300
```
**What it does:**
- Creates a temporary directory: `/tmp/botserver-test-{timestamp}-{random}/`
- Sets up all required services (PostgreSQL, MinIO, Redis, etc.) in this directory
- Configures BotServer to use this isolated environment
- Provides environment variables for test harness to connect
- Automatically cleans up on shutdown (SIGTERM/SIGINT)
- Optional timeout that auto-shuts down after N seconds (useful for CI/CD)
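A minimal sketch of how the unique, isolated directory could be derived; the helper name and the use of the `chrono` and `uuid` crates are assumptions, not the final implementation:
```rust
use std::path::PathBuf;

/// Hypothetical helper: derive a unique root such as
/// /tmp/botserver-test-{timestamp}-{random}/ for one temp-stack run.
fn temp_stack_root() -> PathBuf {
    let timestamp = chrono::Local::now().format("%Y%m%d-%H%M%S");
    // Short random suffix so parallel runs never collide on the same name.
    let suffix = uuid::Uuid::new_v4().to_string()[..8].to_string();
    std::env::temp_dir().join(format!("botserver-test-{}-{}", timestamp, suffix))
}
```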
### 2. E2E Test Flow
The complete user journey test will validate:
```
1. Platform Loading
└─ Health check endpoint responds
└─ UI assets served correctly
└─ Database migrations completed
2. BotServer Initialization
└─ Service discovery working
└─ Configuration loaded
└─ Dependencies connected
3. Authentication (Login)
└─ Navigate to login page
└─ Enter valid credentials
└─ Session created
└─ Redirected to dashboard
4. Chat Interaction
└─ Open chat window
└─ Send message
└─ Receive AI response
└─ Message history persisted
5. Logout
└─ Click logout button
└─ Session invalidated
└─ Redirected to login
└─ Cannot access protected routes
```
### 3. Test Architecture
#### Test Harness Enhancement
```rust
pub struct TemporaryStack {
    pub temp_dir: PathBuf,
    pub botserver_process: Child,
    pub botserver_url: String,
    pub services: ServiceManager,
}

impl TemporaryStack {
    pub async fn spawn() -> anyhow::Result<Self>;
    pub async fn wait_ready(&self) -> anyhow::Result<()>;
    pub async fn shutdown(mut self) -> anyhow::Result<()>;
}
```
#### E2E Test Structure
```rust
#[tokio::test]
async fn test_complete_user_journey() {
    // 1. Spawn temporary isolated stack
    let stack = TemporaryStack::spawn().await.expect("Failed to spawn stack");
    stack.wait_ready().await.expect("Stack failed to become ready");

    // 2. Setup browser
    let browser = Browser::new(browser_config()).await.expect("Browser failed");

    // 3. Test complete flow
    test_platform_loading(&browser, &stack).await.expect("Platform load failed");
    test_botserver_running(&stack).await.expect("BotServer not running");
    test_login_flow(&browser, &stack).await.expect("Login failed");
    test_chat_interaction(&browser, &stack).await.expect("Chat failed");
    test_logout_flow(&browser, &stack).await.expect("Logout failed");

    // 4. Cleanup (automatic on drop)
    drop(stack);
}
```
### 4. Implementation Phases
#### Phase 1: BotServer Temp Stack Support
- [ ] Add `--temp-stack` CLI argument
- [ ] Create `TempStackConfig` struct (see the sketch after these phase lists)
- [ ] Implement temporary directory setup
- [ ] Update service initialization to support temp paths
- [ ] Add cleanup on shutdown
#### Phase 2: Test Harness Integration
- [ ] Create `TemporaryStack` struct in test framework
- [ ] Implement stack spawning logic
- [ ] Add readiness checks
- [ ] Implement graceful shutdown
#### Phase 3: Complete E2E Test Suite
- [ ] Platform loading test
- [ ] BotServer initialization test
- [ ] Complete login → chat → logout flow
- [ ] Error handling and edge cases
#### Phase 4: CI/CD Integration
- [ ] Docker compose for CI environment
- [ ] GitHub Actions workflow
- [ ] Artifact collection on failure
- [ ] Performance benchmarks
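The `TempStackConfig` struct called out in Phase 1 is not specified in this plan; a minimal sketch of the fields it might carry (all names are assumptions):
```rust
use std::path::PathBuf;
use std::time::Duration;

/// Hypothetical shape for the Phase 1 `TempStackConfig`.
pub struct TempStackConfig {
    /// Root directory, e.g. /tmp/botserver-test-{timestamp}-{random}/
    pub root_dir: PathBuf,
    /// Non-default ports so the stack never collides with a dev environment.
    pub postgres_port: u16,
    pub redis_port: u16,
    pub minio_port: u16,
    /// Optional auto-shutdown for CI runs (`--temp-stack-timeout`).
    pub auto_shutdown: Option<Duration>,
    /// Keep the directory on failure for debugging.
    pub keep_on_error: bool,
}
```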
## Technical Details
### Environment Variables
When `--temp-stack` is enabled, BotServer outputs:
```bash
export BOTSERVER_TEMP_STACK_DIR="/tmp/botserver-test-2024-01-15-abc123/"
export BOTSERVER_URL="http://localhost:8000"
export DB_HOST="127.0.0.1"
export DB_PORT="5432"
export DB_NAME="botserver_test_abc123"
export REDIS_URL="redis://127.0.0.1:6379"
export MINIO_URL="http://127.0.0.1:9000"
```
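On the test-harness side, these variables can be read with plain `std::env`; a sketch, with fallback values assumed to match the defaults above:
```rust
use std::env;

/// Hypothetical helper: read the temp-stack endpoints exported above,
/// falling back to assumed defaults when a variable is absent.
fn stack_endpoints() -> (String, String, String) {
    let botserver = env::var("BOTSERVER_URL")
        .unwrap_or_else(|_| "http://localhost:8000".to_string());
    let redis = env::var("REDIS_URL")
        .unwrap_or_else(|_| "redis://127.0.0.1:6379".to_string());
    let minio = env::var("MINIO_URL")
        .unwrap_or_else(|_| "http://127.0.0.1:9000".to_string());
    (botserver, redis, minio)
}
```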
### Cleanup Strategy
- **Graceful**: On SIGTERM/SIGINT, wait for in-flight requests then cleanup
- **Timeout**: Auto-shutdown after `--temp-stack-timeout` seconds
- **Forceful**: If timeout reached, force kill processes and cleanup
- **Persistent on Error**: Keep temp dir if error occurs (for debugging)
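A sketch of how the graceful path could be wired with tokio's Unix signal handling; the `cleanup` future is a stand-in for whatever teardown the stack actually exposes:
```rust
use tokio::signal::unix::{signal, SignalKind};

/// Hypothetical shutdown loop (Unix): wait for SIGINT or SIGTERM, then clean up.
async fn wait_for_shutdown(cleanup: impl std::future::Future<Output = ()>) {
    let mut sigterm = signal(SignalKind::terminate()).expect("SIGTERM handler");
    let mut sigint = signal(SignalKind::interrupt()).expect("SIGINT handler");
    tokio::select! {
        _ = sigterm.recv() => {}
        _ = sigint.recv() => {}
    }
    // Graceful teardown: stop child processes, then remove the temp directory.
    cleanup.await;
}
```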
### Service Isolation
Each temporary stack includes:
```
/tmp/botserver-test-{id}/
├── postgres/
│   └── data/
├── redis/
│   └── data/
├── minio/
│   └── data/
├── botserver/
│   ├── logs/
│   ├── config/
│   └── cache/
└── state.json
```
## Benefits
1. **Isolation**: Each test gets a completely clean environment
2. **Reproducibility**: Same setup every time
3. **Automation**: Can run in CI/CD without manual setup
4. **Debugging**: Failed tests leave artifacts for investigation
5. **Performance**: Multiple tests can run in parallel with different ports
6. **Safety**: No risk of interfering with development environment
## Limitations
- **LXC Containers**: Containerization is not exercised; the stack runs services as plain processes
- **Network**: Tests run on localhost only
- **Performance**: Startup time ~10-30 seconds per test
- **Parallelization**: Need port management for parallel execution
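For the parallelization point above, one common approach is to let the OS pick free ports instead of hard-coding them; a small sketch, not part of the plan itself:
```rust
use std::net::TcpListener;

/// Ask the OS for a currently free TCP port by binding to port 0.
/// The port is only reserved while the listener is held, so there is a small
/// race window between dropping it and starting the real service.
fn free_port() -> std::io::Result<u16> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    Ok(listener.local_addr()?.port())
}
```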
## Usage Examples
### Run Single E2E Test
```bash
cargo test --test e2e_complete_flow -- --nocapture
```
### Run with Headed Browser (for debugging)
```bash
HEADED=1 cargo test --test e2e_complete_flow
```
### Keep Temp Stack on Failure
```bash
KEEP_TEMP_STACK_ON_ERROR=1 cargo test --test e2e_complete_flow
```
### Run All E2E Tests
```bash
cargo test --lib e2e:: -- --nocapture
```
## Monitoring & Logging
- BotServer logs: `/tmp/botserver-test-{id}/botserver.log`
- Database logs: `/tmp/botserver-test-{id}/postgres.log`
- Test output: stdout/stderr from test harness
- Performance metrics: Collected during each phase
## Success Criteria
- ✓ Platform fully loads without errors
- ✓ BotServer starts and services become ready within 30 seconds
- ✓ User can log in with test credentials
- ✓ Chat messages are sent and responses received
- ✓ User can log out and the session is invalidated
- ✓ All cleanup happens automatically
- ✓ Test runs consistently across multiple runs
- ✓ CI/CD integration works smoothly
## Next Steps
1. Implement `--temp-stack` flag in BotServer
2. Update config loading to support temp paths
3. Create `TemporaryStack` test utility
4. Write comprehensive E2E test suite
5. Integrate into CI/CD pipeline
6. Document for team


@@ -1,10 +1,25 @@
# BotTest Development Prompt Guide
# BotTest Development Prompt
**Version:** 6.1.0
**Version:** 7.0.0
**Purpose:** Test infrastructure for General Bots ecosystem
---
## CRITICAL RULE
🚫 **NO .md FILES IN ROOT OF ANY PROJECT**
All documentation goes in `botbook/src/17-testing/`:
- `README.md` - Testing overview
- `e2e-testing.md` - E2E test guide
- `architecture.md` - Testing architecture
- `performance.md` - Performance testing
- `best-practices.md` - Best practices
This PROMPT.md is the ONLY exception (it's for developers).
---
## Core Principle
**Reuse botserver bootstrap code** - Don't duplicate installation logic. The bootstrap module already knows how to install PostgreSQL, MinIO, Redis. We wrap it with test-specific configuration (custom ports, temp directories).
@@ -140,3 +155,23 @@ ctx.cleanup().await; // Explicit cleanup
- Each test gets a unique temp directory
- No shared state between tests
- Safe to run with `cargo test -j 8`
---
## Documentation Location
For guides, tutorials, and reference:
→ Use `botbook/src/17-testing/`
Examples:
- E2E testing setup → `botbook/src/17-testing/e2e-testing.md`
- Architecture details → `botbook/src/17-testing/architecture.md`
- Performance tips → `botbook/src/17-testing/performance.md`
Never create .md files at:
- ✗ Root of bottest/
- ✗ Root of botserver/
- ✗ Root of botapp/
- ✗ Any project root
All non-PROMPT.md documentation belongs in botbook.


@@ -1,349 +0,0 @@
# E2E Testing for General Bots Platform
## Quick Start
Run the complete platform flow test (loads UI → starts BotServer → login → chat → logout):
```bash
cd gb/bottest
# Without browser (HTTP-only tests)
cargo test --test e2e test_platform_loading_http_only -- --nocapture
cargo test --test e2e test_botserver_startup -- --nocapture
# With browser (requires WebDriver)
# 1. Start WebDriver first:
chromedriver --port=4444
# 2. In another terminal:
cargo test --test e2e test_complete_platform_flow_login_chat_logout -- --nocapture
```
## What Gets Tested
The complete platform flow test validates:
1. **Platform Loading**
   - UI assets are served
   - API endpoints respond
   - Database migrations completed
2. **BotServer Initialization**
   - Service is running
   - Health checks pass
   - Configuration loaded
3. **User Authentication**
   - Login page loads
   - Credentials accepted
   - Session created
   - Redirected to dashboard/chat
4. **Chat Interaction**
   - Chat interface loads
   - Messages can be sent
   - Bot responses received
   - Message history persists
5. **Logout Flow**
   - Logout button works
   - Session invalidated
   - Redirect to login page
   - Protected routes blocked
## Test Files
| File | Purpose |
|------|---------|
| `tests/e2e/platform_flow.rs` | ⭐ Complete user journey test |
| `tests/e2e/auth_flow.rs` | Authentication scenarios |
| `tests/e2e/chat.rs` | Chat message flows |
| `tests/e2e/dashboard.rs` | Dashboard functionality |
| `tests/e2e/mod.rs` | Test context and setup |
## Running Specific Tests
```bash
# Platform flow (complete journey)
cargo test --test e2e test_complete_platform_flow_login_chat_logout -- --nocapture
# Platform loading only (no browser needed)
cargo test --test e2e test_platform_loading_http_only -- --nocapture
# BotServer startup
cargo test --test e2e test_botserver_startup -- --nocapture
# Simpler login + chat
cargo test --test e2e test_login_and_chat_flow -- --nocapture
# Platform responsiveness
cargo test --test e2e test_platform_responsiveness -- --nocapture
# All E2E tests
cargo test --test e2e -- --nocapture
```
## Environment Variables
```bash
# Show browser window (for debugging)
HEADED=1 cargo test --test e2e -- --nocapture
# Custom WebDriver URL
WEBDRIVER_URL=http://localhost:4445 cargo test --test e2e -- --nocapture
# Skip E2E tests
SKIP_E2E_TESTS=1 cargo test
# Verbose logging
RUST_LOG=debug cargo test --test e2e -- --nocapture
# Run single-threaded (clearer output)
cargo test --test e2e -- --nocapture --test-threads=1
```
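Inside the test harness, these variables would typically be read once when the browser session is configured; a sketch using assumed helper and struct names:
```rust
use std::env;

/// Hypothetical browser settings derived from the variables above.
struct BrowserSettings {
    webdriver_url: String,
    headless: bool,
}

fn browser_settings() -> BrowserSettings {
    BrowserSettings {
        webdriver_url: env::var("WEBDRIVER_URL")
            .unwrap_or_else(|_| "http://localhost:4444".to_string()),
        // HEADED=1 shows the browser window; anything else stays headless.
        headless: env::var("HEADED").map(|v| v != "1").unwrap_or(true),
    }
}
```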
## Prerequisites
### For HTTP-only Tests
- Rust toolchain
- BotServer compiled
### For Browser Tests (Full E2E)
- Chrome/Chromium installed
- WebDriver (chromedriver) running on port 4444
- All HTTP test prerequisites
### Setup WebDriver
**Option 1: Local Installation**
```bash
# Download chromedriver from https://chromedriver.chromium.org/
# Place in PATH, then:
chromedriver --port=4444
```
**Option 2: Docker**
```bash
docker run -d -p 4444:4444 selenium/standalone-chrome
```
**Option 3: Docker Compose**
```bash
# Use provided docker-compose.yml if available
docker-compose up -d webdriver
```
## Architecture: Temporary Stack (Future)
The E2E tests are designed to work with **temporary, isolated stacks**:
```bash
# When implemented, this will spawn a temporary environment:
botserver --temp-stack
# This creates: /tmp/botserver-test-{timestamp}-{random}/
# With: PostgreSQL, Redis, MinIO, Mock LLM, Mock Auth
# Automatic cleanup after tests
```
**Benefits:**
- ✓ Isolation - Each test runs in separate environment
- ✓ Reproducibility - Same setup every time
- ✓ Automation - No manual setup required
- ✓ Cleanup - Automatic resource management
- ✓ Debugging - Optionally preserve stack on failure
See [TEMP_STACK_SETUP.md](TEMP_STACK_SETUP.md) for implementation details.
## Common Issues
### WebDriver Not Available
```bash
# Solution: Start WebDriver first
chromedriver --port=4444
# or
docker run -d -p 4444:4444 selenium/standalone-chrome
```
### Tests Hang or Timeout
```bash
# Run with timeout and single thread
timeout 120s cargo test --test e2e test_name -- --nocapture --test-threads=1
# With verbose logging
RUST_LOG=debug timeout 120s cargo test --test e2e test_name -- --nocapture --test-threads=1
```
### Port Already in Use
```bash
# Kill existing processes
pkill -f chromedriver
pkill -f "botserver"
pkill -f "postgres"
pkill -f "redis-server"
```
### Browser Connection Issues
```bash
# Use different WebDriver port
WEBDRIVER_URL=http://localhost:4445 cargo test --test e2e -- --nocapture
```
## Test Structure
Each test follows this pattern:
```rust
#[tokio::test]
async fn test_example() -> anyhow::Result<()> {
    // 1. Setup context
    let ctx = E2ETestContext::setup_with_browser().await?;

    // 2. Get browser
    let browser = ctx.browser.as_ref().unwrap();

    // 3. Run test steps
    browser.navigate(&ctx.base_url()).await?;

    // 4. Verify results
    assert!(some_condition);

    // 5. Cleanup (automatic)
    ctx.close().await;
    Ok(())
}
```
## Debugging
### View Test Output
```bash
# Show all output
cargo test --test e2e test_name -- --nocapture
# Show with timestamps
RUST_LOG=debug cargo test --test e2e test_name -- --nocapture
# Save to file
cargo test --test e2e test_name -- --nocapture 2>&1 | tee test_output.log
```
### See Browser in Action
```bash
# Run with visible browser
HEADED=1 cargo test --test e2e test_name -- --nocapture --test-threads=1
# This shows what the test is doing in real-time
```
### Check Server Logs
```bash
# BotServer logs while tests run
tail -f /tmp/bottest-*/botserver.log
# In another terminal:
cargo test --test e2e test_name -- --nocapture
```
## Performance
- **Platform loading test**: ~2-3 seconds
- **BotServer startup test**: ~5-10 seconds
- **Complete flow with browser**: ~30-45 seconds
- **Full E2E test suite**: ~2-3 minutes
## Integration with CI/CD
Example GitHub Actions workflow:
```yaml
name: E2E Tests
on: [push, pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    services:
      chromedriver:
        image: selenium/standalone-chrome
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rs/toolchain@v1
      - run: cd gb/bottest && cargo test --test e2e -- --nocapture
```
## What's Tested in Each Scenario
### `test_complete_platform_flow_login_chat_logout`
- ✓ Platform health check
- ✓ API responsiveness
- ✓ Login with credentials
- ✓ Dashboard/chat visibility
- ✓ Send message to bot
- ✓ Receive bot response
- ✓ Message appears in history
- ✓ Logout button works
- ✓ Session invalidated
- ✓ Protected routes blocked
### `test_platform_loading_http_only`
- ✓ Platform health endpoint
- ✓ API endpoints available
- ✓ No browser required
### `test_botserver_startup`
- ✓ Server process running
- ✓ Health checks pass
- ✓ No browser required
### `test_login_and_chat_flow`
- ✓ Minimal path through login and chat
- ✓ Requires browser
## Next Steps
1. **Run a simple test first**:
   ```bash
   cargo test --test e2e test_platform_loading_http_only -- --nocapture
   ```
2. **Set up WebDriver for browser tests**:
   ```bash
   chromedriver --port=4444
   ```
3. **Run the complete flow**:
   ```bash
   cargo test --test e2e test_complete_platform_flow_login_chat_logout -- --nocapture
   ```
4. **Add custom tests** in `tests/e2e/` using the same pattern
5. **Integrate into CI/CD** using the GitHub Actions example above
## Documentation
- [E2E Testing Plan](E2E_TESTING_PLAN.md) - Architecture and design
- [Temporary Stack Setup](TEMP_STACK_SETUP.md) - Advanced: Using isolated test stacks
- [Test Harness](src/harness.rs) - Test utilities and helpers
- [Platform Flow Tests](tests/e2e/platform_flow.rs) - Complete implementation
## Support
For issues or questions:
1. Check the troubleshooting section above
2. Review test output with `--nocapture` flag
3. Run with `RUST_LOG=debug` for detailed logging
4. Check server logs in `/tmp/bottest-*/`
5. Use `HEADED=1` to watch browser in action
## Key Metrics
Running `test_complete_platform_flow_login_chat_logout` provides:
- **Response Times**: Platform, API, and chat latencies
- **Resource Usage**: Memory and CPU during test
- **Error Rates**: Login failures, message timeouts, etc.
- **Session Management**: Login/logout cycle validation
- **Message Flow**: End-to-end chat message delivery
These metrics help identify performance bottlenecks and regressions.
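A minimal way to collect the response-time side of these metrics inside a test, using an assumed helper rather than any existing harness API:
```rust
use std::time::Instant;

/// Hypothetical wrapper: run one test phase and report how long it took.
async fn timed_phase<F, T>(name: &str, phase: F) -> T
where
    F: std::future::Future<Output = T>,
{
    let start = Instant::now();
    let result = phase.await;
    println!("{name} took {:?}", start.elapsed());
    result
}
```
For example, `timed_phase("login", test_login_flow(&browser, &stack)).await` would report the login latency alongside the usual test output.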


@@ -1,583 +0,0 @@
# Temporary Stack Setup Guide
## Overview
The temporary stack feature allows you to spawn an isolated BotServer environment for testing purposes. This guide explains how to implement and use this feature.
## Architecture
### What is a Temporary Stack?
A temporary stack is a self-contained, isolated instance of the General Bots platform that:
- Runs in a dedicated temporary directory
- Uses isolated database, cache, and storage
- Can be spawned and torn down automatically
- Doesn't interfere with your main development environment
- Perfect for E2E testing and integration tests
### File Structure
```
/tmp/botserver-test-{timestamp}-{random}/
├── postgres/
│   ├── data/              # PostgreSQL data directory
│   ├── postgres.log       # Database logs
│   └── postgresql.conf    # Database config
├── redis/
│   ├── data/              # Redis persistence
│   └── redis.log          # Redis logs
├── minio/
│   ├── data/              # S3-compatible storage
│   └── minio.log          # MinIO logs
├── botserver/
│   ├── config/            # BotServer configuration
│   ├── logs/
│   │   ├── botserver.log  # Main application logs
│   │   ├── api.log        # API logs
│   │   └── debug.log      # Debug logs
│   ├── cache/             # Local cache directory
│   └── state.json         # Stack state and metadata
└── env.stack              # Environment variables for this stack
```
## Implementation: BotServer Changes
### 1. Add CLI Arguments
Update `botserver/src/main.rs`:
```rust
use clap::Parser;
use std::path::PathBuf;
use std::time::Duration;

#[derive(Parser, Debug)]
#[command(author, version, about)]
struct Args {
    /// Enable temporary stack mode for testing
    #[arg(long)]
    temp_stack: bool,

    /// Custom temporary stack root directory.
    /// If not provided, uses /tmp/botserver-test-{timestamp}-{random}
    #[arg(long)]
    stack_root: Option<PathBuf>,

    /// Timeout in seconds for temporary stack auto-shutdown.
    /// Useful for CI/CD pipelines
    #[arg(long)]
    temp_stack_timeout: Option<u64>,

    /// Keep temporary stack directory after shutdown (for debugging)
    #[arg(long)]
    keep_temp_stack: bool,

    // ... existing arguments ...
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let args = Args::parse();

    if args.temp_stack {
        return run_temp_stack(args).await;
    }

    // ... normal startup code ...
}
```
### 2. Implement Temporary Stack Manager
Create `botserver/src/temp_stack.rs`:
```rust
use std::fs;
use std::path::{Path, PathBuf};
use std::process::{Command, Child};
use chrono::Local;
use uuid::Uuid;
use anyhow::{anyhow, Context};
use log::{info, debug};
use tokio::time::{sleep, Duration};

pub struct TemporaryStack {
    pub root_dir: PathBuf,
    pub postgres_dir: PathBuf,
    pub redis_dir: PathBuf,
    pub minio_dir: PathBuf,
    pub botserver_dir: PathBuf,
    pub postgres_process: Option<Child>,
    pub redis_process: Option<Child>,
    pub minio_process: Option<Child>,
    pub botserver_process: Option<Child>,
    pub keep_on_shutdown: bool,
    pub auto_shutdown_duration: Option<Duration>,
}

impl TemporaryStack {
    /// Create and initialize a new temporary stack
    pub async fn new(
        custom_root: Option<PathBuf>,
        keep_on_shutdown: bool,
        auto_shutdown: Option<u64>,
    ) -> anyhow::Result<Self> {
        // Generate unique directory name
        let timestamp = Local::now().format("%Y%m%d-%H%M%S");
        let unique_id = Uuid::new_v4().to_string()[..8].to_string();
        let dir_name = format!("botserver-test-{}-{}", timestamp, unique_id);
        let root_dir = match custom_root {
            Some(p) => p.join(&dir_name),
            None => std::env::temp_dir().join(dir_name),
        };
        info!("Creating temporary stack at: {}", root_dir.display());

        // Create directory structure
        fs::create_dir_all(&root_dir)
            .context("Failed to create temp stack root directory")?;
        let postgres_dir = root_dir.join("postgres");
        let redis_dir = root_dir.join("redis");
        let minio_dir = root_dir.join("minio");
        let botserver_dir = root_dir.join("botserver");
        fs::create_dir_all(&postgres_dir)?;
        fs::create_dir_all(&redis_dir)?;
        fs::create_dir_all(&minio_dir)?;
        fs::create_dir_all(&botserver_dir)?;

        let auto_shutdown_duration = auto_shutdown.map(Duration::from_secs);

        Ok(Self {
            root_dir,
            postgres_dir,
            redis_dir,
            minio_dir,
            botserver_dir,
            postgres_process: None,
            redis_process: None,
            minio_process: None,
            botserver_process: None,
            keep_on_shutdown,
            auto_shutdown_duration,
        })
    }
    /// Start all services in the temporary stack
    pub async fn start_services(&mut self) -> anyhow::Result<()> {
        info!("Starting temporary stack services");

        // Start PostgreSQL
        self.start_postgres().await?;
        sleep(Duration::from_secs(2)).await;

        // Start Redis
        self.start_redis().await?;
        sleep(Duration::from_secs(1)).await;

        // Start MinIO
        self.start_minio().await?;
        sleep(Duration::from_secs(1)).await;

        info!("All temporary stack services started");
        Ok(())
    }

    /// Start PostgreSQL
    async fn start_postgres(&mut self) -> anyhow::Result<()> {
        info!("Starting PostgreSQL");
        let data_dir = self.postgres_dir.join("data");
        fs::create_dir_all(&data_dir)?;

        // Initialize PostgreSQL cluster if needed
        let initdb_output = Command::new("initdb")
            .arg("-D")
            .arg(&data_dir)
            .output();
        if initdb_output.is_ok() {
            debug!("Initialized PostgreSQL cluster");
        }

        let process = Command::new("postgres")
            .arg("-D")
            .arg(&data_dir)
            .arg("-p")
            .arg("5433") // Use different port than default
            .spawn()
            .context("Failed to start PostgreSQL")?;
        self.postgres_process = Some(process);
        info!("PostgreSQL started on port 5433");
        Ok(())
    }

    /// Start Redis
    async fn start_redis(&mut self) -> anyhow::Result<()> {
        info!("Starting Redis");
        let data_dir = self.redis_dir.join("data");
        fs::create_dir_all(&data_dir)?;

        let process = Command::new("redis-server")
            .arg("--port")
            .arg("6380") // Use different port than default
            .arg("--dir")
            .arg(&data_dir)
            .spawn()
            .context("Failed to start Redis")?;
        self.redis_process = Some(process);
        info!("Redis started on port 6380");
        Ok(())
    }

    /// Start MinIO
    async fn start_minio(&mut self) -> anyhow::Result<()> {
        info!("Starting MinIO");
        let data_dir = self.minio_dir.join("data");
        fs::create_dir_all(&data_dir)?;

        let process = Command::new("minio")
            .arg("server")
            .arg(&data_dir)
            .arg("--address")
            .arg("127.0.0.1:9001") // Use different port than default
            .spawn()
            .context("Failed to start MinIO")?;
        self.minio_process = Some(process);
        info!("MinIO started on port 9001");
        Ok(())
    }
    /// Write environment configuration for this stack
    pub fn write_env_config(&self) -> anyhow::Result<()> {
        let env_content = format!(
            r#"# Temporary Stack Configuration
# Generated at: {}

# Stack Identity
BOTSERVER_STACK_ID={}
BOTSERVER_TEMP_STACK_DIR={}
BOTSERVER_KEEP_ON_SHUTDOWN={}

# Database
DATABASE_URL=postgres://botuser:botpass@127.0.0.1:5433/botserver
DB_HOST=127.0.0.1
DB_PORT=5433
DB_NAME=botserver
DB_USER=botuser
DB_PASSWORD=botpass

# Cache
REDIS_URL=redis://127.0.0.1:6380
REDIS_HOST=127.0.0.1
REDIS_PORT=6380

# Storage
MINIO_URL=http://127.0.0.1:9001
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
MINIO_BUCKET=botserver

# API
API_HOST=127.0.0.1
API_PORT=8000
API_URL=http://127.0.0.1:8000

# Logging
LOG_LEVEL=debug
LOG_FILE={}/botserver.log
"#,
            chrono::Local::now(),
            Uuid::new_v4(),
            self.root_dir.display(),
            self.keep_on_shutdown,
            self.botserver_dir.display(),
        );

        let env_file = self.root_dir.join("env.stack");
        fs::write(&env_file, env_content)
            .context("Failed to write environment configuration")?;
        info!("Environment configuration written to: {}", env_file.display());
        Ok(())
    }
    /// Wait for all services to be ready
    pub async fn wait_ready(&self, timeout: Duration) -> anyhow::Result<()> {
        let start = std::time::Instant::now();

        // Check PostgreSQL
        loop {
            if start.elapsed() > timeout {
                return Err(anyhow!("Timeout waiting for PostgreSQL"));
            }
            match Command::new("pg_isready")
                .arg("-h")
                .arg("127.0.0.1")
                .arg("-p")
                .arg("5433")
                .output()
            {
                Ok(output) if output.status.success() => break,
                _ => sleep(Duration::from_millis(100)).await,
            }
        }
        info!("PostgreSQL is ready");

        // Check Redis
        loop {
            if start.elapsed() > timeout {
                return Err(anyhow!("Timeout waiting for Redis"));
            }
            match Command::new("redis-cli")
                .arg("-p")
                .arg("6380")
                .arg("ping")
                .output()
            {
                Ok(output) if output.status.success() => break,
                _ => sleep(Duration::from_millis(100)).await,
            }
        }
        info!("Redis is ready");
        Ok(())
    }
    /// Gracefully shutdown all services
    pub async fn shutdown(&mut self) -> anyhow::Result<()> {
        info!("Shutting down temporary stack");

        // Stop BotServer
        if let Some(mut proc) = self.botserver_process.take() {
            let _ = proc.kill();
        }

        // Stop services
        if let Some(mut proc) = self.minio_process.take() {
            let _ = proc.kill();
        }
        if let Some(mut proc) = self.redis_process.take() {
            let _ = proc.kill();
        }
        if let Some(mut proc) = self.postgres_process.take() {
            let _ = proc.kill();
        }
        sleep(Duration::from_millis(500)).await;

        // Cleanup directory if not keeping
        if !self.keep_on_shutdown {
            if let Err(e) = fs::remove_dir_all(&self.root_dir) {
                log::warn!("Failed to cleanup temp stack directory: {}", e);
            } else {
                info!("Temporary stack cleaned up: {}", self.root_dir.display());
            }
        } else {
            info!("Keeping temporary stack at: {}", self.root_dir.display());
        }
        Ok(())
    }
}
impl Drop for TemporaryStack {
    fn drop(&mut self) {
        // Best-effort synchronous cleanup: calling `block_on` here would panic
        // when `drop` runs inside an async context.
        for proc in [&mut self.botserver_process, &mut self.minio_process,
                     &mut self.redis_process, &mut self.postgres_process]
        {
            if let Some(child) = proc.as_mut() {
                let _ = child.kill();
            }
        }
        if !self.keep_on_shutdown {
            let _ = fs::remove_dir_all(&self.root_dir);
        }
    }
}
```
### 3. Integration in Main
Add to `botserver/src/main.rs`:
```rust
mod temp_stack;

use log::info;
use temp_stack::TemporaryStack;
use tokio::time::sleep;

async fn run_temp_stack(args: Args) -> anyhow::Result<()> {
    // Setup logging
    env_logger::Builder::from_default_env()
        .filter_level(log::LevelFilter::Info)
        .try_init()?;

    info!("Starting BotServer in temporary stack mode");

    // Create temporary stack
    let mut temp_stack = TemporaryStack::new(
        args.stack_root,
        args.keep_temp_stack,
        args.temp_stack_timeout,
    ).await?;

    // Start services
    temp_stack.start_services().await?;
    temp_stack.write_env_config()?;

    // Wait for services to be ready
    temp_stack.wait_ready(Duration::from_secs(30)).await?;

    info!("Temporary stack ready!");
    info!("Stack directory: {}", temp_stack.root_dir.display());
    info!("Environment config: {}/env.stack", temp_stack.root_dir.display());

    // Setup auto-shutdown timer if specified
    if let Some(timeout) = temp_stack.auto_shutdown_duration {
        tokio::spawn(async move {
            sleep(timeout).await;
            info!("Auto-shutdown timeout reached, shutting down");
            std::process::exit(0);
        });
    }

    // Continue with normal BotServer startup using the temp stack config
    run_botserver_with_stack(temp_stack).await
}
```
## Using Temporary Stack in Tests
### In Test Harness
```rust
// bottest/src/harness.rs
pub struct TemporaryStackHandle {
    stack: TemporaryStack,
}

impl TestHarness {
    pub async fn with_temp_stack() -> anyhow::Result<Self> {
        let mut stack = TemporaryStack::new(None, false, None).await?;
        stack.start_services().await?;
        stack.wait_ready(Duration::from_secs(30)).await?;

        // Load environment from stack config
        let env_file = stack.root_dir.join("env.stack");
        load_env_file(&env_file)?;

        // Create harness with temp stack
        let mut harness = Self::new();
        harness.temp_stack = Some(stack);
        Ok(harness)
    }
}
```
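`load_env_file` is referenced above but not defined; a minimal sketch that exports each `KEY=VALUE` line from `env.stack` into the test process (comments and blank lines are skipped, error handling kept simple):
```rust
use std::path::Path;

/// Hypothetical helper: load KEY=VALUE pairs from env.stack into the
/// process environment.
fn load_env_file(path: &Path) -> anyhow::Result<()> {
    for line in std::fs::read_to_string(path)?.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        if let Some((key, value)) = line.split_once('=') {
            std::env::set_var(key.trim(), value.trim());
        }
    }
    Ok(())
}
```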
### In E2E Tests
```rust
#[tokio::test]
async fn test_with_temp_stack() {
    // Spawn temporary stack
    let mut harness = TestHarness::with_temp_stack()
        .await
        .expect("Failed to create temp stack");

    // Tests run in isolation
    // Stack automatically cleaned up on drop
}
```
## Running Tests with Temporary Stack
```bash
# Run E2E test with automatic temporary stack
cargo test --test e2e test_complete_platform_flow -- --nocapture
# Keep temporary stack for debugging on failure
KEEP_TEMP_STACK_ON_ERROR=1 cargo test --test e2e -- --nocapture
# Use custom temporary directory
cargo test --test e2e -- --nocapture \
--temp-stack-root /var/tmp/bottest
# Run with browser UI visible
HEADED=1 cargo test --test e2e -- --nocapture
# Run with auto-shutdown after 5 minutes (300 seconds)
cargo test --test e2e -- --nocapture \
--temp-stack-timeout 300
```
## Environment Variables
Control temporary stack behavior with environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| `SKIP_E2E_TESTS` | unset | Skip E2E tests if set |
| `HEADED` | unset | Show browser UI instead of headless |
| `KEEP_TEMP_STACK_ON_ERROR` | unset | Keep temp directory if test fails |
| `WEBDRIVER_URL` | `http://localhost:4444` | WebDriver endpoint for browser automation |
| `LOG_LEVEL` | `info` | Logging level: debug, info, warn, error |
| `TEMP_STACK_TIMEOUT` | unset | Auto-shutdown timeout in seconds |
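In the harness, these variables could be translated into the `TemporaryStack::new(custom_root, keep_on_shutdown, auto_shutdown)` arguments shown earlier; a sketch with assumed helper names:
```rust
use std::env;

/// Hypothetical glue: map the variables in the table above onto the
/// temporary stack options.
fn temp_stack_options() -> (bool, Option<u64>) {
    // Any value counts as "set", matching the "unset" defaults in the table.
    let keep_on_error = env::var("KEEP_TEMP_STACK_ON_ERROR").is_ok();
    let auto_shutdown = env::var("TEMP_STACK_TIMEOUT")
        .ok()
        .and_then(|v| v.parse::<u64>().ok());
    (keep_on_error, auto_shutdown)
}

/// Tests can bail out early when SKIP_E2E_TESTS is set.
fn e2e_skipped() -> bool {
    env::var("SKIP_E2E_TESTS").is_ok()
}
```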
## Troubleshooting
### PostgreSQL fails to start
```bash
# Make sure PostgreSQL binaries are installed
which postgres initdb pg_isready
# Check if port 5433 is available
lsof -i :5433
# Initialize manually
initdb -D /tmp/botserver-test-*/postgres/data
```
### Redis fails to start
```bash
# Verify Redis is installed
which redis-server redis-cli
# Check if port 6380 is available
lsof -i :6380
```
### Cleanup issues
```bash
# Manually cleanup stale directories
rm -rf /tmp/botserver-test-*
# Keep temporary stack for debugging
KEEP_TEMP_STACK_ON_ERROR=1 cargo test --test e2e
```
### Check stack logs
```bash
# View BotServer logs
tail -f /tmp/botserver-test-{id}/botserver/logs/botserver.log
# View database logs
tail -f /tmp/botserver-test-{id}/postgres.log
# View all logs
ls -la /tmp/botserver-test-{id}/*/
```
## Benefits Summary
- **Isolation** - Each test has its own environment
- **Automation** - No manual setup required
- **Reproducibility** - Same setup every time
- **Safety** - Won't interfere with main development
- **Cleanup** - Automatic resource management
- **Debugging** - Can preserve stacks for investigation
- **CI/CD Ready** - Perfect for automated testing pipelines
- **Scalability** - Run multiple tests in parallel with port management