Code Standards
BotServer follows Rust best practices with a unique approach: all code is fully generated by LLMs following specific prompts and patterns.
LLM-Generated Code Policy
Core Principle
All source code in BotServer is generated by Large Language Models (LLMs). This ensures consistency, reduces human error, and leverages AI capabilities for optimal code generation.
Important Guidelines
- Comments are discouraged - Code should be self-documenting through clear naming and structure
- Comments may be deleted during optimization - Do not rely on comments for critical information
- Documentation should be external - Use README files and documentation chapters, not inline comments
- Comments are dangerous - They can become outdated (decoupled from the code they describe) and misleading
Why No Comments?
- LLM-generated code is consistently structured
- Function and variable names are descriptive
- External documentation is more maintainable
- Comments become stale and misleading over time
- Optimization passes may remove comments
Development Workflow
Follow the LLM workflow defined in `/prompts/dev/platform/README.md`:
LLM Strategy
- Sequential Development: One requirement at a time with sequential commits
- Fallback Strategy: After 3 attempts or 10 minutes, try different LLMs in sequence
- Error Handling: Stop on unresolved errors and consult alternative LLMs
- Warning Removal: Handle as last task before committing
- Final Validation: Run `cargo check` with the appropriate LLM
Code Generation Rules
From `/prompts/dev/platform/botserver.md`:
- Sessions must always be retrieved by id when `session_id` is present
- Never suggest installing software - bootstrap handles everything
- Configuration stored in `.gbot/config` and the `bot_configuration` table
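The session rule above can be sketched as follows. This is a minimal, dependency-free illustration: `SessionStore`, `resolve`, and the in-memory `HashMap` are hypothetical stand-ins, not the actual BotServer API.

```rust
use std::collections::HashMap;

// Hypothetical in-memory store; the real project presumably uses a database.
struct SessionStore {
    sessions: HashMap<u64, String>,
}

impl SessionStore {
    // When a session_id is present, always fetch by id rather than
    // creating a new session or guessing from other request fields.
    fn resolve(&self, session_id: Option<u64>) -> Option<&String> {
        session_id.and_then(|id| self.sessions.get(&id))
    }
}

fn main() {
    let mut sessions = HashMap::new();
    sessions.insert(42, "alice".to_string());
    let store = SessionStore { sessions };

    assert_eq!(store.resolve(Some(42)), Some(&"alice".to_string()));
    assert!(store.resolve(None).is_none()); // no id present: no lookup
}
```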
Rust Style Guide
Formatting
Use rustfmt for automatic formatting:
```shell
# Format all code
cargo fmt

# Check formatting without changes
cargo fmt -- --check
```
Configuration in `.rustfmt.toml`:
```toml
edition = "2021"
max_width = 100
use_small_heuristics = "Max"
```
Linting
Use clippy for code quality:
```shell
# Run clippy with warnings treated as errors
cargo clippy -- -D warnings

# Apply clippy's automatic fixes
cargo clippy --fix
```
Naming Conventions
General Rules
- snake_case: Functions, variables, modules
- PascalCase: Types, traits, enums
- SCREAMING_SNAKE_CASE: Constants
- 'lifetime: Lifetime parameters
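The conventions above can be shown side by side. Every identifier here is illustrative only, chosen to demonstrate one convention each, not taken from the BotServer codebase:

```rust
const MAX_RETRY_COUNT: u32 = 3; // SCREAMING_SNAKE_CASE: constants

struct RetryPolicy { // PascalCase: types
    limit: u32,
}

trait BackoffStrategy { // PascalCase: traits
    fn next_delay_ms(&self, attempt: u32) -> u64; // snake_case: methods
}

impl BackoffStrategy for RetryPolicy {
    fn next_delay_ms(&self, attempt: u32) -> u64 {
        u64::from(attempt.min(self.limit)) * 100
    }
}

// 'lifetime: lifetime parameters
fn first_word<'input>(text: &'input str) -> &'input str {
    text.split_whitespace().next().unwrap_or("")
}

fn main() {
    let retry_policy = RetryPolicy { limit: MAX_RETRY_COUNT }; // snake_case: variables
    assert_eq!(retry_policy.next_delay_ms(5), 300); // capped at limit 3, 100 ms each
    assert_eq!(first_word("create user session"), "create");
}
```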
Self-Documenting Names
Instead of comments, use descriptive names:
```rust
// BAD: needs a comment to explain what it does
fn proc(d: &str) -> Result<String> {
    // Process user data
    // ...
}

// GOOD: self-documenting, no comment needed
fn process_user_registration_data(registration_form: &str) -> Result<String> {
    // ...
}
```
Code Organization
Module Structure
```rust
// mod.rs or lib.rs
pub mod user;
pub mod session;
pub mod auth;

// Re-exports
pub use user::User;
pub use session::Session;
```
Import Ordering
```rust
// 1. Standard library
use std::collections::HashMap;
use std::sync::Arc;

// 2. External crates
use tokio::sync::Mutex;
use uuid::Uuid;

// 3. Local crates
use crate::config::Config;
use crate::models::User;

// 4. Super/self
use super::utils;
use self::helper::*;
```
Documentation Strategy
External Documentation Only
```rust
// DON'T: inline documentation comments
/// This function creates a user session.
/// It takes a user_id and a bot_id.
/// Returns a Result with Session or Error.
fn create_session(user_id: Uuid, bot_id: Uuid) -> Result<Session> {
    // Implementation
}

// DO: self-documenting code + external docs
// (document in chapter-10/api-reference.md instead)
fn create_user_session_for_bot(user_id: Uuid, bot_id: Uuid) -> Result<Session> {
    // Implementation
}
```
Where to Document
- README.md files for module overview
- Documentation chapters for detailed explanations
- API references in separate documentation files
- Architecture diagrams in documentation folders
- Prompt files in `/prompts/dev/` for generation patterns
Error Handling
Use Result Types
```rust
fn read_configuration_file(path: &str) -> Result<String, std::io::Error> {
    std::fs::read_to_string(path)
}
```
Custom Error Types
```rust
use thiserror::Error;

#[derive(Error, Debug)]
pub enum BotServerError {
    #[error("Database connection failed: {0}")]
    DatabaseConnection(#[from] diesel::result::Error),

    #[error("Invalid configuration: {message}")]
    InvalidConfiguration { message: String },

    #[error("Network request failed")]
    NetworkFailure(#[from] reqwest::Error),
}
```
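Error values like these are typically propagated upward with the `?` operator rather than matched at every call site. The following sketch is dependency-free so it compiles on its own: it hand-rolls a small enum instead of using `thiserror`, and `ConfigError` and `lookup` are hypothetical names, not BotServer APIs.

```rust
use std::fmt;

#[derive(Debug)]
enum ConfigError {
    Missing(String),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Missing(key) => write!(f, "missing configuration key: {key}"),
        }
    }
}

impl std::error::Error for ConfigError {}

fn lookup(key: &str) -> Result<String, ConfigError> {
    match key {
        "bot_name" => Ok("demo-bot".to_string()),
        other => Err(ConfigError::Missing(other.to_string())),
    }
}

// `?` propagates the error to the caller, matching the Result-based style above.
fn bot_name() -> Result<String, ConfigError> {
    let name = lookup("bot_name")?;
    Ok(name)
}

fn main() {
    assert_eq!(bot_name().unwrap(), "demo-bot");
    assert!(lookup("api_key").is_err());
}
```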
Testing Standards
Test Naming
```rust
#[test]
fn user_creation_succeeds_with_valid_data() {
    // Clear test name, no comments needed
}

#[test]
fn user_creation_fails_with_invalid_email() {
    // Self-documenting test name
}
```
Test Organization
```rust
#[cfg(test)]
mod tests {
    use super::*;

    mod user_creation {
        #[test]
        fn with_valid_data() {}

        #[test]
        fn with_invalid_email() {}
    }

    mod user_authentication {
        #[test]
        fn with_correct_password() {}

        #[test]
        fn with_wrong_password() {}
    }
}
```
Security Standards
Never Hardcode Secrets
```rust
let api_key = std::env::var("API_KEY")?;
let database_url = std::env::var("DATABASE_URL")?;
```
Validate Input
```rust
fn validate_and_sanitize_user_input(input: &str) -> Result<String> {
    if input.len() > MAX_INPUT_LENGTH {
        return Err(BotServerError::InputTooLong);
    }
    if !input.chars().all(char::is_alphanumeric) {
        return Err(BotServerError::InvalidCharacters);
    }
    Ok(input.to_string())
}
```
Performance Guidelines
Use Iterators
```rust
let positive_doubled_sum: i32 = numbers
    .iter()
    .filter(|n| **n > 0)
    .map(|n| n * 2)
    .sum();
```
Avoid Unnecessary Allocations
```rust
// Returning a borrowed slice avoids allocating a new String.
fn trim_surrounding_whitespace(text: &str) -> &str {
    text.trim()
}
```
LLM Prompt References
Key prompts for code generation are stored in `/prompts/dev/`:
- platform/botserver.md: Core platform rules
- platform/add-keyword.md: Adding new BASIC keywords
- platform/add-model.md: Integrating new LLM models
- platform/fix-errors.md: Error resolution patterns
- basic/doc-keyword.md: BASIC keyword documentation
Code Review Checklist
Before submitting LLM-generated code:
- Code compiles without warnings
- All tests pass
- Code is formatted with rustfmt
- Clippy passes without warnings
- NO inline comments (use external docs)
- Function/variable names are self-documenting
- No hardcoded secrets
- Error handling follows Result pattern
- Follows patterns from `/prompts/dev/`
Summary
BotServer embraces AI-first development where:
- All code is LLM-generated following consistent patterns
- Comments are forbidden - code must be self-documenting
- Documentation lives externally in dedicated files
- Prompts define patterns in `/prompts/dev/`
- Optimization may delete anything not part of the actual code logic
This approach ensures consistency, maintainability, and leverages AI capabilities while avoiding the pitfalls of outdated comments and human inconsistencies.