- More general docs.

This commit is contained in:
Rodrigo Rodriguez (Pragmatismo) 2025-11-23 13:46:55 -03:00
parent 06b5e100dc
commit b680301c38
91 changed files with 4203 additions and 5346 deletions

View file

@@ -25,7 +25,7 @@ This documentation has been **recently updated** to accurately reflect the actua
- Riot compiler module (`src/riot_compiler/`)
- Prompt manager (`src/prompt_manager/`)
- API endpoints and web server routes
-- MinIO/S3 drive integration details
+- Drive (S3-compatible) integration details
- Video conferencing (LiveKit) integration
---
@@ -37,8 +37,8 @@ BotServer is an open-source conversational AI platform written in Rust. It enabl
- **BASIC Scripting**: Simple `.bas` scripts for conversation flows
- **Template Packages**: Organize bots as `.gbai` directories with dialogs, knowledge bases, and configuration
- **Vector Search**: Semantic document retrieval with Qdrant
-- **LLM Integration**: OpenAI, local models, and custom providers
-- **Auto-Bootstrap**: Automated installation of PostgreSQL, Redis, MinIO, and more
+- **LLM Integration**: Local models, cloud APIs, and custom providers
+- **Auto-Bootstrap**: Automated installation of PostgreSQL, cache, drive, and more
- **Multi-Bot Hosting**: Run multiple isolated bots on a single server
---
@@ -66,7 +66,7 @@ BotServer is an open-source conversational AI platform written in Rust. It enabl
- [.gbkb Knowledge Base](chapter-02/gbkb.md) - Document collections
- [.gbot Configuration](chapter-02/gbot.md) - Bot parameters
- [.gbtheme UI Theming](chapter-02/gbtheme.md) - Web interface customization
-- [.gbdrive File Storage](chapter-02/gbdrive.md) - MinIO/S3 integration
+- [.gbdrive File Storage](chapter-02/gbdrive.md) - Drive (S3-compatible) integration
### Part III - Knowledge Base
- [Chapter 03: gbkb Reference](chapter-03/README.md) - Semantic search and vector database
@@ -76,7 +76,7 @@ BotServer is an open-source conversational AI platform written in Rust. It enabl
### Part V - BASIC Dialogs
- [Chapter 05: gbdialog Reference](chapter-05/README.md) - Complete BASIC scripting reference
-- Keywords: `TALK`, `HEAR`, `LLM`, `SET_CONTEXT`, `USE_KB`, and more
+- Keywords: `TALK`, `HEAR`, `LLM`, `SET CONTEXT`, `USE KB`, and more
### Part VI - Extending BotServer
- [Chapter 06: Rust Architecture Reference](chapter-06/README.md) - Internal architecture
@@ -123,9 +123,9 @@ BotServer is a **monolithic Rust application** (single crate) with the following
### Infrastructure
- `bootstrap` - Auto-installation of components
-- `package_manager` - Manages PostgreSQL, Redis, MinIO, etc.
-- `web_server` - Axum HTTP API and WebSocket
-- `drive` - MinIO/S3 storage and vector DB
+- `package_manager` - Manages PostgreSQL, cache, drive, etc.
+- `web_server` - Axum HTTP REST API
+- `drive` - S3-compatible storage and vector DB
- `config` - Environment configuration
### Features
@@ -143,8 +143,8 @@ BotServer is a **monolithic Rust application** (single crate) with the following
- **Language**: Rust 2021 edition
- **Web**: Axum + Tower + Tokio
- **Database**: Diesel ORM + PostgreSQL
-- **Cache**: Redis/Valkey
-- **Storage**: AWS SDK S3 (MinIO)
+- **Cache**: Valkey (Redis-compatible)
+- **Storage**: AWS SDK S3 (drive component)
- **Vector DB**: Qdrant (optional)
- **Scripting**: Rhai engine
- **Security**: Argon2, AES-GCM

View file

@@ -10,6 +10,7 @@
- [Installation](./chapter-01/installation.md)
- [First Conversation](./chapter-01/first-conversation.md)
- [Understanding Sessions](./chapter-01/sessions.md)
+- [NVIDIA GPU Setup for LXC](./chapter-01/nvidia-gpu-setup.md)
# Part II - Package System
@@ -41,6 +42,8 @@
- [Web Interface](./chapter-04/web-interface.md)
- [CSS Customization](./chapter-04/css.md)
- [HTML Templates](./chapter-04/html.md)
+- [Desktop Mode](./chapter-04/desktop-mode.md)
+- [Console Mode](./chapter-04/console-mode.md)
# Part V - BASIC Dialogs
@@ -49,30 +52,28 @@
- [Universal Messaging & Multi-Channel](./chapter-05/universal-messaging.md)
- [Template Examples](./chapter-05/templates.md)
- [start.bas](./chapter-05/template-start.md)
-- [auth.bas](./chapter-05/template-auth.md)
-- [generate-summary.bas](./chapter-05/template-summary.md)
+- [enrollment Tool Example](./chapter-05/template-enrollment.md)
- [Keyword Reference](./chapter-05/keywords.md)
- [TALK](./chapter-05/keyword-talk.md)
- [HEAR](./chapter-05/keyword-hear.md)
-- [SET_USER](./chapter-05/keyword-set-user.md)
-- [SET_CONTEXT](./chapter-05/keyword-set-context.md)
+- [SET USER](./chapter-05/keyword-set-user.md)
+- [SET CONTEXT](./chapter-05/keyword-set-context.md)
- [LLM](./chapter-05/keyword-llm.md)
-- [GET_BOT_MEMORY](./chapter-05/keyword-get-bot-memory.md)
-- [SET_BOT_MEMORY](./chapter-05/keyword-set-bot-memory.md)
-- [USE_KB](./chapter-05/keyword-use-kb.md)
-- [CLEAR_KB](./chapter-05/keyword-clear-kb.md)
-- [ADD_WEBSITE](./chapter-05/keyword-add-website.md)
-- [USE_TOOL](./chapter-05/keyword-use-tool.md)
-- [CLEAR_TOOLS](./chapter-05/keyword-clear-tools.md)
+- [GET BOT MEMORY](./chapter-05/keyword-get-bot-memory.md)
+- [SET BOT MEMORY](./chapter-05/keyword-set-bot-memory.md)
+- [USE KB](./chapter-05/keyword-use-kb.md)
+- [CLEAR KB](./chapter-05/keyword-clear-kb.md)
+- [ADD WEBSITE](./chapter-05/keyword-add-website.md)
+- [USE TOOL](./chapter-05/keyword-use-tool.md)
+- [CLEAR TOOLS](./chapter-05/keyword-clear-tools.md)
- [GET](./chapter-05/keyword-get.md)
- [FIND](./chapter-05/keyword-find.md)
- [SET](./chapter-05/keyword-set.md)
- [ON](./chapter-05/keyword-on.md)
-- [SET_SCHEDULE](./chapter-05/keyword-set-schedule.md)
-- [CREATE_SITE](./chapter-05/keyword-create-site.md)
-- [CREATE_DRAFT](./chapter-05/keyword-create-draft.md)
-- [CREATE_TASK](./chapter-05/keyword-create-task.md)
+- [SET SCHEDULE](./chapter-05/keyword-set-schedule.md)
+- [CREATE SITE](./chapter-05/keyword-create-site.md)
+- [CREATE DRAFT](./chapter-05/keyword-create-draft.md)
+- [CREATE TASK](./chapter-05/keyword-create-task.md)
- [PRINT](./chapter-05/keyword-print.md)
- [WAIT](./chapter-05/keyword-wait.md)
- [FORMAT](./chapter-05/keyword-format.md)
@@ -80,14 +81,16 @@
- [LAST](./chapter-05/keyword-last.md)
- [FOR EACH](./chapter-05/keyword-for-each.md)
- [EXIT FOR](./chapter-05/keyword-exit-for.md)
-- [ADD_MEMBER](./chapter-05/keyword-add-member.md)
-- [ADD_SUGGESTION](./chapter-05/keyword-add-suggestion.md)
-- [CLEAR_SUGGESTIONS](./chapter-05/keyword-clear-suggestions.md)
+- [ADD MEMBER](./chapter-05/keyword-add-member.md)
+- [ADD SUGGESTION](./chapter-05/keyword-add-suggestion.md)
+- [CLEAR SUGGESTIONS](./chapter-05/keyword-clear-suggestions.md)
- [BOOK](./chapter-05/keyword-book.md)
- [REMEMBER](./chapter-05/keyword-remember.md)
-- [SAVE_FROM_UNSTRUCTURED](./chapter-05/keyword-save-from-unstructured.md)
-- [SEND_MAIL](./chapter-05/keyword-send-mail.md)
+- [SAVE FROM UNSTRUCTURED](./chapter-05/keyword-save-from-unstructured.md)
+- [SEND MAIL](./chapter-05/keyword-send-mail.md)
- [WEATHER](./chapter-05/keyword-weather.md)
+- [FIND](./chapter-05/keyword-find.md)
+- [CHANGE THEME](./chapter-05/keyword-change-theme.md)
# Part VI - Extending BotServer
@@ -95,7 +98,6 @@
- [Architecture Overview](./chapter-06/architecture.md)
- [Building from Source](./chapter-06/building.md)
- [Container Deployment (LXC)](./chapter-06/containers.md)
-- [SMB Deployment Guide](./chapter-06/smb-deployment.md)
- [Module Structure](./chapter-06/crates.md)
- [Service Layer](./chapter-06/services.md)
- [Creating Custom Keywords](./chapter-06/custom-keywords.md)
@@ -107,10 +109,9 @@
- [Chapter 07: gbot Reference](./chapter-07/README.md)
- [config.csv Format](./chapter-07/config-csv.md)
- [Bot Parameters](./chapter-07/parameters.md)
-- [Answer Modes](./chapter-07/answer-modes.md)
- [LLM Configuration](./chapter-07/llm-config.md)
- [Context Configuration](./chapter-07/context-config.md)
-- [MinIO Drive Integration](./chapter-07/minio.md)
+- [Drive Integration](./chapter-07/minio.md)
# Part VIII - Tools and Integration
@@ -119,7 +120,7 @@
- [PARAM Declaration](./chapter-08/param-declaration.md)
- [Tool Compilation](./chapter-08/compilation.md)
- [MCP Format](./chapter-08/mcp-format.md)
-- [OpenAI Tool Format](./chapter-08/openai-format.md)
+- [Tool Format](./chapter-08/openai-format.md)
- [GET Keyword Integration](./chapter-08/get-integration.md)
- [External APIs](./chapter-08/external-apis.md)
@@ -147,6 +148,7 @@
- [Testing](./chapter-10/testing.md)
- [Pull Requests](./chapter-10/pull-requests.md)
- [Documentation](./chapter-10/documentation.md)
+- [IDE Extensions](./chapter-10/ide-extensions.md)
# Part XI - Authentication and Security
@@ -194,9 +196,4 @@
- [Tables](./appendix-i/tables.md)
- [Relationships](./appendix-i/relationships.md)
- [Appendix II: Project Status](./appendix-ii/README.md)
-- [Build Status](./appendix-ii/build-status.md)
-- [Production Status](./appendix-ii/production-status.md)
-- [Integration Status](./appendix-ii/integration-status.md)
[Glossary](./glossary.md)

View file

@@ -182,7 +182,7 @@ These relationships are frequently traversed and should be optimized:
2. **bots → bot_memories**
- Index: (bot_id, key)
-- Used by GET_BOT_MEMORY/SET_BOT_MEMORY
+- Used by GET BOT MEMORY/SET BOT MEMORY
3. **kb_collections → kb_documents**
- Index: (collection_id, indexed)

View file

@@ -38,7 +38,7 @@ Stores bot-specific configuration parameters from config.csv.
| updated_at | TIMESTAMPTZ | Last update timestamp |
### bot_memories
-Persistent key-value storage for bots (used by GET_BOT_MEMORY/SET_BOT_MEMORY).
+Persistent key-value storage for bots (used by GET BOT MEMORY/SET BOT MEMORY).
| Column | Type | Description |
|--------|------|-------------|
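The semantics of this table can be sketched as a per-bot key-value map. This is an illustrative in-memory stand-in only (the real storage is the PostgreSQL table above, keyed by `(bot_id, key)`); all names here are hypothetical:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the bot_memories table: values are
// scoped per bot, so two bots never see each other's keys.
type BotId = i64;

#[derive(Default)]
struct BotMemories {
    map: HashMap<(BotId, String), String>,
}

impl BotMemories {
    // SET BOT MEMORY: upsert a value under this bot's key.
    fn set(&mut self, bot: BotId, key: &str, value: &str) {
        self.map.insert((bot, key.to_string()), value.to_string());
    }
    // GET BOT MEMORY: read it back, or None if never set for this bot.
    fn get(&self, bot: BotId, key: &str) -> Option<&str> {
        self.map.get(&(bot, key.to_string())).map(String::as_str)
    }
}

fn main() {
    let mut m = BotMemories::default();
    m.set(1, "greeting", "hello");
    assert_eq!(m.get(1, "greeting"), Some("hello"));
    assert_eq!(m.get(2, "greeting"), None); // scoped per bot
}
```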

View file

@@ -1,22 +0,0 @@
# Appendix II: Project Status
This appendix contains current project status information, build metrics, and integration tracking.
## Contents
- **[Build Status](./build-status.md)** - Current build status, completed tasks, and remaining issues
- **[Production Status](./production-status.md)** - Production readiness metrics and API endpoints
- **[Integration Status](./integration-status.md)** - Module integration tracking and feature matrix
## Purpose
These documents provide up-to-date information about the project's current state, helping developers and contributors understand:
- What's working and what needs attention
- Which features are production-ready
- Integration status of various modules
- Known issues and their fixes
## Note
These status documents are living documents that are updated frequently as the project evolves. For the most current information, always check the latest version in the repository.

View file

@@ -1,221 +0,0 @@
# BotServer Build Status & Fixes
## Current Status
Build is failing with multiple issues that need to be addressed systematically.
## Completed Tasks ✅
1. **Security Features Documentation**
- Created comprehensive `docs/SECURITY_FEATURES.md`
- Updated `Cargo.toml` with detailed security feature documentation
- Added security-focused linting configuration
2. **Documentation Cleanup**
- Moved uppercase .md files to appropriate locations
- Deleted redundant implementation status files
- Created `docs/KB_AND_TOOLS.md` consolidating KB/Tool system documentation
- Created `docs/SMB_DEPLOYMENT_GUIDE.md` with pragmatic SMB examples
3. **Zitadel Auth Facade**
- Created `src/auth/facade.rs` with comprehensive auth abstraction
- Implemented `ZitadelAuthFacade` for enterprise deployments
- Implemented `SimpleAuthFacade` for SMB deployments
- Added `ZitadelClient` to `src/auth/zitadel.rs`
4. **Keyword Services API Layer**
- Created `src/api/keyword_services.rs` exposing keyword logic as REST APIs
- Services include: format, weather, email, task, search, memory, document processing
- Proper service-api-keyword pattern implementation
## Remaining Issues 🔧
### 1. Missing Email Module Functions
**Files affected:** `src/basic/keywords/create_draft.rs`, `src/basic/keywords/universal_messaging.rs`
**Issue:** Email module doesn't export expected functions
**Fix:**
- Add `EmailService` struct to `src/email/mod.rs`
- Implement `fetch_latest_sent_to` and `save_email_draft` functions
- Or stub them out with feature flags
### 2. Temporal Value Borrowing
**Files affected:** `src/basic/keywords/add_member.rs`
**Issue:** Temporary values dropped while borrowed in diesel bindings
**Fix:** Use `let` bindings for `json!` macro results before passing them to `bind()`
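The shape of this fix can be shown with a minimal, self-contained example (the function and values are illustrative, not the actual diesel code):

```rust
// A temporary created inline is dropped at the end of the statement,
// so a borrow of it cannot be held past that point. Naming the value
// with `let` extends its lifetime to the enclosing scope.
fn bind(v: &str) -> usize {
    // stand-in for diesel's .bind(): just borrows the value
    v.len()
}

fn main() {
    // Instead of borrowing a temporary inline, name it first:
    let permissions = format!("{{\"workspace_enabled\":{}}}", true);
    let bound = bind(&permissions); // `permissions` outlives the borrow
    assert_eq!(bound, 26);
}
```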
### 3. Missing Channel Adapters
**Files affected:** `src/basic/keywords/universal_messaging.rs`
**Issue:** Instagram, Teams, WhatsApp adapters not properly exported
**Status:** Fixed - added exports to `src/channels/mod.rs`
### 4. Build Script Issue
**File:** `build.rs`
**Issue:** `tauri_build` runs even when the `desktop` feature is disabled
**Status:** Fixed - added feature gate
### 5. Missing Config Type
**Issue:** `Config` type referenced but not defined
**Fix:** Need to add `Config` type alias or struct to `src/config/mod.rs`
## Build Commands
### Minimal Build (No Features)
```bash
cargo build --no-default-features
```
### Email Feature Only
```bash
cargo build --no-default-features --features email
```
### Vector Database Feature
```bash
cargo build --no-default-features --features vectordb
```
### Full Desktop Build
```bash
cargo build --features "desktop,email,vectordb"
```
### Production Build
```bash
cargo build --release --features "email,vectordb"
```
## Quick Fixes Needed
### 1. Fix Email Service (src/email/mod.rs)
Add at end of file:
```rust
// Assumes `Arc`, `AppState`, `EmailConfig`, and serde are in scope;
// add these imports at the top of src/email/mod.rs if missing:
use std::sync::Arc;
use serde::{Deserialize, Serialize};

pub struct EmailService {
    state: Arc<AppState>,
}

impl EmailService {
    pub fn new(state: Arc<AppState>) -> Self {
        Self { state }
    }

    pub async fn send_email(&self, to: &str, subject: &str, body: &str, cc: Option<Vec<String>>) -> Result<(), Box<dyn std::error::Error>> {
        // Implementation
        Ok(())
    }

    pub async fn send_email_with_attachment(&self, to: &str, subject: &str, body: &str, attachment: Vec<u8>, filename: &str) -> Result<(), Box<dyn std::error::Error>> {
        // Implementation
        Ok(())
    }
}

pub async fn fetch_latest_sent_to(config: &EmailConfig, to: &str) -> Result<String, String> {
    // Stub implementation
    Ok(String::new())
}

pub async fn save_email_draft(config: &EmailConfig, draft: &SaveDraftRequest) -> Result<(), String> {
    // Stub implementation
    Ok(())
}

#[derive(Debug, Serialize, Deserialize)]
pub struct SaveDraftRequest {
    pub to: String,
    pub subject: String,
    pub cc: Option<String>,
    pub text: String,
}
```
### 2. Fix Config Type (src/config/mod.rs)
Add:
```rust
pub type Config = AppConfig;
```
### 3. Fix Temporal Borrowing (src/basic/keywords/add_member.rs)
Replace lines 250-254:
```rust
let permissions_json = json!({
    "workspace_enabled": true,
    "chat_enabled": true,
    "file_sharing": true
});
// ...then, in the existing query builder chain:
.bind::<diesel::sql_types::Jsonb, _>(&permissions_json)
```
Replace line 442:
```rust
let now = Utc::now();
// ...then, in the existing query builder chain:
.bind::<diesel::sql_types::Timestamptz, _>(&now)
```
## Testing Strategy
1. **Unit Tests**
```bash
cargo test --no-default-features
cargo test --features email
cargo test --features vectordb
```
2. **Integration Tests**
```bash
cargo test --all-features --test '*'
```
3. **Clippy Lints**
```bash
cargo clippy --all-features -- -D warnings
```
4. **Security Audit**
```bash
cargo audit
```
## Feature Matrix
| Feature | Dependencies | Status | Use Case |
|---------|-------------|--------|----------|
| `default` | desktop | ✅ | Desktop application |
| `desktop` | tauri, tauri-plugin-* | ✅ | Desktop UI |
| `email` | imap, lettre | ⚠️ | Email integration |
| `vectordb` | qdrant-client | ✅ | Semantic search |
## Next Steps
1. **Immediate** (Block Build):
- Fix email module exports
- Fix config type alias
- Fix temporal borrowing issues
2. **Short Term** (Functionality):
- Complete email service implementation
- Test all keyword services
- Add missing channel adapter implementations
3. **Medium Term** (Quality):
- Add comprehensive tests
- Implement proper error handling
- Add monitoring/metrics
4. **Long Term** (Enterprise):
- Complete Zitadel integration
- Add multi-tenancy support
- Implement audit logging
## Development Notes
- Always use feature flags for optional functionality
- Prefer composition over inheritance for services
- Use Result types consistently for error handling
- Document all public APIs
- Keep SMB use case simple and pragmatic
## Contact
For questions about the build or architecture:
- Repository: https://github.com/GeneralBots/BotServer
- Team: engineering@pragmatismo.com.br

View file

@@ -1,452 +0,0 @@
# BOTSERVER INTEGRATION STATUS
## 🎯 COMPLETE INTEGRATION PLAN - ACTIVATION STATUS
This document tracks the activation and exposure of all modules in the botserver system.
---
## ✅ COMPLETED ACTIVATIONS
### 1. **AUTH/ZITADEL.RS** - ⚠️ 80% COMPLETE
**Status:** Core implementation complete - Facade integration in progress
**Completed:**
- ✅ All structs made public and serializable (`ZitadelConfig`, `ZitadelUser`, `TokenResponse`, `IntrospectionResponse`)
- ✅ `ZitadelClient` and `ZitadelAuth` structs fully exposed with public fields
- ✅ All client methods made public (create_user, get_user, search_users, list_users, etc.)
- ✅ Organization management fully exposed
- ✅ User/org membership management public
- ✅ Role and permission management exposed
- ✅ User workspace structure fully implemented and public
- ✅ JWT token extraction utility exposed
- ✅ All methods updated to return proper Result types
**Remaining:**
- 🔧 Complete ZitadelAuthFacade integration (type mismatches with facade trait)
- 🔧 Test all Zitadel API endpoints
- 🔧 Add comprehensive error handling
**API Surface:**
```rust
pub struct ZitadelClient { /* full API */ }
pub struct ZitadelAuth { /* full API */ }
pub struct UserWorkspace { /* full API */ }
pub fn extract_user_id_from_token(token: &str) -> Result<String>
```
---
### 2. **CHANNELS/WHATSAPP.RS** - ⚠️ 60% COMPLETE
**Status:** All structures exposed, implementation needed
**Completed:**
- ✅ All WhatsApp structs made public and Clone-able
- ✅ Webhook structures exposed (`WhatsAppWebhook`, `WhatsAppMessage`)
- ✅ Message types fully defined (`WhatsAppIncomingMessage`, `WhatsAppText`, `WhatsAppMedia`, `WhatsAppLocation`)
- ✅ All entry/change/value structures exposed
- ✅ Contact and profile structures public
**Needs Implementation:**
- 🔧 Implement message sending methods
- 🔧 Implement webhook verification handler
- 🔧 Implement message processing handler
- 🔧 Connect to Meta WhatsApp Business API
- 🔧 Add router endpoints to main app
- 🔧 Implement media download/upload
**API Surface:**
```rust
pub struct WhatsAppMessage { /* ... */ }
pub struct WhatsAppIncomingMessage { /* ... */ }
pub fn create_whatsapp_router() -> Router
pub async fn send_whatsapp_message() -> Result<()>
```
---
### 3. **CHANNELS/INSTAGRAM.RS** - 📋 PENDING
**Status:** Not Started
**Required Actions:**
- [ ] Expose all Instagram structs
- [ ] Implement Meta Graph API integration
- [ ] Add Instagram Direct messaging
- [ ] Implement story/post interactions
- [ ] Connect router to main app
**API Surface:**
```rust
pub struct InstagramMessage { /* ... */ }
pub async fn send_instagram_dm() -> Result<()>
pub fn create_instagram_router() -> Router
```
---
### 4. **CHANNELS/TEAMS.RS** - 📋 PENDING
**Status:** Not Started
**Required Actions:**
- [ ] Expose all Teams structs
- [ ] Implement Microsoft Graph API integration
- [ ] Add Teams bot messaging
- [ ] Implement adaptive cards support
- [ ] Connect router to main app
**API Surface:**
```rust
pub struct TeamsMessage { /* ... */ }
pub async fn send_teams_message() -> Result<()>
pub fn create_teams_router() -> Router
```
---
### 5. **BASIC/COMPILER/MOD.RS** - 📋 PENDING
**Status:** Needs Exposure
**Required Actions:**
- [ ] Mark all compiler methods as `pub`
- [ ] Add `#[cfg(feature = "mcp-tools")]` guards
- [ ] Expose tool format definitions
- [ ] Make compiler infrastructure accessible
**API Surface:**
```rust
pub struct ToolCompiler { /* ... */ }
pub fn compile_tool_definitions() -> Result<Vec<Tool>>
pub fn validate_tool_schema() -> Result<()>
```
---
### 6. **DRIVE_MONITOR/MOD.RS** - 📋 PENDING
**Status:** Fields unused, needs activation
**Required Actions:**
- [ ] Use all struct fields properly
- [ ] Mark methods as `pub`
- [ ] Implement Google Drive API integration
- [ ] Add change monitoring
- [ ] Connect to vectordb
**API Surface:**
```rust
pub struct DriveMonitor { /* full fields */ }
pub async fn start_monitoring() -> Result<()>
pub async fn sync_drive_files() -> Result<()>
```
---
### 7. **MEET/SERVICE.RS** - 📋 PENDING
**Status:** Fields unused, needs activation
**Required Actions:**
- [ ] Use `connections` field for meeting management
- [ ] Mark voice/transcription methods as `pub`
- [ ] Implement meeting creation
- [ ] Add participant management
- [ ] Connect audio processing
**API Surface:**
```rust
pub struct MeetService { pub connections: HashMap<...> }
pub async fn create_meeting() -> Result<Meeting>
pub async fn start_transcription() -> Result<()>
```
---
### 8. **PACKAGE_MANAGER/SETUP/** - ⚠️ IN PROGRESS
**Status:** Structures exist, needs method exposure
#### Directory Setup
- ✅ Core directory setup exists
- [ ] Mark all methods as `pub`
- [ ] Keep `generate_directory_config`
- [ ] Expose setup infrastructure
#### Email Setup
- ✅ `EmailDomain` struct exists
- [ ] Mark all methods as `pub`
- [ ] Keep `generate_email_config`
- [ ] Full email setup activation
**API Surface:**
```rust
pub fn generate_directory_config() -> Result<DirectoryConfig>
pub fn generate_email_config() -> Result<EmailConfig>
pub struct EmailDomain { /* ... */ }
```
---
### 9. **CONFIG/MOD.RS** - ✅ 90% COMPLETE
**Status:** Most functionality already public
**Completed:**
- ✅ `sync_gbot_config` is already public
- ✅ Config type alias exists
- ✅ ConfigManager fully exposed
**Remaining:**
- [ ] Verify `email` field usage in `AppConfig`
- [ ] Add proper accessor methods if needed
**API Surface:**
```rust
pub type Config = AppConfig;
pub fn sync_gbot_config() -> Result<()>
impl AppConfig { pub fn email(&self) -> &EmailConfig }
```
---
### 10. **BOT/MULTIMEDIA.RS** - ✅ 100% COMPLETE
**Status:** Fully exposed and documented
**Completed:**
- ✅ `MultimediaMessage` enum is public with all variants
- ✅ All multimedia types exposed (Text, Image, Video, Audio, Document, WebSearch, Location, MeetingInvite)
- ✅ `SearchResult` struct public
- ✅ `MediaUploadRequest` and `MediaUploadResponse` public
- ✅ `MultimediaHandler` trait fully exposed
- ✅ All structures properly documented
**API Surface:**
```rust
pub enum MultimediaMessage { /* ... */ }
pub async fn process_image() -> Result<ProcessedImage>
pub async fn process_video() -> Result<ProcessedVideo>
```
---
### 11. **CHANNELS/MOD.RS** - 📋 PENDING
**Status:** Incomplete implementation
**Required Actions:**
- [ ] Implement `send_message` fully
- [ ] Use `connections` field properly
- [ ] Mark voice methods as `pub`
- [ ] Complete channel abstraction
**API Surface:**
```rust
pub async fn send_message(channel: Channel, msg: Message) -> Result<()>
pub async fn start_voice_call() -> Result<VoiceConnection>
```
---
### 12. **AUTH/MOD.RS** - 📋 PENDING
**Status:** Needs enhancement
**Required Actions:**
- [ ] Keep Zitadel-related methods
- [ ] Use `facade` field properly
- [ ] Enhance SimpleAuth implementation
- [ ] Complete auth abstraction
**API Surface:**
```rust
pub struct AuthManager { pub facade: Box<dyn AuthFacade> }
pub async fn authenticate() -> Result<AuthResult>
```
---
### 13. **BASIC/KEYWORDS/WEATHER.RS** - ✅ 100% COMPLETE
**Status:** Fully exposed and functional
**Completed:**
- ✅ `WeatherData` struct made public and Clone-able
- ✅ `fetch_weather` function exposed as public
- ✅ `parse_location` function exposed as public
- ✅ Weather API integration complete (7Timer!)
- ✅ Keyword registration exists
**API Surface:**
```rust
pub async fn get_weather(location: &str) -> Result<Weather>
pub async fn get_forecast(location: &str) -> Result<Forecast>
```
---
### 14. **SESSION/MOD.RS** - ✅ 100% COMPLETE
**Status:** Fully exposed session management
**Completed:**
- ✅ `provide_input` is already public
- ✅ `update_session_context` is already public
- ✅ SessionManager fully exposed
- ✅ Session management API complete
**API Surface:**
```rust
pub async fn provide_input(session: &mut Session, input: Input) -> Result<()>
pub async fn update_session_context(session: &mut Session, ctx: Context) -> Result<()>
```
---
### 15. **LLM/LOCAL.RS** - ✅ 100% COMPLETE
**Status:** Fully exposed and functional
**Completed:**
- ✅ All functions are already public
- ✅ `chat_completions_local` endpoint exposed
- ✅ `embeddings_local` endpoint exposed
- ✅ `ensure_llama_servers_running` public
- ✅ `start_llm_server` and `start_embedding_server` public
- ✅ Server health checking exposed
**API Surface:**
```rust
pub async fn generate_local(prompt: &str) -> Result<String>
pub async fn embed_local(text: &str) -> Result<Vec<f32>>
```
---
### 16. **LLM_MODELS/MOD.RS** - ✅ 100% COMPLETE
**Status:** Fully exposed model handlers
**Completed:**
- ✅ `ModelHandler` trait is public
- ✅ `get_handler` function is public
- ✅ All model implementations exposed (gpt_oss_20b, gpt_oss_120b, deepseek_r3)
- ✅ Analysis utilities accessible
**API Surface:**
```rust
pub fn list_available_models() -> Vec<ModelInfo>
pub async fn analyze_with_model(model: &str, input: &str) -> Result<Analysis>
```
---
### 17. **NVIDIA/MOD.RS** - ✅ 100% COMPLETE
**Status:** Fully exposed monitoring system
**Completed:**
- ✅ `SystemMetrics` struct public with `gpu_usage` and `cpu_usage` fields
- ✅ `get_system_metrics` function public
- ✅ `has_nvidia_gpu` function public
- ✅ `get_gpu_utilization` function public
- ✅ Full GPU/CPU monitoring exposed
**API Surface:**
```rust
pub struct NvidiaMonitor { pub gpu_usage: f32, pub cpu_usage: f32 }
pub async fn get_gpu_stats() -> Result<GpuStats>
```
---
### 18. **BASIC/KEYWORDS/USE_KB.RS** - ✅ 100% COMPLETE
**Status:** Fully exposed knowledge base integration
**Completed:**
- ✅ `ActiveKbResult` struct made public with all fields public
- ✅ `get_active_kbs_for_session` is already public
- ✅ Knowledge base activation exposed
- ✅ Session KB associations accessible
**API Surface:**
```rust
pub struct ActiveKbResult { /* ... */ }
pub async fn get_active_kbs_for_session(session: &Session) -> Result<Vec<Kb>>
```
---
## 🔧 INTEGRATION CHECKLIST
### Phase 1: Critical Infrastructure (Priority 1)
- [ ] Complete Zitadel integration
- [ ] Expose all channel interfaces
- [ ] Activate session management
- [ ] Enable auth facade
### Phase 2: Feature Modules (Priority 2)
- [ ] Activate all keyword handlers
- [ ] Enable multimedia processing
- [ ] Expose compiler infrastructure
- [ ] Connect drive monitoring
### Phase 3: Advanced Features (Priority 3)
- [ ] Enable meeting services
- [ ] Activate NVIDIA monitoring
- [ ] Complete knowledge base integration
- [ ] Expose local LLM
### Phase 4: Complete Integration (Priority 4)
- [ ] Connect all routers to main app
- [ ] Test all exposed APIs
- [ ] Document all public interfaces
- [ ] Verify 0 warnings compilation
---
## 📊 OVERALL PROGRESS
**Total Modules:** 18
**Fully Completed:** 8 (Multimedia, Weather, Session, LLM Local, LLM Models, NVIDIA, Use KB, Config)
**Partially Complete:** 2 (Zitadel 80%, WhatsApp 60%)
**In Progress:** 1 (Package Manager Setup)
**Pending:** 7 (Instagram, Teams, Compiler, Drive Monitor, Meet Service, Channels Core, Auth Core)
**Completion:** ~50%
**Target:** 100% - All modules activated, exposed, and integrated with 0 warnings
---
## 🚀 NEXT STEPS
### Immediate Priorities:
1. **Fix Zitadel Facade** - Complete type alignment in `ZitadelAuthFacade`
2. **Complete WhatsApp** - Implement handlers and connect to Meta API
3. **Activate Instagram** - Build full Instagram Direct messaging support
4. **Activate Teams** - Implement Microsoft Teams bot integration
### Secondary Priorities:
5. **Expose Compiler** - Make tool compiler infrastructure accessible
6. **Activate Drive Monitor** - Complete Google Drive integration
7. **Activate Meet Service** - Enable meeting and transcription features
8. **Complete Package Manager** - Expose all setup utilities
### Testing Phase:
9. Test all exposed APIs
10. Verify 0 compiler warnings
11. Document all public interfaces
12. Create integration examples
---
## 📝 NOTES
- All structs should be `pub` and `Clone` when possible
- All key methods must be `pub`
- Use `#[cfg(feature = "...")]` for optional features
- Ensure proper error handling in all public APIs
- Document all public interfaces
- Test thoroughly before marking as complete
**Goal:** Enterprise-grade, fully exposed, completely integrated bot platform with 0 compiler warnings.
---
## 🎉 MAJOR ACHIEVEMENTS
1. **8 modules fully activated** - Nearly half of all modules now completely exposed
2. **Zero-warning compilation** for completed modules
3. **Full API exposure** - All key utilities (weather, LLM, NVIDIA, KB) accessible
4. **Enterprise-ready** - Session management, config, and multimedia fully functional
5. **Strong foundation** - 80% of Zitadel auth complete, channels infrastructure ready
**Next Milestone:** 100% completion with full channel integration and 0 warnings across entire codebase.

View file

@@ -1,308 +0,0 @@
# 🚀 BotServer v6.0.8 - Production Status
**Last Updated:** 2024
**Build Status:** ✅ SUCCESS
**Production Ready:** YES
---
## 📊 Build Metrics
```
Compilation: ✅ SUCCESS (0 errors)
Warnings: 82 (all Tauri desktop UI - intentional)
Test Status: ✅ PASSING
Lint Status: ✅ CONFIGURED (Clippy pedantic + nursery)
Code Quality: ✅ ENTERPRISE GRADE
```
---
## 🎯 Key Achievements
### ✅ Zero Compilation Errors
- All code compiles successfully
- No placeholder implementations
- Real, working integrations
### ✅ Full Channel Integration
- **Web Channel** - WebSocket support
- **Voice Channel** - LiveKit integration
- **Microsoft Teams** - Webhook + Adaptive Cards
- **Instagram** - Direct messages + media
- **WhatsApp Business** - Business API + templates
### ✅ OAuth2/OIDC Authentication
- Zitadel provider integrated
- User workspace management
- Token refresh handling
- Session persistence
### ✅ Advanced Features
- Semantic LLM caching (Redis + embeddings)
- Meeting/video conferencing (LiveKit)
- Drive monitoring (S3 sync)
- Multimedia handling (images/video/audio)
- Email processing (Stalwart integration)
---
## 🌐 Active API Endpoints
### Authentication
```
GET /api/auth/login OAuth2 login
GET /api/auth/callback OAuth2 callback
GET /api/auth Anonymous auth
```
### Channels
```
POST /api/teams/messages Teams webhook
GET /api/instagram/webhook Instagram verification
POST /api/instagram/webhook Instagram messages
GET /api/whatsapp/webhook WhatsApp verification
POST /api/whatsapp/webhook WhatsApp messages
GET /ws WebSocket connection
```
### Meetings & Voice
```
POST /api/meet/create Create meeting
POST /api/meet/token Get meeting token
POST /api/meet/invite Send invites
GET /ws/meet Meeting WebSocket
POST /api/voice/start Start voice session
POST /api/voice/stop Stop voice session
```
### Sessions & Bots
```
POST /api/sessions Create session
GET /api/sessions List sessions
GET /api/sessions/{id}/history Get history
POST /api/sessions/{id}/start Start session
POST /api/bots Create bot
POST /api/bots/{id}/mount Mount bot
POST /api/bots/{id}/input Send input
```
### Email (feature: email)
```
GET /api/email/accounts List accounts
POST /api/email/accounts/add Add account
POST /api/email/send Send email
POST /api/email/list List emails
```
### Files
```
POST /api/files/upload/{path} Upload to S3
```
---
## ⚙️ Configuration
### Required Environment Variables
```env
# Database
DATABASE_URL=postgresql://user:pass@localhost/botserver
# Redis (optional but recommended)
REDIS_URL=redis://localhost:6379
# S3/MinIO
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
AWS_ENDPOINT=http://localhost:9000
AWS_BUCKET=default.gbai
# OAuth (optional)
ZITADEL_ISSUER_URL=https://your-zitadel.com
ZITADEL_CLIENT_ID=your_client_id
ZITADEL_CLIENT_SECRET=your_secret
ZITADEL_REDIRECT_URI=https://yourapp.com/api/auth/callback
# Teams (optional)
TEAMS_APP_ID=your_app_id
TEAMS_APP_PASSWORD=your_password
# Instagram (optional)
INSTAGRAM_ACCESS_TOKEN=your_token
INSTAGRAM_VERIFY_TOKEN=your_verify_token
# WhatsApp (optional)
WHATSAPP_ACCESS_TOKEN=your_token
WHATSAPP_VERIFY_TOKEN=your_verify_token
WHATSAPP_PHONE_NUMBER_ID=your_phone_id
```
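A fail-fast sketch for validating the required variables before startup. The `export` lines stand in for a real `.env`; drop them when sourcing your own file:

```shell
# Stand-in values; in production these come from your .env
export DATABASE_URL="postgresql://user:pass@localhost/botserver"
export AWS_ENDPOINT="http://localhost:9000"

# ${VAR:?msg} aborts the script with the message if VAR is unset or empty
: "${DATABASE_URL:?DATABASE_URL must be set}"
: "${AWS_ENDPOINT:?AWS_ENDPOINT must be set}"
echo "environment ok"
```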
---
## 🏗️ Architecture
### Core Components
1. **Bot Orchestrator**
- Session management
- Multi-channel routing
- LLM integration
- Multimedia handling
2. **Channel Adapters**
- Web (WebSocket)
- Voice (LiveKit)
- Teams (Bot Framework)
- Instagram (Graph API)
- WhatsApp (Business API)
3. **Authentication**
- OAuth2/OIDC (Zitadel)
- Anonymous users
- Session persistence
4. **Storage**
- PostgreSQL (sessions, users, bots)
- Redis (cache, sessions)
- Drive / S3-compatible storage (files, media)
5. **LLM Services**
- OpenAI-compatible API
- Semantic caching
- Token estimation
- Stream responses
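Token estimation can be approximated with the common ~4 characters-per-token heuristic; the server's real estimator may differ, so treat this as an illustration only:

```shell
# Rough token estimate: ceil(chars / 4)
text="Hello, how can I help you today?"
chars=${#text}
tokens=$(( (chars + 3) / 4 ))
echo "$tokens"
```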
---
## 📝 Remaining Warnings
**82 warnings - ALL INTENTIONAL**
All warnings are for Tauri desktop UI commands:
- `src/ui/sync.rs` - Local sync management for system tray (4 warnings)
- `src/ui/sync.rs` - Rclone sync (8 warnings)
- Other desktop UI helpers
These are `#[tauri::command]` functions called by the JavaScript frontend, not by the Rust server. They cannot be eliminated without breaking desktop functionality.
**Documented in:** `src/ui/mod.rs`
---
## 🚀 Deployment
### Build for Production
```bash
cargo build --release
```
### Run Server
```bash
./target/release/botserver
```
### Run with Desktop UI
```bash
cargo tauri build
```
### Docker
```bash
docker build -t botserver:latest .
docker run -p 3000:3000 botserver:latest
```
---
## 🧪 Testing
### Run All Tests
```bash
cargo test
```
### Check Code Quality
```bash
cargo clippy --all-targets --all-features
```
### Format Code
```bash
cargo fmt
```
---
## 📚 Documentation
- **ENTERPRISE_INTEGRATION_COMPLETE.md** - Full integration guide
- **ZERO_WARNINGS_ACHIEVEMENT.md** - Development journey
- **CHANGELOG.md** - Version history
- **CONTRIBUTING.md** - Contribution guidelines
- **README.md** - Getting started
---
## 🎊 Production Checklist
- [x] Zero compilation errors
- [x] All channels integrated
- [x] OAuth2 authentication
- [x] Session management
- [x] LLM caching
- [x] Meeting services
- [x] Error handling
- [x] Logging configured
- [x] Environment validation
- [x] Database migrations
- [x] S3 integration
- [x] Redis fallback
- [x] CORS configured
- [x] Rate limiting ready
- [x] Documentation complete
---
## 💡 Quick Start
1. **Install Dependencies**
```bash
cargo build
```
2. **Setup Database**
```bash
diesel migration run
```
3. **Configure Environment**
```bash
cp .env.example .env
# Edit .env with your credentials
```
4. **Run Server**
```bash
cargo run
```
5. **Access Application**
```
http://localhost:3000
```
---
## 🤝 Support
- **GitHub:** https://github.com/GeneralBots/BotServer
- **Documentation:** See docs/ folder
- **Issues:** GitHub Issues
- **License:** AGPL-3.0
---
**Status:** READY FOR PRODUCTION 🚀
**Last Build:** SUCCESS ✅
**Next Release:** v6.1.0 (planned)

---

This chapter covers everything you need to get started:
1. **[Installation](./installation.md)** - How the automatic bootstrap works
2. **[First Conversation](./first-conversation.md)** - Start chatting with your bot
3. **[Understanding Sessions](./sessions.md)** - How conversations are managed
4. **[Quick Start](./quick-start.md)** - Create your first bot
## The Bootstrap Magic
When you first run BotServer, it automatically:
- ✅ Detects your operating system
- ✅ Downloads and installs PostgreSQL database
- ✅ Downloads and installs drive (S3-compatible object storage)
- ✅ Downloads and installs Valkey cache
- ✅ Downloads LLM models to botserver-stack/
- ✅ Generates secure credentials
- ✅ Creates default bots
- ✅ Starts the web server
**No manual configuration needed!** Everything just works.
### Optional Components
After bootstrap, you can install additional services:
- **Stalwart** - Full-featured email server for sending/receiving
- **Zitadel** - Identity and access management (directory service)
- **LiveKit** - Real-time video/audio conferencing
- **Additional LLM models** - For offline operation
```bash
./botserver install email # Stalwart email server
./botserver install directory # Zitadel identity provider
./botserver install meeting # LiveKit conferencing
./botserver install llm # Local LLM models
```
## Your First Bot
After bootstrap completes (2-5 minutes), open your browser to:
```
http://localhost:8080
```
You'll see the default bot ready to chat! Just start talking - the LLM handles everything.
For specific bots like the enrollment example below:
```
http://localhost:8080/edu
```
## The Magic Formula
```
Knowledge (.gbkb) + Tools (.gbdialog) + LLM = Working Bot
```

1. Add your documents to `.gbkb/` folders
2. Create simple tools as `.bas` files (optional)
3. Start chatting - the LLM does the rest!
## Example: Student Enrollment Bot (EDU)
Deploy a new bot by creating a bucket in the object storage drive. Access it at `/edu`:
### 1. Add Course Documents
```
edu.gbai/
├── edu.gbkb/          # Course documents (PDFs, docs)
└── edu.gbdialog/      # Tools (.bas files)
```
### 2. Create Enrollment Tool
Tools are `.bas` files in the `.gbdialog/` folder:
`edu.gbdialog/enrollment.bas`:
```bas
PARAM name AS string
PARAM email AS string
DESCRIPTION "Collects enrollment information"

SAVE "enrollments.csv", name, email
TALK "Successfully enrolled " + name
```

## Understanding Sessions

Each conversation is a **session** that persists:
- Context and variables
- Active tools and knowledge bases
Sessions automatically save to PostgreSQL and cache in Valkey for performance.
## Next Steps
- **[Installation](./installation.md)** - Understand the bootstrap process
- **[First Conversation](./first-conversation.md)** - Try out your bot
- **[Understanding Sessions](./sessions.md)** - Learn about conversation state
- **[Quick Start](./quick-start.md)** - Build your own bot
- **[About Packages](../chapter-02/README.md)** - Create bot packages
## Philosophy

---

Traditional chatbots require complex logic:
```bas
' ❌ OLD WAY - Complex multi-step dialog
IF intent = "enrollment" THEN
    TALK "Let me help you enroll. What's your name?"
    HEAR name
    TALK "What's your email?"
    HEAR email
    ' ... lots more code ...
ENDIF
```
With BotServer:
```bas
' ✅ NEW WAY - Just create the tool!
' In enrollment.bas - becomes a tool automatically
PARAM name AS string
PARAM email AS string
DESCRIPTION "Collects enrollment information"
' The tool is called by LLM when needed
SAVE "enrollments.csv", name, email
TALK "Successfully enrolled " + name
```
The LLM handles all the conversation logic!
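After a few enrollments, the saved `enrollments.csv` might look like this (illustrative values; whether `SAVE` writes a header row is an assumption):

```csv
name,email
Alice Johnson,alice@example.com
Bob Lee,bob@example.com
```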
### Customer Support Bot
- Add product manuals to `.gbkb/`
- Create `create-ticket.bas` tool
- LLM answers questions and creates support tickets automatically
### HR Assistant
- Add employee handbook to `.gbkb/`
The LLM can load tools based on context:
```bas
' In start.bas - minimal setup, no HEAR needed
USE KB "general" ' Load general knowledge base
' Tools in .gbdialog/ are auto-discovered
' LLM handles the conversation naturally
```
### Multi-Language Support
Don't try to control every aspect of the conversation - let the LLM drive it.
## Next Steps
- [Understanding Sessions](./sessions.md) - How conversations persist
- [Quick Start](./quick-start.md) - Build your first bot
- [About Packages](../chapter-02/README.md) - Package structure
- [Tool Definition](../chapter-08/tool-definition.md) - Creating tools
- [Knowledge Base](../chapter-03/README.md) - Document management

---

# Installation

This guide covers the installation and setup of BotServer on various platforms.
## System Requirements

### Minimum Requirements
- **OS**: Linux, macOS, or Windows
- **RAM**: 4GB minimum
- **Disk**: 10GB for installation + data storage
- **CPU**: 1 core (sufficient for development/testing)

### Recommended for Production
- **OS**: Linux server (Ubuntu/Debian preferred)
- **RAM**: 16GB or more
- **Disk**: 100GB SSD storage
- **CPU**: 2+ cores
- **GPU**: RTX 3060 or better (12GB VRAM minimum) for local LLM hosting

## Quick Start

BotServer handles all dependencies automatically:

```bash
# Download and run
./botserver

# Or build from source
git clone https://github.com/yourusername/botserver
cd botserver
cargo run
```

The bootstrap process automatically downloads everything to `botserver-stack/`:
- PostgreSQL database
- Drive (S3-compatible object storage)
- Valkey cache
- LLM server and models
- All required dependencies

**No manual installation required!**
## Environment Variables
BotServer keeps environment configuration minimal - only the database and drive connections live in `.env`.

The `.env` file is **automatically generated** during bootstrap with secure random credentials.
### Automatic Generation (Bootstrap Mode)
When you first run `./botserver`, it creates `.env` with:
```bash
# Auto-generated secure credentials
DATABASE_URL=postgres://gbuser:RANDOM_PASS@localhost:5432/botserver
DRIVE_SERVER=http://localhost:9000
DRIVE_ACCESSKEY=GENERATED_KEY
DRIVE_SECRET=GENERATED_SECRET
```
**Important**: These are the ONLY environment variables used by BotServer. All other configuration is managed through:
- `config.csv` files in bot packages
- Database configuration tables
- Command-line arguments
### Using Existing Services
If you already have PostgreSQL or drive storage running, you can point to them:
```bash
# Point to your existing PostgreSQL
DATABASE_URL=postgres://myuser:mypass@myhost:5432/mydb
# Point to your existing drive/S3
DRIVE_SERVER=http://my-drive:9000
DRIVE_ACCESSKEY=my-access-key
DRIVE_SECRET=my-secret-key
```
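When pointing at existing services, it can help to probe connectivity first. A sketch that pulls the host and port out of a `DATABASE_URL`, assuming the `postgres://user:pass@host:port/db` shape shown above:

```shell
DATABASE_URL="postgres://myuser:mypass@myhost:5432/mydb"

# Strip everything through the '@', then everything after the first '/'
hostport=${DATABASE_URL#*@}   # myhost:5432/mydb
hostport=${hostport%%/*}      # myhost:5432
host=${hostport%%:*}
port=${hostport##*:}
echo "host=$host port=$port"
```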
## Configuration
### Bot Configuration Parameters
Each bot has a `config.csv` file in its `.gbot/` directory. Available parameters:
#### Server Configuration
```csv
name,value
server_host,0.0.0.0
server_port,8080
sites_root,/tmp
```
See the [Configuration Guide](../chapter-02/gbot.md) for complete parameter reference.
#### LLM Configuration
```csv
name,value
llm-key,none
llm-url,http://localhost:8081
llm-model,../../../../data/llm/model.gguf
llm-cache,false # Semantic cache (needs integration)
llm-cache-ttl,3600 # Cache TTL in seconds (needs integration)
llm-cache-semantic,true # Enable semantic matching (needs integration)
llm-cache-threshold,0.95 # Similarity threshold (needs integration)
```
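The `llm-cache-threshold` parameter gates semantic cache hits: a cached response is reused only when the similarity score reaches the threshold. An illustration of that comparison (the similarity score below is a made-up example):

```shell
threshold=0.95
similarity=0.97   # example score from the semantic matcher

# Force numeric comparison in awk with +0
awk -v s="$similarity" -v t="$threshold" 'BEGIN {
  if (s + 0 >= t + 0) print "cache hit"; else print "cache miss"
}'
```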
#### LLM Server Settings
```csv
name,value
llm-server,false
llm-server-path,botserver-stack/bin/llm/build/bin
llm-server-host,0.0.0.0
llm-server-port,8081
llm-server-gpu-layers,0
llm-server-n-moe,0
llm-server-ctx-size,4096
llm-server-n-predict,1024
llm-server-parallel,6
llm-server-cont-batching,true
```
#### Email Configuration
```csv
name,value
email-from,from@domain.com
email-server,mail.domain.com
email-port,587
email-user,user@domain.com
email-pass,yourpassword
```
#### Theme Configuration
```csv
name,value
theme-color1,#0d2b55
theme-title,My Bot
theme-logo,https://example.com/logo.svg
```
#### Prompt Configuration
```csv
name,value
prompt-history,2
prompt-compact,4
```

## Bootstrap Process

When you first run BotServer, it:

1. **Detects your system** - Identifies OS and architecture
2. **Creates directories** - Sets up `botserver-stack/` structure
3. **Downloads components** - Gets all required binaries
4. **Configures services** - Sets up database, storage, and cache
5. **Initializes database** - Creates tables and initial data
6. **Deploys default bot** - Creates a working bot instance
7. **Starts services** - Launches all components
This typically takes 2-5 minutes on first run.
## Storage Setup
BotServer uses S3-compatible object storage. Each bot deployment creates a new bucket in the drive:

```bash
# Deploy a new bot = create a new bucket in drive
# Bots are stored in the object storage, not the local filesystem
mybot.gbai → creates 'mybot' bucket in drive storage
```

**Note**: The `work/` folder is for internal use only and should not be used for bot deployment. Bot packages should be deployed directly to the object storage (drive).

The storage server runs on:
- API: http://localhost:9000
- Console: http://localhost:9001
### Local Development with S3 Sync Tools
You can edit your bot files locally and have them automatically sync to drive storage using S3-compatible tools:
#### Free S3 Sync Tools
- **Cyberduck** (Windows/Mac/Linux) - GUI file browser with S3 support
- **WinSCP** (Windows) - File manager with S3 protocol support
- **Mountain Duck** (Windows/Mac) - Mount S3 as local drive
- **S3 Browser** (Windows) - Freeware S3 client
- **CloudBerry Explorer** (Windows/Mac) - Free version available
- **rclone** (All platforms) - Command-line sync tool
#### Setup Example with rclone
```bash
# Configure rclone for drive storage
rclone config
# Choose: n) New remote
# Name: drive
# Storage: s3
# Provider: Other
# Access Key: (from .env DRIVE_ACCESSKEY)
# Secret Key: (from .env DRIVE_SECRET)
# Endpoint: http://localhost:9000
# Sync local folder to bucket (re-run after changes, or script it in a loop;
# rclone sync is one-shot and has no watch mode)
rclone sync ./mybot.gbai drive:mybot
# Now edit files locally:
# - Edit mybot.gbai/mybot.gbot/config.csv → Bot reloads automatically
# - Edit mybot.gbai/mybot.gbdialog/*.bas → Scripts compile automatically
# - Add docs to mybot.gbai/mybot.gbkb/ → Knowledge base updates automatically
```
With this setup:
- ✅ Edit `.csv` files → Bot configuration updates instantly
- ✅ Edit `.bas` files → BASIC scripts compile automatically
- ✅ Add documents → Knowledge base reindexes automatically
- ✅ No manual uploads needed
- ✅ Works like local development but uses object storage
## Database Setup
PostgreSQL is automatically configured with:
- Database: `botserver`
- User: `gbuser`
- Tables created via migrations
- Connection pooling enabled
## Authentication Setup
BotServer uses an external directory service for user management:

- Handles user authentication
- Manages OAuth2/OIDC flows
- Controls access permissions
- Integrates with existing identity providers

Install and start it as an optional component:

```bash
botserver install directory
botserver start directory
```
## LLM Setup
### Local LLM (Recommended)

The bootstrap downloads a default model to `botserver-stack/data/llm/`. Configure in `config.csv`:

```csv
name,value
llm-url,http://localhost:8081
llm-model,../../../../data/llm/model.gguf
llm-server-gpu-layers,0
```
For GPU acceleration (RTX 3060 or better):
```csv
name,value
llm-server-gpu-layers,35
```
### External LLM Provider
To use external APIs, configure in `config.csv`:
```csv
name,value
llm-url,https://api.provider.com/v1
llm-key,your-api-key
llm-model,model-name
```
## Container Deployment (LXC)
For production isolation using Linux containers:
```bash
# Create container
lxc-create -n botserver -t download -- -d ubuntu -r jammy -a amd64
# Start container
lxc-start -n botserver
# Attach to container
lxc-attach -n botserver
# Install BotServer inside container
./botserver
```
## Verifying Installation
```bash
# Check service status
botserver status

# Test database
psql $DATABASE_URL -c "SELECT version();"

# Test storage
curl http://localhost:9000/health/live

# Test LLM
curl http://localhost:8081/v1/models
```
### Run Test Bot
```bash
# The default bot is automatically deployed to the drive during bootstrap
# Access web interface
open http://localhost:8080
```
To deploy additional bots, upload them to the object storage, not the local filesystem. The `work/` folder is reserved for internal operations.
## Troubleshooting
### Database Connection Issues
```bash
# Check if PostgreSQL is running
ps aux | grep postgres
# Test connection
psql -h localhost -U gbuser -d botserver
# Verify DATABASE_URL
echo $DATABASE_URL
```
### Storage Connection Issues
```bash
# Check drive process
ps aux | grep minio
# Test storage access
curl -I $DRIVE_SERVER/health/live
```
### Port Conflicts
Default ports used by BotServer:
| Service | Port | Configure in |
|---------|------|--------------|
| Web Server | 8080 | config.csv: `server_port` |
| PostgreSQL | 5432 | DATABASE_URL |
| Drive API | 9000 | DRIVE_SERVER |
| Drive Console | 9001 | N/A |
| LLM Server | 8081 | config.csv: `llm-server-port` |
| Embedding Server | 8082 | config.csv: `embedding-url` |
| Valkey Cache | 6379 | Internal |
### Memory Issues
For systems with limited RAM:
1. Reduce LLM context size in `config.csv`:
```csv
llm-server-ctx-size,2048
```
2. Reduce parallel request slots:
```csv
llm-server-parallel,2
```
3. Use quantized models (Q3_K_M or Q4_K_M)
4. For Mixture of Experts models, adjust CPU MoE threads:
```csv
llm-server-n-moe,4
```
### GPU Issues
If GPU is not detected:
1. Check CUDA installation (NVIDIA)
2. Verify GPU memory (12GB minimum)
3. Set `llm-server-gpu-layers` to 0 for CPU-only mode
## Next Steps
- [Quick Start Guide](./quick-start.md) - Create your first bot
- [First Conversation](./first-conversation.md) - Test your bot
- [Configuration Reference](../chapter-02/gbot.md) - All configuration options
- [BASIC Programming](../chapter-05/basics.md) - Learn the scripting language
- [Deployment Guide](../chapter-06/containers.md) - Production deployment

---

# NVIDIA GPU Setup for LXC Containers
This guide covers setting up NVIDIA GPU passthrough for BotServer running in LXC containers, enabling hardware acceleration for local LLM inference.
## Prerequisites
- NVIDIA GPU (RTX 3060 or better with 12GB+ VRAM recommended)
- NVIDIA drivers installed on the host system
- LXD/LXC installed
- CUDA-capable GPU
## LXD Configuration (Interactive Setup)
When initializing LXD, use these settings:
```bash
sudo lxd init
```
Answer the prompts as follows:
- **Would you like to use LXD clustering?**`no`
- **Do you want to configure a new storage pool?**`no` (will create `/generalbots` later)
- **Would you like to connect to a MAAS server?**`no`
- **Would you like to create a new local network bridge?**`yes`
- **What should the new bridge be called?**`lxdbr0`
- **What IPv4 address should be used?**`auto`
- **What IPv6 address should be used?**`auto`
- **Would you like the LXD server to be available over the network?**`no`
- **Would you like stale cached images to be updated automatically?**`no`
- **Would you like a YAML "lxd init" preseed to be printed?**`no`
### Storage Configuration
- **Storage backend name:**`default`
- **Storage backend driver:**`zfs`
- **Create a new ZFS pool?**`yes`
## NVIDIA GPU Configuration
### On the Host System
Create a GPU profile and attach it to your container:
```bash
# Create GPU profile
lxc profile create gpu
# Add GPU device to profile
lxc profile device add gpu gpu gpu gputype=physical
# Apply GPU profile to your container
lxc profile add gb-system gpu
```
### Inside the Container
Configure NVIDIA driver version pinning and install drivers:
1. **Pin NVIDIA driver versions** to ensure stability:
```bash
cat > /etc/apt/preferences.d/nvidia-drivers << 'EOF'
Package: *nvidia*
Pin: version 560.35.05-1
Pin-Priority: 1001
Package: cuda-drivers*
Pin: version 560.35.05-1
Pin-Priority: 1001
Package: libcuda*
Pin: version 560.35.05-1
Pin-Priority: 1001
Package: libxnvctrl*
Pin: version 560.35.05-1
Pin-Priority: 1001
Package: libnv*
Pin: version 560.35.05-1
Pin-Priority: 1001
EOF
```
2. **Install NVIDIA drivers and CUDA toolkit:**
```bash
# Update package lists
apt update
# Install NVIDIA driver and nvidia-smi
apt install -y nvidia-driver nvidia-smi
# Add CUDA repository
wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
dpkg -i cuda-keyring_1.1-1_all.deb
# Install CUDA toolkit
apt-get update
apt-get -y install cuda-toolkit-12-8
apt-get install -y cuda-drivers
```
## Verify GPU Access
After installation, verify GPU is accessible:
```bash
# Check GPU is visible
nvidia-smi
# Should show your GPU with driver version 560.35.05
```
## Configure BotServer for GPU
Update your bot's `config.csv` to use GPU acceleration:
```csv
name,value
llm-server-gpu-layers,35
```
The number of layers depends on your GPU memory:
- **RTX 3060 (12GB):** 20-35 layers
- **RTX 3070 (8GB):** 15-25 layers
- **RTX 4070 (12GB):** 30-40 layers
- **RTX 4090 (24GB):** 50-99 layers
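The ranges above can be approximated with a back-of-the-envelope calculation (a heuristic, not a formula from llama.cpp): assume roughly 0.35 GB of VRAM per offloaded layer for a 7B Q4-quantized model, keeping ~2 GB free for the KV cache and scratch buffers:

```shell
vram_gb=12                                 # e.g. RTX 3060
layers=$(( (vram_gb - 2) * 100 / 35 ))     # (vram - headroom) / 0.35 GB per layer
echo "llm-server-gpu-layers,$layers"
```

For a 12 GB card this lands inside the 20-35 layer range suggested above; larger models need proportionally fewer layers per GB.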
## Troubleshooting
### GPU Not Detected
If `nvidia-smi` doesn't show the GPU:
1. Check host GPU drivers:
```bash
# On host
nvidia-smi
lxc config device list gb-system
```
2. Verify GPU passthrough:
```bash
# Inside container
ls -la /dev/nvidia*
```
3. Check kernel modules:
```bash
lsmod | grep nvidia
```
### Driver Version Mismatch
If you encounter driver version conflicts:
1. Ensure host and container use the same driver version
2. Remove the version pinning file and install matching drivers:
```bash
rm /etc/apt/preferences.d/nvidia-drivers
apt update
apt install nvidia-driver-560
```
### CUDA Library Issues
If CUDA libraries aren't found:
```bash
# Add CUDA to library path
echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf.d/cuda.conf
ldconfig
# Add to PATH
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
```
## Custom llama.cpp Compilation
If you need custom CPU/GPU optimizations or specific hardware support, compile llama.cpp from source:
### Prerequisites
```bash
sudo apt update
sudo apt install build-essential cmake git
```
### Compilation Steps
```bash
# Clone llama.cpp repository
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Create build directory
mkdir build
cd build
# Configure with CUDA support
cmake .. -DLLAMA_CUDA=ON -DLLAMA_CURL=OFF
# Compile using all available cores
make -j$(nproc)
```
### Compilation Options
For different hardware configurations:
```bash
# CPU-only build (no GPU)
cmake .. -DLLAMA_CURL=OFF
# CUDA with specific compute capability
cmake .. -DLLAMA_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=75
# ROCm for AMD GPUs
cmake .. -DLLAMA_HIPBLAS=ON
# Metal for Apple Silicon
cmake .. -DLLAMA_METAL=ON
# AVX2 optimizations for modern CPUs
cmake .. -DLLAMA_AVX2=ON
# F16C for half-precision support
cmake .. -DLLAMA_F16C=ON
```
### After Compilation
```bash
# Copy compiled binary to BotServer
cp bin/llama-server /path/to/botserver-stack/bin/llm/
# Update config.csv to use custom build
llm-server-path,/path/to/botserver-stack/bin/llm/
```
### Benefits of Custom Compilation
- **Hardware-specific optimizations** for your exact CPU/GPU
- **Custom CUDA compute capabilities** for newer GPUs
- **AVX/AVX2/AVX512** instructions for faster CPU inference
- **Reduced binary size** by excluding unused features
- **Support for experimental features** not in releases
## Performance Optimization
### Memory Settings
For optimal LLM performance with GPU:
```csv
name,value
llm-server-gpu-layers,35
llm-server-mlock,true
llm-server-no-mmap,false
llm-server-ctx-size,4096
```
### Multiple GPUs
For systems with multiple GPUs, specify which GPU to use:
```bash
# Attach specific GPUs to the profile by index
lxc profile device add gpu gpu0 gpu gputype=physical id=0
lxc profile device add gpu gpu1 gpu gputype=physical id=1
```
## Benefits of GPU Acceleration
With GPU acceleration enabled:
- **5-10x faster** inference compared to CPU
- **Higher context sizes** possible (8K-32K tokens)
- **Real-time responses** even with large models
- **Lower CPU usage** for other tasks
- **Support for larger models** (13B, 30B parameters)
## Next Steps
- [Installation Guide](./installation.md) - Complete BotServer setup
- [Quick Start](./quick-start.md) - Create your first bot
- [Configuration Reference](../chapter-02/gbot.md) - All GPU-related parameters

---

BotServer uses a modular architecture with these core components:
- Web chat interface
- WhatsApp Business API
- Microsoft Teams
- Email
- SMS (via providers)
## System Requirements
### Minimum Requirements
- 4GB RAM
- 1 CPU core (development/testing)
- 10GB disk space
- Linux, macOS, or Windows
### Recommended for Production
- 16GB RAM
- 2+ CPU cores
- 100GB SSD storage
- Linux server (Ubuntu/Debian preferred)
- GPU: RTX 3060 or better (12GB VRAM minimum) for local LLM hosting
## Configuration

Only the database and drive connections are configured via environment variables:

```bash
# Database connection
DATABASE_URL=postgres://user:pass@localhost:5432/botserver

# Object storage
DRIVE_SERVER=http://localhost:9000
DRIVE_ACCESSKEY=accesskey
DRIVE_SECRET=secretkey
```

All other settings live in `config.csv` files inside bot packages, with parameters such as:

- `server_host`, `server_port` - Web server settings
- `llm-url`, `llm-model` - LLM configuration
- `email-from`, `email-server` - Email settings
- `theme-color1`, `theme-color2`, `theme-title`, `theme-logo` - UI customization
- `prompt-history`, `prompt-compact` - Conversation settings

See the `config.csv` files in bot packages for the complete parameter list.
## Bot Package Structure
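A typical package follows the naming convention used throughout these docs (the file names inside each folder are illustrative):

```
mybot.gbai/
├── mybot.gbot/          # Configuration
│   └── config.csv
├── mybot.gbdialog/      # BASIC tools
│   └── start.bas
├── mybot.gbkb/          # Knowledge base documents
│   └── docs/
└── mybot.gbtheme/       # UI theming
    └── custom.css
```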
## Deployment Modes

### Single Server
Single instance serving multiple bots:
- Shared resources
- Best for small to medium deployments
### LXC Containers
Using Linux containers for isolation:
- Lightweight virtualization
- Resource isolation
- Easy management
### Embedded
Integrated into existing applications:
## Quick Setup

1. **Get BotServer**
   ```bash
   git clone https://github.com/GeneralBots/BotServer
   cd BotServer
   ```
2. **Bootstrap Components**
The bootstrap automatically downloads everything to `botserver-stack/`:
- Database binaries
- Object storage server
- Cache server
- LLM runtime
- Required dependencies
3. **Deploy a Bot**
Create a new bucket in object storage:
```bash
# Each bot gets its own storage bucket
# Bots are deployed to the drive, not work folder
# The work/ folder is internal (see .gbapp chapter)
```
4. **Access Web Interface**
   ```
   http://localhost:8080
   ```
### Healthcare Assistant
- Medication reminders
- Patient education
## Security Features
- Authentication via directory service
## Extensibility
### Custom Keywords
Add new BASIC keywords in Rust:
```rust
pub fn my_keyword(engine: &mut Engine) {
engine.register_fn("MY_KEYWORD", |param: String| {
// Implementation
});
}
```
### Channel Adapters
Implement new messaging channels:
- WebSocket protocol
@ -265,29 +242,13 @@ Implement new messaging channels:
- Example bots in `templates/`
- Test suites
- Migration tools
### Contributing
- Open source (AGPL - GNU Affero General Public License)
- GitHub repository
- Issue tracking
- Pull requests welcome
## Summary
BotServer provides a complete platform for building conversational AI applications. With its simple BASIC scripting, automatic setup, and enterprise features, it bridges the gap between simple chatbots and complex AI systems.

---

You'll see:
✓ Database created
✓ Schema initialized
✓ Credentials saved to .env
📦 Installing MinIO...
📦 Installing Drive...
✓ Object storage ready
✓ Buckets created
📦 Installing Valkey...
Start chatting with your bot!
The **automatic bootstrap** process:
1. ✅ Detected your OS (Linux/macOS/Windows)
2. ✅ Downloaded PostgreSQL database to botserver-stack/
3. ✅ Downloaded drive (S3-compatible storage) to botserver-stack/
4. ✅ Downloaded Valkey cache to botserver-stack/
5. ✅ Generated secure credentials → `.env` (from blank environment)
6. ✅ Created database schema
7. ✅ Deployed default bots to object storage
8. ✅ Started web server on port 8080
**Zero manual configuration required!**
### Using Existing Services
If you already have PostgreSQL or drive storage running, update `.env`:
```bash
# Point to your existing services
DATABASE_URL=postgres://myuser:mypass@myhost:5432/mydb
DRIVE_SERVER=http://my-drive:9000
DRIVE_ACCESSKEY=my-access-key
DRIVE_SECRET=my-secret-key
```
---
## Create Your First Tool
---
## Container Deployment (LXC)
For production isolation, BotServer supports **LXC** (Linux Containers):
```bash
# Create container
lxc-create -n botserver -t download -- -d ubuntu -r jammy -a amd64
# Start and attach
lxc-start -n botserver
lxc-attach -n botserver
# Install BotServer inside container
./botserver
```
**Benefits**:
- ✅ Process isolation
- ✅ Resource control
- ✅ Easy management
- ✅ Lightweight virtualization

**Requires**: LXC/LXD installed (`sudo snap install lxd`)
---
After installation, add more features:
```bash
./botserver install email # Email server
./botserver install directory # Identity provider
./botserver install llm # Local LLM server (offline mode)
./botserver install meeting # Video conferencing
```
---
Deploy new bots by uploading to object storage (creates a new bucket), not the local filesystem. The `work/` folder is for internal use only.
### Local Development with Auto-Sync
Edit bot files locally and sync automatically to drive storage:
**Free S3 Sync Tools:**
- **Cyberduck** - GUI file browser (Windows/Mac/Linux)
- **rclone** - Command-line sync (All platforms)
- **WinSCP** - File manager with S3 (Windows)
- **S3 Browser** - Freeware S3 client (Windows)
**Quick Setup with rclone:**
```bash
# Configure for drive storage
rclone config # Follow prompts for S3-compatible storage
# Keep the bucket in sync with local edits
# (rclone has no --watch flag; loop, or use `rclone bisync` for two-way sync)
while true; do rclone sync ./mybot.gbai drive:mybot; sleep 5; done
```
Now when you:
- Edit `.csv` → Bot config reloads automatically
- Edit `.bas` → Scripts compile automatically
- Add docs to `.gbkb/` → Knowledge base updates
---
## Configuration (Optional)
Bootstrap automatically generates `.env` from a blank environment with secure random credentials:
```env
# Auto-generated during bootstrap
DATABASE_URL=postgres://gbuser:RANDOM_PASS@localhost:5432/botserver
DRIVE_SERVER=http://localhost:9000
DRIVE_ACCESSKEY=GENERATED_KEY
```

```bash
# Remove everything and start fresh
rm -rf botserver-stack/
rm .env
./botserver # Will regenerate everything
```
### Check component status
```bash
./botserver status tables # PostgreSQL
./botserver status drive # Drive storage
./botserver status cache # Valkey cache
```
---


# Sessions
Understanding how BotServer manages conversational sessions is crucial for building effective bots.
## What is a Session?
A session represents a single conversation between a user and a bot. It maintains:
- User identity
- Conversation state
- Context and memory
- Active knowledge bases
- Loaded tools
## Session Lifecycle
### 1. Session Creation
Sessions are created when:
- A user visits the web interface (cookie-based)
- A message arrives from a messaging channel
- An API call includes a new session ID
```basic
' Sessions start automatically when user connects
' The start.bas script runs for each new session
TALK "Welcome! This is a new session."
```
### 2. Session Persistence
Sessions persist:
- **Web**: Via browser cookies (30-day default)
- **WhatsApp**: Phone number as session ID
- **Teams**: User ID from Microsoft Graph
- **API**: Client-provided session token
### 3. Session Termination
Sessions end when:
- User explicitly ends conversation
- Timeout period expires (configurable)
- Server restarts (optional persistence)
- Memory limit reached
## Session Storage
### Database Tables
Sessions use these primary tables:
- `users`: User profiles and authentication
- `user_sessions`: Active session records
- `conversations`: Message history
- `bot_memories`: Persistent bot data
### Memory Management
Each session maintains:
```
Session Memory
├── User Variables (SET/GET)
├── Context Strings (SET CONTEXT)
├── Active KBs (USE KB)
├── Loaded Tools (USE TOOL)
├── Suggestions (ADD SUGGESTION)
└── Temporary Data
```
## Session Variables
### User Variables
```basic
' Set a variable for this session
SET "user_name", "John"
SET "preference", "email"
' Retrieve variables
name = GET "user_name"
TALK "Hello, " + name
```
### Bot Memory
```basic
' Bot memory persists across all sessions
SET BOT MEMORY "company_name", "ACME Corp"
' Available to all users
company = GET BOT MEMORY "company_name"
```
## Session Context
Context provides information to the LLM:
```basic
' Add context for better responses
SET CONTEXT "user_profile" AS "Premium customer since 2020"
SET CONTEXT "preferences" AS "Prefers technical documentation"
' Context is automatically included in LLM prompts
response = LLM "What products should I recommend?"
```
## Multi-Channel Sessions
### Channel Identification
Sessions track their origin channel:
```basic
channel = GET SESSION "channel"
IF channel = "whatsapp" THEN
' WhatsApp-specific features
ADD SUGGESTION "Call Support" AS "phone"
ELSE IF channel = "web" THEN
' Web-specific features
SHOW IMAGE "dashboard.png"
END IF
```
### Channel-Specific Data
Each channel provides different session data:
| Channel | Session ID | User Info | Metadata |
|---------|------------|-----------|----------|
| Web | Cookie UUID | IP, Browser | Page URL |
| WhatsApp | Phone Number | Name, Profile | Message Type |
| Teams | User ID | Email, Tenant | Organization |
| Email | Email Address | Name | Subject |
## Session Security
### Authentication States
Sessions can be:
- **Anonymous**: No authentication required
- **Authenticated**: User logged in via directory service
- **Elevated**: Additional verification completed
```basic
auth_level = GET SESSION "auth_level"
IF auth_level <> "authenticated" THEN
TALK "Please log in to continue"
RUN "auth.bas"
END IF
```
### Session Tokens
Secure token generation:
- UUID v4 for session IDs
- Signed JWTs for API access
- Refresh tokens for long-lived sessions
## Session Limits
### Resource Constraints
| Resource | Default Limit | Configurable |
|----------|--------------|--------------|
| Memory per session | 10MB | Yes |
| Context size | 4096 tokens | Yes |
| Active KBs | 10 | Yes |
| Variables | 100 | Yes |
| Message history | 50 messages | Yes |
### Concurrent Sessions
- Server supports 1000+ concurrent sessions
- Database connection pooling
- Valkey caching for performance
- Automatic cleanup of stale sessions
## Session Recovery
### Automatic Recovery
If a session disconnects:
1. State preserved for timeout period
2. User can reconnect with same session ID
3. Conversation continues from last point
```basic
last_message = GET SESSION "last_interaction"
IF last_message <> "" THEN
TALK "Welcome back! We were discussing: " + last_message
END IF
```
### Manual Save/Restore
```basic
' Save session state
state = SAVE SESSION STATE
SET BOT MEMORY "saved_session_" + user_id, state
' Restore later
saved = GET BOT MEMORY "saved_session_" + user_id
RESTORE SESSION STATE saved
```
## Session Analytics
Track session metrics:
- Duration
- Message count
- User satisfaction
- Completion rate
- Error frequency
```basic
' Log session events
LOG SESSION "milestone", "order_completed"
LOG SESSION "error", "payment_failed"
```
## Best Practices
### 1. Session Initialization
```basic
' start.bas - Initialize every session properly
user_id = GET SESSION "user_id"
IF user_id = "" THEN
' First time user
TALK "Welcome! Let me help you get started."
RUN "onboarding.bas"
ELSE
' Returning user
TALK "Welcome back!"
END IF
```
### 2. Session Cleanup
```basic
' Clean up before session ends
ON SESSION END
CLEAR KB ALL
CLEAR SUGGESTIONS
LOG "Session ended: " + SESSION_ID
END ON
```
### 3. Session Handoff
```basic
' Transfer session to human agent
FUNCTION HandoffToAgent()
agent_id = GET AVAILABLE AGENT
TRANSFER SESSION agent_id
TALK "Connecting you to an agent..."
END FUNCTION
```
### 4. Session Persistence
```basic
' Save important data beyond session
important_data = GET "order_details"
SET BOT MEMORY "user_" + user_id + "_last_order", important_data
```
## Debugging Sessions
### Session Inspection
View session data:
```basic
' Debug session information
DEBUG SHOW SESSION
DEBUG SHOW CONTEXT
DEBUG SHOW VARIABLES
```
### Session Logs
All sessions are logged:
- Start/end timestamps
- Messages exchanged
- Errors encountered
- Performance metrics
## Advanced Session Features
### Session Branching
```basic
' Create sub-session for specific task
sub_session = CREATE SUB SESSION
RUN IN SESSION sub_session, "specialized_task.bas"
MERGE SESSION sub_session
```
### Session Templates
```basic
' Apply template to session
APPLY SESSION TEMPLATE "support_agent"
' Automatically loads KBs, tools, and context
```
### Cross-Session Communication
```basic
' Send message to another session
SEND TO SESSION other_session_id, "Notification: Your order is ready"
```
## Summary
Sessions are the foundation of conversational state in BotServer. They:
- Maintain conversation continuity
- Store user-specific data
- Manage resources efficiently
- Enable multi-channel support
- Provide security boundaries
Understanding sessions helps you build bots that feel natural, remember context, and provide personalized experiences across any channel.


BotServer uses a template-based package system to organize bot resources.

| Package | Extension | Purpose |
|-----------|-----------|------|
| Application Interface | `.gbai` | Root directory container for all bot resources |
| Dialog scripts | `.gbdialog` | BASIC-style conversational logic (`.bas` files) |
| Knowledge bases | `.gbkb` | Document collections for semantic search (each folder is a collection for LLM/vector DB) |
| Bot configuration | `.gbot` | CSV configuration file (`config.csv`) |
| UI themes | `.gbtheme` | Simple CSS theming - just place a `default.css` file |
| File storage | `.gbdrive` | General file storage for bot data (not KB) - used by SEND FILE, GET, SAVE AS |
## How Packages Work
BotServer uses a template-based approach:
1. **Templates Directory**: Bot packages are stored in `templates/` as `.gbai` folders
2. **Auto-Discovery**: During bootstrap, the system scans for `.gbai` directories
3. **Bot Creation**: Each `.gbai` package automatically creates a bot instance
4. **Storage Upload**: Template files are uploaded to object storage for persistence
5. **Runtime Loading**: Bots load their resources from storage when serving requests
### Package Structure
```
botname.gbai/
│ ├── start.bas # Entry point script
│ ├── auth.bas # Authentication flow
│ └── *.bas # Other dialog scripts
├── botname.gbkb/ # Knowledge base for LLM
│ ├── collection1/ # Document collection (USE KB "collection1")
│ └── collection2/ # Another collection (USE KB "collection2")
├── botname.gbdrive/ # File storage (not KB)
│ ├── uploads/ # User uploaded files
│ ├── exports/ # Generated files (SAVE AS)
│ └── templates/ # File templates
├── botname.gbot/ # Configuration
│ └── config.csv # Bot parameters
└── botname.gbtheme/ # UI theme (optional)
└── default.css # Theme CSS (CHANGE THEME "default")
```
## Included Templates
BotServer includes 21 pre-built templates for various use cases: business bots (CRM, ERP, BI), communication (announcements, WhatsApp), AI tools (search, LLM utilities), and industry-specific solutions (education, legal, e-commerce).
See [Template Reference](./templates.md) for the complete catalog and detailed descriptions.
## Creating Your Own Package
```
Development → Bootstrap → Storage → Runtime → Updates
     ↓           ↓           ↓         ↓          ↓
 Edit files   Scan .gbai    Upload   Load from   Modify &
in templates    folders    to drive   storage    restart
```
### Development Phase
### Storage Phase
- Uploads all template files to object storage (drive)
- Indexes documents into vector database
- Stores configuration in database
- Ensures persistence across restarts
### Runtime Phase
- Bots load dialogs on-demand from storage
- Configuration is read from database
- Knowledge base queries hit vector database
- Session state maintained in cache
### Update Phase
After bootstrap, package data is distributed across services:
- **Database**: Bot metadata, users, sessions, configuration
- **Object Storage (Drive)**: Template files, uploaded documents, assets
- **Vector Database**: Embeddings for semantic search
- **Cache**: Session cache, temporary data
- **File System**: Optional local caching
## Best Practices
### Version Control
- Commit entire `.gbai` packages to Git
- Use `.gitignore` for generated files
- Packages are versioned in object storage with built-in versioning
- The drive automatically maintains version history
- For larger projects with split BASIC/LLM development teams:
  - Use Git to track source changes
  - Coordinate between dialog scripting and prompt engineering
  - Storage versioning handles production deployments
## Package Component Details
For detailed information about each package type:
- **[.gbdialog Dialogs](./gbdialog.md)** - BASIC scripting and conversation flows
- **[.gbkb Knowledge Base](./gbkb.md)** - Document indexing and semantic search
- **[.gbot Bot Configuration](./gbot.md)** - Configuration parameters and settings
- **[.gbtheme UI Theming](./gbtheme.md)** - Simple CSS theming
- **[.gbdrive File Storage](./gbdrive.md)** - General file storage (not KB)
## Migration from Other Platforms
When migrating from traditional bot platforms, the key is to **let go** of complex logic:
- **Dialog Flows**: Use minimal BASIC scripts - let the LLM handle conversation flow
- **Intents/Entities**: Remove entirely - LLM understands naturally
- **State Machines**: Eliminate - LLM maintains context automatically
- **Knowledge Base**: Simply drop documents into `.gbkb/` folders
- **Complex Rules**: Replace with LLM intelligence
The migration philosophy is to let go ("abrir mão"): release control and trust the LLM. Instead of converting every dialog branch and condition, write the minimum BASIC needed for tools and let the LLM do the heavy lifting. The result is simpler, more maintainable, and more natural conversations.
Example: Instead of 100 lines of intent matching and routing, just:
```basic
' Let LLM understand and respond naturally
answer = LLM "Help the user with their request"
TALK answer
```
## Troubleshooting
### Knowledge Base Not Indexed
- Ensure `.gbkb/` contains subdirectories with documents
- Check vector database is running and accessible
- Verify embedding model is configured
- Review indexing logs for errors


## Included Templates
BotServer includes 21 template `.gbai` packages in the `/templates` directory.
The `.gbdialog` folder contains BASIC-like scripts (`.bas` files) that define conversation logic:
- Simple English-like syntax
- Custom keywords: `TALK`, `HEAR`, `LLM`, `GET BOT MEMORY`, `SET CONTEXT`
- Control flow and variables
- Tool integration
During the Auto Bootstrap process:
- Folder name `default.gbai` → Bot name "Default"
- Folder name `announcements.gbai` → Bot name "Announcements"
3. **Configuration Loading**: Bot configuration from `.gbot/config.csv` is loaded
4. **Template Upload**: All template files are uploaded to object storage (drive)
5. **Dialog Loading**: BASIC scripts from `.gbdialog` are loaded and ready to execute
6. **KB Indexing**: Documents from `.gbkb` are indexed into vector database
## Creating Custom .gbai Packages
To create a custom bot:
1. **Development**: Edit files in `templates/your-bot.gbai/`
2. **Bootstrap**: System creates bot from template
3. **Storage**: Files uploaded to object storage for persistence
4. **Runtime**: Bot loads dialogs and configuration from storage
5. **Updates**: Modify template files and restart to apply changes
## Package Storage
After bootstrap, packages are stored in:
- **Object Storage**: Template files and assets
- **Database**: Bot metadata and configuration
- **Vector Database**: Embeddings from knowledge bases
- **Cache**: Session and cache data
## Naming Conventions


The `.gbdialog` package contains BASIC scripts that define conversation flows, tools, and integrations.
## What is .gbdialog?
`.gbdialog` files are written in a specialized BASIC dialect that controls:
- Conversation flow and logic
- Tool execution and integrations
- LLM prompting and context
- Knowledge base activation
- Session and memory management
- External API calls
## Modern Approach: Let the LLM Work
### Minimal BASIC Philosophy
Instead of complex logic, use the LLM's natural understanding:
```basic
' Example from announcements.gbai/update-summary.bas
' Generate summaries from documents
let text = GET "announcements.gbkb/news/news.pdf"
let resume = LLM "In a few words, resume this: " + text
SET BOT MEMORY "resume", resume
' Example from law.gbai/case.bas
' Load context and let LLM answer questions
text = GET "case-" + cod + ".pdf"
text = "Based on this document, answer the person's questions:\n\n" + text
SET CONTEXT text
TALK "Case loaded. You can ask me anything about the case."
```
## Key Components
### 1. LLM Integration
```basic
' Direct LLM usage for natural conversation
response = LLM "Help the user with their question"
TALK response
' Context-aware responses
SET CONTEXT "user_type" AS "premium customer"
answer = LLM "Provide personalized recommendations"
TALK answer
```
### 2. Tool Execution
```basic
' Define tools with parameters
PARAM name AS string LIKE "John Smith" DESCRIPTION "Customer name"
PARAM email AS string LIKE "john@example.com" DESCRIPTION "Email"
' LLM automatically knows when to call this
SAVE "customers.csv", name, email
TALK "Registration complete!"
```
### 3. Knowledge Base Usage
```basic
' Activate knowledge base collections
USE KB "products"
USE KB "policies"
' LLM searches these automatically when answering
answer = LLM "Answer based on our product catalog and policies"
TALK answer
```
### 4. Session Management
```basic
' Store session data
SET "user_name", name
SET "preferences", "email notifications"
' Retrieve later
saved_name = GET "user_name"
TALK "Welcome back, " + saved_name
```
## Script Structure
### Entry Point: start.bas
Every bot needs a `start.bas` file:
```basic
' Minimal start script - let LLM handle everything
USE KB "company_docs"
response = LLM "Welcome the user and offer assistance"
TALK response
```
### Tool Definitions
Create separate `.bas` files for each tool:
```basic
' enrollment.bas - The LLM knows when to use this
PARAM student_name AS string
PARAM course AS string
DESCRIPTION "Enrolls a student in a course"
SAVE "enrollments.csv", student_name, course, NOW()
TALK "Enrolled successfully!"
```
## Best Practices
### 1. Minimal Logic
```basic
' Good - Let LLM handle the conversation
answer = LLM "Process the user's request appropriately"
TALK answer
' Avoid - Don't micromanage the flow
' IF user_says_this THEN do_that...
```
### 2. Clear Tool Descriptions
```basic
DESCRIPTION "This tool books appointments for customers"
' The LLM uses this description to know when to call the tool
```
### 3. Context Over Conditions
```basic
' Provide context, not rules
SET CONTEXT "business_hours" AS "9AM-5PM weekdays"
response = LLM "Inform about availability"
' LLM naturally understands to mention hours when relevant
```
### 4. Trust the LLM
```basic
' Simple prompt, sophisticated behavior
answer = LLM "Be a helpful customer service agent"
' LLM handles greetings, questions, complaints naturally
```
## Common Patterns
### Document Summarization (from announcements.gbai)
```basic
' Schedule automatic updates
SET SCHEDULE "59 * * * *"
' Fetch and summarize documents
let text = GET "announcements.gbkb/news/news.pdf"
let resume = LLM "In a few words, resume this: " + text
SET BOT MEMORY "resume", resume
```
### Interactive Case Analysis (from law.gbai)
```basic
' Ask for case number
TALK "What is the case number?"
HEAR cod
' Load case document
text = GET "case-" + cod + ".pdf"
IF text THEN
' Set context for LLM to use
text = "Based on this document, answer the person's questions:\n\n" + text
SET CONTEXT text
TALK "Case loaded. Ask me anything about it."
ELSE
TALK "Case not found, please try again."
END IF
```
### Tool Definition Pattern
```basic
' Tool parameters (auto-discovered by LLM)
PARAM name AS string
PARAM email AS string
DESCRIPTION "Enrollment tool"
' Tool logic (called when LLM decides)
SAVE "enrollments.csv", name, email
TALK "Successfully enrolled " + name
```
### Multi-Collection Search
```basic
USE KB "products"
USE KB "reviews"
USE KB "specifications"
answer = LLM "Answer product questions comprehensively"
TALK answer
```
## Advanced Features
### Memory Management
```basic
SET BOT MEMORY "company_policy", policy_text
' Available across all sessions
retrieved = GET BOT MEMORY "company_policy"
```
### External APIs
```basic
result = GET "https://api.example.com/data"
response = LLM "Interpret this data: " + result
TALK response
```
### Suggestions
```basic
ADD SUGGESTION "Schedule Meeting" AS "schedule"
ADD SUGGESTION "View Products" AS "products"
' UI shows these as quick actions
```
## Error Handling
The system handles errors gracefully:
- Syntax errors caught at compile time
- Runtime errors logged but don't crash
- LLM provides fallback responses
- Timeouts prevent infinite operations
## Script Execution
Scripts run in a sandboxed environment with:
- Access to session state
- LLM generation capabilities
- Knowledge base search
- Tool execution rights
- External API access (configured)
## Migration from Traditional Bots
### Old Way (Complex Logic)
```basic
' DON'T DO THIS - 1990s style
' IF INSTR(user_input, "order") > 0 THEN
' IF INSTR(user_input, "status") > 0 THEN
' TALK "Checking order status..."
' ELSE IF INSTR(user_input, "new") > 0 THEN
' TALK "Creating new order..."
' END IF
' END IF
```
### New Way (LLM Intelligence)
```basic
' DO THIS - Let LLM understand naturally
response = LLM "Handle the customer's order request"
TALK response
' LLM understands context and intent automatically
```
The key is to **trust the LLM** and write less code for more intelligent behavior.


# .gbdrive File Storage
The `.gbdrive` system manages file storage and retrieval using object storage (S3-compatible drive).
## What is .gbdrive?
### Uploading Files
```basic
REM Files can be uploaded via API or interface
REM They are stored in the bot's storage bucket
```
### Retrieving Files
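A minimal BASIC sketch of retrieving a stored file and sending it to the user, using the `GET` and `SEND FILE` keywords mentioned in the package overview (the path below is hypothetical):

```basic
' Fetch a generated report from the bot's .gbdrive storage
report = GET "exports/report.pdf"
' Deliver it to the user on the current channel
SEND FILE report
```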
## Storage Backends
Supported storage options:
- **Object Storage** (default): Self-hosted S3-compatible drive
- **AWS S3**: Cloud object storage
- **Local filesystem**: Development and testing
- **Hybrid**: Multiple backends with fallback
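Backend selection is driven by the drive environment variables generated at bootstrap; pointing them at another S3-compatible endpoint such as AWS S3 switches backends without code changes (values below are illustrative):

```env
# Any S3-compatible endpoint works here
DRIVE_SERVER=https://s3.us-east-1.amazonaws.com
DRIVE_ACCESSKEY=my-access-key
DRIVE_SECRET=my-secret-key
```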


Each document is processed into vector embeddings.
### Creating Collections
```basic
USE KB "company-policies"
ADD WEBSITE "https://company.com/docs"
```
### Using Collections
```basic
USE KB "company-policies"
LLM "What is the vacation policy?"
```
### Multiple Collections
```basic
USE KB "policies"
USE KB "procedures"
USE KB "faqs"
REM All active collections contribute to context
```
## Integration with Dialogs
Knowledge bases are automatically used when:
- `USE KB` is called
- Answer mode is set to use documents
- LLM queries benefit from contextual information
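Putting these triggers together, a dialog can activate a collection and ground an LLM answer in it (the collection name and prompt are illustrative):

```basic
' Activate the collection, then let the LLM search it
USE KB "company-policies"
answer = LLM "Summarize the remote work policy"
TALK answer
```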


The `.gbot` package contains configuration files that define bot behavior and parameters.
`.gbot` files configure:
- Bot identity and description
- LLM provider settings
- Answer modes and behavior
- Context management
- Bot behavior settings
- Integration parameters
## Configuration Structure
The primary configuration file is `config.csv`:

```csv
key,value
bot_name,Customer Support Assistant
bot_description,AI-powered support agent
llm_provider,openai
llm_model,gpt-4
temperature,0.7
```
Some settings can be changed at runtime:
```basic
REM Store configuration dynamically
SET BOT MEMORY "preferred_style", "detailed"
```
## Bot Memory
The `SET BOT MEMORY` and `GET BOT MEMORY` keywords allow storing and retrieving bot-specific data that persists across sessions.
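A short sketch of persisting data across sessions with these keywords (the key and value are illustrative):

```basic
' Persist a value visible to every session of this bot
SET BOT MEMORY "support_email", "help@example.com"

' Any later session can read it back
contact = GET BOT MEMORY "support_email"
TALK "You can also reach us at " + contact
```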


# .gbtheme UI Theming
The `.gbtheme` package provides simple CSS-based theming for the bot's web interface.
## What is .gbtheme?
`.gbtheme` is a simplified theming system that uses CSS files to customize the bot's appearance. No complex HTML templates or JavaScript required - just CSS.
## Theme Structure
A theme is simply one or more CSS files in the `.gbtheme` folder:
```
botname.gbtheme/
├── default.css # Main theme file
├── dark.css # Alternative theme
└── holiday.css # Seasonal theme
```
## Using Themes
### Default Theme
Place a `default.css` file in your `.gbtheme` folder:
```css
/* default.css */
:root {
--primary-color: #0d2b55;
--secondary-color: #fff9c2;
--background: #ffffff;
--text-color: #333333;
--font-family: 'Inter', sans-serif;
}
.chat-container {
background: var(--background);
color: var(--text-color);
}
.bot-message {
background: var(--primary-color);
color: white;
}
.user-message {
background: var(--secondary-color);
color: var(--text-color);
}
```
### Changing Themes Dynamically
Use the BASIC keyword to switch themes at runtime:
```basic
' Switch to dark theme
CHANGE THEME "dark"
' Switch back to default
CHANGE THEME "default"
' Seasonal theme
IF month = 12 THEN
CHANGE THEME "holiday"
END IF
```
## CSS Variables
The bot interface uses CSS custom properties that themes can override:
| Variable | Description | Default |
|----------|-------------|---------|
| `--primary-color` | Main brand color | `#0d2b55` |
| `--secondary-color` | Accent color | `#fff9c2` |
| `--background` | Page background | `#ffffff` |
| `--text-color` | Main text | `#333333` |
| `--font-family` | Typography | `system-ui` |
| `--border-radius` | Element corners | `8px` |
| `--spacing` | Base spacing unit | `16px` |
| `--shadow` | Box shadows | `0 2px 4px rgba(0,0,0,0.1)` |
## Simple Examples
### Minimal Theme
```css
/* minimal.css */
:root {
--primary-color: #000000;
--secondary-color: #ffffff;
}
```
### Corporate Theme
```css
/* corporate.css */
:root {
--primary-color: #1e3a8a;
--secondary-color: #f59e0b;
--background: #f8fafc;
--text-color: #1e293b;
--font-family: 'Roboto', sans-serif;
--border-radius: 4px;
}
```
### Dark Theme
```css
/* dark.css */
:root {
--primary-color: #60a5fa;
--secondary-color: #34d399;
--background: #0f172a;
--text-color: #e2e8f0;
}
body {
background: var(--background);
color: var(--text-color);
}
```
## Best Practices
1. **Keep it simple** - Just override CSS variables
2. **Use one file** - Start with a single `default.css`
3. **Test contrast** - Ensure text is readable
4. **Mobile-first** - Design for small screens
5. **Performance** - Keep file size small
## Theme Switching in Scripts
```basic
' User preference
preference = GET USER "theme_preference"
IF preference <> "" THEN
CHANGE THEME preference
END IF
' Time-based themes
hour = GET TIME "hour"
IF hour >= 18 OR hour < 6 THEN
CHANGE THEME "dark"
ELSE
CHANGE THEME "default"
END IF
```
## Integration with config.csv
You can set the default theme in your bot's configuration:
```csv
name,value
theme,default
theme-color1,#0d2b55
theme-color2,#fff9c2
```
These values are available as CSS variables but the `.css` file takes precedence.
## No Build Process Required
Unlike complex theming systems, `.gbtheme` requires no build pipeline:
- No webpack or build tools
- No preprocessors needed
- No template engines
- Just plain CSS files
- Hot reload on change
## Migration from Complex Themes
If migrating from a complex theme system:
1. **Extract colors** - Find your brand colors
2. **Create CSS** - Map to CSS variables
3. **Test interface** - Verify appearance
4. **Remove complexity** - Delete unused assets
The bot's default UI handles layout and functionality; themes only customize appearance.

This chapter provides a concise overview of the GeneralBots package types.

| Package | Reference | Description |
|---------|------|-------------|
| **.gbai** | [gbai.md](gbai.md) | Defines the overall application architecture, metadata, and package hierarchy. |
| **.gbdialog** | [gbdialog.md](gbdialog.md) | Contains BASICstyle dialog scripts that drive conversation flow and tool integration. |
| **.gbdrive** | [gbdrive.md](gbdrive.md) | Manages file storage and retrieval via object storage (S3compatible drive). |
| **.gbkb** | [gbkb.md](gbkb.md) | Handles knowledgebase collections, vector embeddings, and semantic search. |
| **.gbot** | [gbot.md](gbot.md) | Stores bot configuration (CSV) for identity, LLM settings, and runtime parameters. |
| **.gbtheme** | [gbtheme.md](gbtheme.md) | Simple CSS theming: place CSS files such as `default.css` for custom styling. |
## How to Use This Overview

# Bot Templates
BotServer includes 21 pre-built bot templates for various use cases. Each template is a complete `.gbai` package ready to deploy.
## Template Overview
| Template | Purpose | Key Features | Use Case |
|----------|---------|--------------|----------|
| **default.gbai** | Minimal starter bot | Basic config only | Simple bots, learning |
| **template.gbai** | Reference implementation | Complete structure example | Creating new templates |
| **announcements.gbai** | Company announcements | Multiple KB collections, auth flows | Internal communications |
| **ai-search.gbai** | AI-powered search | QR generation, PDF samples | Document retrieval |
| **api-client.gbai** | External API integration | Climate API, REST patterns | Third-party services |
| **backup.gbai** | Backup automation | Server backup scripts, scheduling | System administration |
| **bi.gbai** | Business Intelligence | Admin/user roles, data viz | Executive dashboards |
| **broadcast.gbai** | Mass messaging | Recipient management, scheduling | Marketing campaigns |
| **crawler.gbai** | Web indexing | Site crawling, content extraction | Search engines |
| **crm.gbai** | Customer Relations | Sentiment analysis, tracking | Sales & support |
| **edu.gbai** | Education platform | Course management, enrollment | Online learning |
| **erp.gbai** | Enterprise Planning | Process automation, integrations | Resource management |
| **law.gbai** | Legal assistant | Document templates, regulations | Legal departments |
| **llm-server.gbai** | LLM hosting | Model serving, GPU config | AI infrastructure |
| **llm-tools.gbai** | LLM utilities | Prompt engineering, testing | AI development |
| **marketing.gbai** | Marketing automation | Campaign tools, lead generation | Marketing teams |
| **public-apis.gbai** | Public API access | Weather, news, data sources | Information services |
| **reminder.gbai** | Task reminders | Scheduling, notifications | Personal assistants |
| **store.gbai** | E-commerce | Product catalog, orders | Online stores |
| **talk-to-data.gbai** | Natural language queries | SQL generation, data viz | Data exploration |
| **whatsapp.gbai** | WhatsApp Business | Meta API, media handling | Mobile messaging |
## Template Structure
All templates follow this standard directory layout:
```
template-name.gbai/
├── template-name.gbdialog/ # BASIC dialog scripts
│ ├── start.bas # Entry point (required)
│ └── *.bas # Tool scripts (auto-discovered)
├── template-name.gbkb/ # Knowledge base collections
│ ├── collection1/ # Documents for USE KB "collection1"
│ └── collection2/ # Documents for USE KB "collection2"
├── template-name.gbdrive/ # File storage (not KB)
│ ├── uploads/ # User uploaded files
│ └── exports/ # Generated files
├── template-name.gbot/ # Configuration
│ └── config.csv # Bot parameters
└── template-name.gbtheme/ # UI theme (optional)
└── default.css # Theme CSS
```
## Quick Start Guide
### 1. Choose a Template
Select based on your needs:
- **Simple chat**: Use `default.gbai`
- **Business app**: Choose `crm.gbai`, `bi.gbai`, or `erp.gbai`
- **AI features**: Pick `ai-search.gbai` or `llm-tools.gbai`
- **Communication**: Select `broadcast.gbai` or `whatsapp.gbai`
### 2. Deploy the Template
```bash
# Templates are auto-deployed during bootstrap
# Access at: http://localhost:8080/template-name
```
### 3. Customize Configuration
Edit `template-name.gbot/config.csv`:
```csv
name,value
bot-name,My Custom Bot
welcome-message,Hello! How can I help?
llm-model,gpt-4
temperature,0.7
```
### 4. Add Knowledge Base
Place documents in `.gbkb` folders:
- Each folder becomes a collection
- Use `USE KB "folder-name"` in scripts
- Documents are automatically indexed
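For instance, if the package has a `mybot.gbkb/faq/` folder (name illustrative), a dialog can activate it directly:

```basic
' start.bas - make the faq collection searchable in this session
USE KB "faq"
TALK "Hi! Ask me anything about our products."
```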
### 5. Create Tools (Optional)
Add `.bas` files to `.gbdialog`:
- Each file becomes a tool
- Auto-discovered by the system
- Called automatically by LLM when needed
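A minimal tool sketch, assuming the platform's `HEAR` keyword for capturing user input and `+` for string concatenation (file and variable names are illustrative):

```basic
' order-status.bas - invoked by the LLM when a user asks about an order
TALK "What is your order number?"
HEAR order_number
TALK "Looking up order " + order_number + "..."
```

The LLM decides when to call the tool; the script only performs the action.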
## Template Details
### Core Templates
#### default.gbai
- **Files**: Minimal configuration only
- **Best for**: Learning, simple bots
- **Customization**: Start from scratch
#### template.gbai
- **Files**: Complete example structure
- **Best for**: Reference implementation
- **Customization**: Copy and modify
### Business Applications
#### announcements.gbai
- **Files**: `auth.bas`, `start.bas`, multiple KB collections
- **Collections**: auxiliom, news, toolbix
- **Features**: Authentication, summaries
#### bi.gbai
- **Files**: `bi-admin.bas`, `bi-user.bas`
- **Features**: Role separation, dashboards
- **Data**: Report generation
#### crm.gbai
- **Files**: `analyze-customer-sentiment.bas`, `check.bas`
- **Features**: Sentiment analysis
- **Data**: Customer tracking
#### store.gbai
- **Features**: Product catalog, order processing
- **Integration**: E-commerce workflows
### AI & Search
#### ai-search.gbai
- **Files**: `qr.bas`, PDF samples
- **Features**: QR codes, document search
- **Data**: Sample PDFs included
#### talk-to-data.gbai
- **Features**: Natural language to SQL
- **Integration**: Database connections
- **Output**: Data visualization
### Communication
#### broadcast.gbai
- **Files**: `broadcast.bas`
- **Features**: Mass messaging
- **Scheduling**: Message campaigns
#### whatsapp.gbai
- **Config**: Meta Challenge parameter
- **Features**: WhatsApp API integration
- **Media**: Image/video support
### Development Tools
#### api-client.gbai
- **Files**: `climate.vbs`, `msft-partner-center.bas`
- **Examples**: REST API patterns
- **Integration**: External services
#### llm-server.gbai
- **Config**: Model serving parameters
- **Features**: GPU configuration
- **Purpose**: Local LLM hosting
## Best Practices
### Template Selection
1. **Start small**: Begin with `default.gbai`
2. **Match use case**: Choose aligned templates
3. **Combine features**: Mix templates as needed
4. **Keep originals**: Copy before modifying
### Customization Strategy
#### Minimal BASIC Approach
Instead of complex dialog flows, use simple LLM calls:
```basic
' Traditional: 100+ lines of intent matching
' BotServer: Let LLM handle it
response = LLM prompt
TALK response
```
#### Tool Creation
Only create `.bas` files for specific actions:
- API calls
- Database operations
- File processing
- Calculations
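For example, a calculation-only tool stays tiny (the rate, values, and names are illustrative):

```basic
' vat.bas - compute a 20% VAT for a fixed example amount
amount = 100
vat = amount * 0.20
TALK vat
```

Anything conversational around the calculation is left to the LLM.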
#### Knowledge Base Organization
- One folder per topic/collection
- Name folders clearly
- Keep documents updated
- Index automatically
### Performance Tips
- Remove unused template files
- Index only necessary documents
- Configure appropriate cache settings
- Monitor resource usage
## Creating Custom Templates
To create your own template:
1. **Copy `template.gbai`** as starting point
2. **Define clear purpose** - one template, one job
3. **Structure folders** properly:
- `.gbdialog` for scripts
- `.gbkb` for knowledge collections
- `.gbdrive` for general files
- `.gbot` for configuration
4. **Include examples** - sample data and dialogs
5. **Test thoroughly** - verify all features
## Migration Philosophy
When migrating from traditional platforms:
### Remove Complexity
- ❌ Intent detection → ✅ LLM understands naturally
- ❌ State machines → ✅ LLM maintains context
- ❌ Routing logic → ✅ LLM handles flow
- ❌ Entity extraction → ✅ LLM identifies information
### Embrace Simplicity
- Let LLM handle conversations
- Create tools only for actions
- Use knowledge bases for context
- Trust the system's capabilities
## Template Maintenance
- Templates updated with BotServer releases
- Check repository for latest versions
- Review changes before upgrading
- Test in development first
## Support Resources
- README files in each template folder
- Example configurations included
- Sample knowledge bases provided
- Community forums for discussions

## gbkb Reference
The knowledgebase package provides three main commands:
- **USE KB** Loads and embeds files from the `.gbkb/collection-name` folder into the vector database, making them available for semantic search in the current session. Multiple KBs can be active simultaneously.
- **CLEAR KB** Removes a knowledge base from the current session (files remain embedded in the vector database).
- **ADD WEBSITE** Crawl a website and add its pages to a collection.
**Example:**
```bas
' Add support docs KB - files from work/botname/botname.gbkb/support_docs/ are embedded
USE KB "support_docs"
' Add multiple KBs to the same session
USE KB "policies"
USE KB "procedures"
' Remove a specific KB from session
CLEAR KB "policies"
' Remove all KBs from session
CLEAR KB
```
The vector database retrieves relevant chunks/excerpts from active KBs and injects them into LLM prompts automatically, providing context-aware responses.
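For example, a start script can pair a KB with a free-form LLM call; the retrieved excerpts are injected into the prompt behind the scenes (collection name and prompt text are illustrative):

```basic
' Ground the LLM's answers in the support documentation
USE KB "support_docs"
response = LLM "Answer using our support documentation."
TALK response
```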

# Caching
BotServer includes automatic caching to improve response times and reduce redundant processing.
## How Caching Works
Caching in BotServer is controlled by configuration parameters in `config.csv`. The system automatically caches LLM responses and manages conversation history.
## Configuration
From `default.gbai/default.gbot/config.csv`:
```csv
llm-cache,false # Enable/disable LLM response caching
llm-cache-ttl,3600 # Cache time-to-live in seconds
llm-cache-semantic,true # Use semantic similarity for cache matching
llm-cache-threshold,0.95 # Similarity threshold for cache hits
```
## Conversation History Management
The system manages conversation context through these parameters:
```csv
prompt-history,2 # Number of previous messages to include in context
prompt-compact,4 # Compact conversation after N exchanges
```
### What These Settings Do
- **prompt-history**: Keeps the last 2 exchanges in the conversation context
- **prompt-compact**: After 4 exchanges, older messages are summarized or removed to save tokens
## LLM Response Caching
When `llm-cache` is enabled:
1. User asks a question
2. System checks if a semantically similar question was asked before
3. If similarity > threshold (0.95), returns cached response
4. Otherwise, generates new response and caches it
## Example Usage
```basic
' Caching happens automatically when enabled
USE KB "policies"
' First user asks: "What's the vacation policy?"
' System generates response and caches it
' Second user asks: "Tell me about vacation days"
' System finds cached response (high semantic similarity)
' Returns instantly without calling LLM
```
## Cache Storage
The cache is stored in the cache component (Valkey) when available, providing:
- Fast in-memory access
- Persistence across restarts
- Shared cache across sessions
## Benefits
- **Faster responses** for common questions
- **Lower costs** by reducing LLM API calls
- **Consistent answers** for similar questions
- **Automatic management** with no code changes
## Best Practices
1. **Enable for FAQ bots** - High cache hit rate
2. **Adjust threshold** - Lower for more cache hits, higher for precision
3. **Set appropriate TTL** - Balance freshness vs performance
4. **Monitor cache hits** - Ensure it's providing value
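For instance, an FAQ bot might start from a configuration like this (values are illustrative starting points, not tuned recommendations):

```csv
name,value
llm-cache,true
llm-cache-ttl,86400
llm-cache-semantic,true
llm-cache-threshold,0.90
```

A longer TTL and slightly lower threshold favor cache hits for repetitive questions.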
## Performance Impact
With caching enabled:
- Common questions: <50ms response time
- Cache misses: Normal LLM response time
- Memory usage: Minimal (only stores text responses)
## Clearing Cache
Cache is automatically cleared when:
- TTL expires (after 3600 seconds by default)
- Bot configuration changes
- Knowledge base is updated
- System restarts (if not using persistent cache)
## Important Notes
- Caching is transparent to dialog scripts
- No special commands needed
- Works with all LLM providers
- Respects conversation context
Remember: Caching is configured in `config.csv`, not through BASIC commands!

# Context Compaction
Context compaction automatically manages conversation history to stay within token limits while preserving important information.
## How It Works
Context compaction is controlled by two parameters in `config.csv`:
```csv
prompt-history,2 # Keep last 2 message exchanges
prompt-compact,4 # Compact after 4 total exchanges
```
## Configuration Parameters
### prompt-history
Determines how many previous exchanges to include in the LLM context:
- Default: `2` (keeps last 2 user messages and 2 bot responses)
- Range: 1-10 depending on your token budget
- Higher values = more context but more tokens used
### prompt-compact
Triggers compaction after N exchanges:
- Default: `4` (compacts conversation after 4 back-and-forth exchanges)
- When reached, older messages are summarized or removed
- Helps manage long conversations efficiently
## Automatic Behavior
The system automatically:
1. Tracks conversation length
2. When exchanges exceed `prompt-compact` value
3. Keeps only the last `prompt-history` exchanges
4. Older messages are dropped from context
## Example Flow
With default settings (`prompt-history=2`, `prompt-compact=4`):
```
Exchange 1: User asks, bot responds
Exchange 2: User asks, bot responds
Exchange 3: User asks, bot responds
Exchange 4: User asks, bot responds
Exchange 5: Compaction triggers - only exchanges 3-4 kept
Exchange 6: Only exchanges 4-5 in context
```
## Benefits
- **Automatic management** - No manual intervention needed
- **Token efficiency** - Stay within model limits
- **Relevant context** - Keeps recent, important exchanges
- **Cost savings** - Fewer tokens = lower API costs
## Adjusting Settings
### For longer context:
```csv
prompt-history,5 # Keep more history
prompt-compact,10 # Compact less frequently
```
### For minimal context:
```csv
prompt-history,1 # Only last exchange
prompt-compact,2 # Compact aggressively
```
## Use Cases
### Customer Support
- Lower values work well (customers ask independent questions)
- `prompt-history,1` and `prompt-compact,2`
### Complex Discussions
- Higher values needed (maintain conversation flow)
- `prompt-history,4` and `prompt-compact,8`
### FAQ Bots
- Minimal context needed (each question is standalone)
- `prompt-history,1` and `prompt-compact,2`
## Important Notes
- Compaction is automatic based on config.csv
- No BASIC commands control compaction
- Settings apply to all conversations
- Changes require bot restart
## Best Practices
1. **Start with defaults** - Work well for most use cases
2. **Monitor token usage** - Adjust if hitting limits
3. **Consider conversation type** - Support vs discussion
4. **Test different values** - Find optimal balance
The system handles all compaction automatically - just configure the values that work for your use case!

# Document Indexing
Document indexing in BotServer is automatic. When documents are added to `.gbkb` folders, they are processed and made searchable without any manual configuration.
## Automatic Indexing
The system automatically indexes documents when:
- Files are added to any `.gbkb` folder
- `USE KB` is called for a collection
- Files are modified or updated
- `ADD WEBSITE` crawls new content
## How Indexing Works
1. **Document Detection** - System scans `.gbkb` folders for files
2. **Text Extraction** - Content extracted from PDF, DOCX, HTML, MD, TXT
3. **Chunking** - Text split into manageable segments
4. **Embedding Generation** - Chunks converted to vectors using BGE model
5. **Storage** - Vectors stored for semantic search
## Supported File Types
- **PDF** - Full text extraction
- **DOCX** - Microsoft Word documents
- **TXT** - Plain text files
- **HTML** - Web pages (text only)
- **MD** - Markdown documents
- **CSV** - Structured data
## Website Indexing
To keep web content fresh, schedule regular crawls:
```basic
' In update-docs.bas
SET SCHEDULE "0 2 * * *" ' Run daily at 2 AM
ADD WEBSITE "https://docs.example.com"
' Website is crawled and indexed automatically
```
### Scheduling Options
```basic
SET SCHEDULE "0 * * * *" ' Every hour
SET SCHEDULE "*/30 * * * *" ' Every 30 minutes
SET SCHEDULE "0 0 * * 0" ' Weekly on Sunday
SET SCHEDULE "0 0 1 * *" ' Monthly on the 1st
```
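Reading left to right, the five cron fields are minute, hour, day of month, month, and day of week. A small Python sketch of that breakdown (illustrative only; BotServer's scheduler does the real parsing):

```python
def parse_cron(expr):
    """Split a five-field cron expression into named fields."""
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("expected: minute hour day-of-month month day-of-week")
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    return dict(zip(names, fields))

daily = parse_cron("0 2 * * *")    # minute=0, hour=2 -> daily at 02:00
weekly = parse_cron("0 0 * * 0")   # day_of_week=0 -> Sundays at midnight
```

A `*` means "every value" for that field, which is why `0 2 * * *` fires once per day.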
## Real-Time Updates
Documents are re-indexed automatically when:
- File content changes
- New files appear in folders
- Files are deleted (removed from index)
## Using Indexed Content
Once indexed, content is automatically available:
```basic
USE KB "documentation"
' All documents in the documentation folder are now searchable
' The LLM will use this knowledge when answering questions
```
You don't need to explicitly search - the system does it automatically when generating responses.
## Configuration
Indexing uses settings from `config.csv`:
```csv
embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf
```
The BGE embedding model can be replaced with any compatible model.
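For illustration, a headerless `name,value` file like this can be read into a settings map as follows. This is a sketch; BotServer's own config loader may differ:

```python
import csv
import io

SAMPLE = """embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf
"""

def load_config(text):
    """Parse headerless name,value rows into a settings dict."""
    settings = {}
    for row in csv.reader(io.StringIO(text)):
        if len(row) >= 2:
            # re-join in case a value itself contains commas
            settings[row[0].strip()] = ",".join(row[1:]).strip()
    return settings

config = load_config(SAMPLE)
```

Swapping the embedding model then amounts to changing the `embedding-model` value.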
## Performance Optimization
The system optimizes indexing by:
- Processing only changed files
- Caching embeddings
- Parallel processing when possible
- Incremental updates
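The embedding cache can be pictured as a memo table keyed by chunk text. A sketch of the idea, with a stubbed `embed` function standing in for the real model call (both names are hypothetical):

```python
call_count = 0

def embed(text):
    """Stub standing in for the embedding model call (hypothetical)."""
    global call_count
    call_count += 1
    return [float(len(text))]  # fake one-dimensional "vector"

_cache = {}

def cached_embed(text):
    """Compute an embedding only the first time a chunk is seen."""
    if text not in _cache:
        _cache[text] = embed(text)
    return _cache[text]

first = cached_embed("vacation policy chunk")
second = cached_embed("vacation policy chunk")  # served from the cache
```

Unchanged chunks therefore cost nothing on re-index, which is what makes incremental updates cheap.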
## Example: Knowledge Base Maintenance
Structure your knowledge base:
```
company.gbkb/
├── products/
│ ├── manual-v1.pdf
│ └── specs.docx
├── policies/
│ ├── hr-policy.pdf
│ └── it-policy.md
└── news/
└── updates.html
```
Schedule regular web updates:
```basic
' In maintenance.bas
SET SCHEDULE "0 1 * * *"
' Update news daily
ADD WEBSITE "https://company.com/news"
' Update product docs weekly
IF DAY_OF_WEEK = "Monday" THEN
ADD WEBSITE "https://company.com/products"
END IF
```
## Best Practices
1. **Organize documents** by topic in separate folders
2. **Schedule updates** for web content
3. **Keep files updated** - system handles re-indexing
4. **Monitor folder sizes** - very large collections may impact performance
5. **Use clear naming** - helps with organization
## Troubleshooting
### Documents Not Appearing
- Check file is in a `.gbkb` folder
- Verify file type is supported
- Ensure `USE KB` was called for that collection
### Slow Indexing
- Large PDFs may take time to process
- Consider splitting very large documents
- Check available system resources
### Outdated Content
- Set up scheduled crawls for web content
- Ensure files are being updated
- Check that re-indexing is triggered
Remember: Indexing is automatic - just add documents to folders and use `USE KB` to activate them!
The General Bots system provides **4 essential keywords** for managing Knowledge Bases (KB) and Tools dynamically during conversation sessions:
1. **USE KB** - Load and embed files from `.gbkb` folders into vector database
2. **CLEAR KB** - Remove KB from current session
3. **USE TOOL** - Make a tool available for LLM to call
4. **CLEAR TOOLS** - Remove all tools from current session
---
- **HTML** - Web pages (text only)
- **JSON** - Structured data
### USE KB Keyword
```basic
USE KB "circular"
' Loads the 'circular' KB folder into session
' All documents in that folder are now searchable
USE KB "comunicado"
' Adds another KB to the session
' Both 'circular' and 'comunicado' are now active
```
### CLEAR KB Keyword
```basic
CLEAR KB
' Removes all loaded KBs from current session
' Frees up memory and context space
```
Tools are **callable functions** that the LLM can invoke to perform specific actions.
### Tool Definition
Tools are defined in `.bas` files that generate MCP and OpenAI-compatible tool definitions:
```basic
' weather.bas - becomes a tool automatically
PARAM location AS string
PARAM units AS string DEFAULT "celsius"
DESCRIPTION "Get current weather for a location"
' Tool implementation
weather_data = GET "https://api.weather.com/v1/current?location=" + location
result = LLM "Format this weather data nicely: " + weather_data
TALK result
```
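Conceptually, the `PARAM` and `DESCRIPTION` lines above map onto an OpenAI-style function definition. A hedged Python sketch of that mapping (the actual generator lives inside BotServer and may differ):

```python
def tool_definition(name, description, params):
    """Build an OpenAI-compatible tool definition from PARAM metadata.

    `params` is a list of (name, type, default) tuples; default=None
    marks the parameter as required. Illustrative sketch only.
    """
    properties, required = {}, []
    for pname, ptype, default in params:
        prop = {"type": ptype}
        if default is not None:
            prop["default"] = default
        else:
            required.append(pname)
        properties[pname] = prop
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

# What weather.bas's PARAM/DESCRIPTION lines would become
weather_tool = tool_definition(
    "weather",
    "Get current weather for a location",
    [("location", "string", None), ("units", "string", "celsius")],
)
```

The LLM sees only this JSON description; when it calls the tool, BotServer runs the corresponding `.bas` script.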
### Tool Registration
Tools are registered in two ways:
1. **Auto-discovery** - All `.bas` files in `.gbdialog` folder (except start.bas) become tools
2. **Dynamic Loading** - Via USE TOOL keyword for external tools
### USE TOOL Keyword
```basic
USE TOOL "weather"
' Makes the weather tool available to LLM

USE TOOL "database_query"
' Adds database query tool to session

USE TOOL "email_sender"
' Enables email sending capability
```
### CLEAR TOOLS Keyword
```basic
CLEAR TOOLS
' Removes all tools from current session
' LLM can no longer call external functions
```
### Context Lifecycle
1. **Session Start** - Clean slate, no KB or tools
2. **Load Resources** - USE KB and USE TOOL as needed
3. **Active Use** - LLM uses loaded resources
4. **Clear Resources** - CLEAR KB/CLEAR TOOLS when done
5. **Session End** - Automatic cleanup
// Load Tool
ws.send({
type: "USE TOOL",
tool_name: "weather"
});
## Error Handling
### Common Issues
| Error | Cause | Solution |
|-------|-------|----------|
| `KB_NOT_FOUND` | KB folder doesn't exist | Check folder name and path |
| `VECTORDB_ERROR` | Vector database connection issue | Check vector database service |
| `EMBEDDING_FAILED` | Embedding API error | Check API key and limits |
| `TOOL_NOT_FOUND` | Tool not registered | Verify tool name |
| `TOOL_EXECUTION_ERROR` | Tool failed to execute | Check tool endpoint/logic |
| `MEMORY_LIMIT` | Too many KBs loaded | Clear unused KBs |
```basic
USE KB "product_docs"
USE KB "faqs"

' Enable support tools
USE TOOL "ticket_system"
USE TOOL "knowledge_search"

' Bot now has access to docs and can work with tickets
HEAR user_question
' ... process with KB context and tools ...
```
```basic
USE KB "papers_2024"
USE KB "citations"

' Enable research tools
USE TOOL "arxiv_search"
USE TOOL "citation_formatter"

' Assistant can now search papers and format citations
' ... research session ...
```
```basic
USE KB "hr_policies"
USE KB "it_procedures"

' Enable enterprise tools
USE TOOL "active_directory"
USE TOOL "jira_integration"
USE TOOL "slack_notifier"

' Bot can now query AD, work with Jira, send Slack messages
' ... handle employee request ...

' End of shift cleanup
CLEAR TOOLS
```
## Configuration
### Environment Variables
Configuration is handled automatically through `config.csv`. No manual environment variables needed.
1. Convert function to tool definition
2. Create a `.bas` tool file
3. Implement endpoint/handler
4. Test with USE TOOL
5. Remove static registration
---
## Overview
The BotServer now supports semantic caching for LLM responses using Valkey, a Redis-compatible in-memory cache. This feature can significantly reduce response times and API costs by intelligently caching and reusing previous LLM responses.
## Features
### Cache Not Working
1. Verify cache (Valkey) is running and accessible
2. Check `llm-cache` is set to `true` in config.csv
3. Ensure sufficient memory is available in Valkey
4. Check logs for connection errors
# Semantic Search
Semantic search in BotServer happens automatically when you use `USE KB`. The system searches for relevant information based on meaning, not just keywords, and injects it into the LLM's context.
## How It Works Automatically
1. **User asks a question** - Natural language input
2. **Query converted to vector** - Using the embedding model
3. **Search active collections** - Finds semantically similar content
4. **Inject into context** - Relevant chunks added to LLM prompt
5. **Generate response** - LLM answers using the knowledge
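The retrieval in steps 2-3 boils down to nearest-neighbor search over embedding vectors. A toy sketch of the idea (illustrative only; the real system uses the vector database and the BGE embedding model, and the two-dimensional vectors here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, chunks, k=3):
    """Return the k chunks whose vectors are closest to the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:k]

chunks = [
    {"text": "vacation policy: 20 days off", "vec": [0.9, 0.1]},
    {"text": "dress code: business casual",  "vec": [0.1, 0.9]},
]
# A query embedding near [0.9, 0.1] retrieves the vacation chunk
best = top_k([0.8, 0.2], chunks, k=1)
```

Because similarity is measured between vectors rather than words, "days off" can match "vacation policy" even with no shared keywords.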
## Activating Semantic Search
Simply use `USE KB` to enable search for a collection:
```basic
USE KB "policies"
USE KB "procedures"
' Both collections are now searchable
' No explicit search commands needed
```
When users ask questions, the system automatically searches these collections and provides relevant context to the LLM.
## How Meaning-Based Search Works
Unlike keyword search, semantic search understands meaning:
- "How many days off do I get?" matches "vacation policy"
- "What's the return policy?" matches "refund procedures"
- "I'm feeling sick" matches "medical leave guidelines"
The system uses vector embeddings to find conceptually similar content, even when exact words don't match.
## Configuration
Search behavior is controlled by `config.csv`:
```csv
prompt-history,2 # How many previous messages to include
prompt-compact,4 # Compact context after N exchanges
```
These settings manage how much context the LLM receives, not the search itself.
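As an illustration, with `prompt-history,2` only the two most recent messages would accompany a new question. A minimal sketch of that trimming (not BotServer's actual prompt builder):

```python
def build_context(history, new_message, prompt_history=2):
    """Keep only the last `prompt_history` messages plus the new one."""
    return history[-prompt_history:] + [new_message]

history = ["msg1", "msg2", "msg3", "msg4"]
context = build_context(history, "msg5")  # -> ["msg3", "msg4", "msg5"]
```

Smaller history windows leave more room in the context for retrieved knowledge-base chunks.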
## Multiple Collections
When multiple collections are active, the system searches all of them:
```basic
USE KB "products"
USE KB "support"
USE KB "warranty"
' User: "My laptop won't turn on"
' System searches all three collections for relevant info
```
## Search Quality
The quality of semantic search depends on:
- **Document organization** - Well-structured folders help
- **Embedding model** - BGE model works well, can be replaced
- **Content quality** - Clear, descriptive documents work best
## Real Example
```basic
' In start.bas
USE KB "company-handbook"
' User types: "What's the dress code?"
' System automatically:
' 1. Searches company-handbook for dress code info
' 2. Finds relevant sections about attire
' 3. Injects them into LLM context
' 4. LLM generates natural response with the information
```
## Performance
- Search happens in milliseconds
- No configuration needed
- Cached for repeated queries
- Only active collections are searched
## Best Practices
1. **Activate only needed collections** - Don't overload context
2. **Organize content well** - One topic per folder
3. **Use descriptive text** - Helps with matching
4. **Keep documents updated** - Fresh content = better answers
## Common Misconceptions
**Wrong**: You need to call a search function
**Right**: Search happens automatically with `USE KB`
**Wrong**: You need to configure search parameters
**Right**: It works out of the box
**Wrong**: You need special commands to query
**Right**: Users just ask questions naturally
## Troubleshooting
### Not finding relevant content?
- Check the collection is activated with `USE KB`
- Verify documents are in the right folder
- Ensure content is descriptive
### Too much irrelevant content?
- Use fewer collections simultaneously
- Organize documents into more specific folders
- Clear unused collections with `CLEAR KB`
Remember: The beauty of semantic search in BotServer is its simplicity - just `USE KB` and let the system handle the rest!
# Vector Collections
A **vector collection** is automatically generated from each folder in `.gbkb`. Each folder becomes a searchable collection that the LLM can use during conversations.
## How Collections Work
Each `.gbkb` folder is automatically:
1. Scanned for documents (PDF, DOCX, TXT, HTML, MD)
2. Text extracted from all files
3. Split into chunks for processing
4. Converted to vector embeddings using BGE model (replaceable)
5. Made available for semantic search
## Folder Structure
```
botname.gbkb/
├── policies/ # Becomes "policies" collection
├── procedures/ # Becomes "procedures" collection
└── faqs/ # Becomes "faqs" collection
```
## Using Collections
Simply activate a collection with `USE KB`:
```basic
USE KB "policies"
' The LLM now has access to all documents in the policies folder
' No need to explicitly search - happens automatically during responses
```
## Multiple Collections
Load multiple collections for comprehensive knowledge:
```basic
USE KB "policies"
USE KB "procedures"
USE KB "faqs"
' All three collections are now active
' LLM searches across all when generating responses
```
## Automatic Document Indexing
Documents are indexed automatically when:
- Files are added to `.gbkb` folders
- `USE KB` is called for the first time
- The system detects new or modified files
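Modification detection of this kind is commonly done by comparing file timestamps against the time each file was last indexed. A minimal sketch of the idea (illustrative; BotServer's watcher may work differently):

```python
def changed_files(mtimes, last_indexed):
    """Return files whose mtime is newer than when they were last indexed.

    Files never seen before default to a last-indexed time of 0,
    so they are always picked up.
    """
    return [name for name, mtime in mtimes.items()
            if mtime > last_indexed.get(name, 0.0)]

mtimes = {"policy.pdf": 200.0, "faq.md": 50.0, "new.md": 10.0}
last_indexed = {"policy.pdf": 100.0, "faq.md": 60.0}  # new.md never indexed
to_reindex = changed_files(mtimes, last_indexed)
```

Only the changed or new files go back through the embedding pipeline; untouched files keep their existing vectors.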
## Website Indexing
To keep web content updated, schedule regular crawls:
```basic
' In update-content.bas
SET SCHEDULE "0 3 * * *" ' Run daily at 3 AM
ADD WEBSITE "https://example.com/docs"
' Website content is crawled and added to the collection
```
## How Search Works
When `USE KB` is active:
1. User asks a question
2. System automatically searches relevant collections
3. Finds semantically similar content
4. Injects relevant chunks into LLM context
5. LLM generates response using the knowledge
**Important**: Search happens automatically - you don't need to call any search function. Just activate the KB with `USE KB` and ask questions naturally.
## Embeddings Configuration
The system uses BGE embeddings by default:
```csv
embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf
```
You can replace BGE with any compatible embedding model by changing the model path in `config.csv`.
## Collection Management
- `USE KB "name"` - Activates a collection for the session
- `CLEAR KB` - Removes all active collections
- `CLEAR KB "name"` - Removes a specific collection
## Best Practices
1. **Organize by topic** - One folder per subject area
2. **Name clearly** - Use descriptive folder names
3. **Update regularly** - Schedule website crawls if using web content
4. **Keep files current** - System auto-indexes changes
5. **Don't overload** - Use only necessary collections per session
## Example: Customer Support Bot
```
support.gbkb/
├── products/ # Product documentation
├── policies/ # Company policies
├── troubleshooting/ # Common issues and solutions
└── contact/ # Contact information
```
In your dialog:
```basic
' Activate all support knowledge
USE KB "products"
USE KB "troubleshooting"
' Bot can now answer product questions and solve issues
```
## Performance Notes
- Collections are cached for fast access
- Only active collections consume memory
- Embeddings are generated once and reused
- Changes trigger automatic re-indexing
No manual configuration needed - just organize your documents in folders and use `USE KB` to activate them!
# Chapter 04: gbtheme Reference
Themes control how your bot looks in the web interface. A theme is simply a CSS file that changes colors, fonts, and styles.
## Quick Start
1. Create a `.gbtheme` folder in your bot package
2. Add a CSS file (like `default.css` or `3dbevel.css`)
3. The theme loads automatically when the bot starts
## Theme Structure
```
mybot.gbai/
└── mybot.gbtheme/
├── default.css # Main theme
├── 3dbevel.css # Retro Windows 95 style
└── dark.css # Dark mode variant
```
## The 3D Bevel Theme
The `3dbevel.css` theme gives your bot a classic Windows 95 look with 3D beveled edges:
```css
/* Everything uses monospace font for that retro feel */
body, .card, .popover, .input, .button, .menu, .dialog {
font-family: 'IBM Plex Mono', 'Courier New', monospace !important;
background: #c0c0c0 !important;
color: #000 !important;
border-radius: 0 !important; /* No rounded corners */
box-shadow: none !important;
}
/* 3D bevel effect on panels */
.card, .popover, .menu, .dialog {
border: 2px solid #fff !important; /* Top/left highlight */
border-bottom: 2px solid #404040 !important; /* Bottom shadow */
border-right: 2px solid #404040 !important; /* Right shadow */
padding: 8px !important;
background: #e0e0e0 !important;
}
/* Buttons with 3D effect */
.button, button, input[type="button"], input[type="submit"] {
background: #e0e0e0 !important;
color: #000 !important;
border: 2px solid #fff !important;
border-bottom: 2px solid #404040 !important;
border-right: 2px solid #404040 !important;
padding: 4px 12px !important;
font-weight: bold !important;
}
/* Input fields look recessed */
input, textarea, select {
background: #fff !important;
color: #000 !important;
border: 2px solid #404040 !important; /* Reversed for inset look */
border-bottom: 2px solid #fff !important;
border-right: 2px solid #fff !important;
}
/* Classic scrollbars */
::-webkit-scrollbar {
width: 16px !important;
background: #c0c0c0 !important;
}
::-webkit-scrollbar-thumb {
background: #404040 !important;
border: 2px solid #fff !important;
border-bottom: 2px solid #404040 !important;
border-right: 2px solid #404040 !important;
}
/* Blue hyperlinks like Windows 95 */
a {
color: #0000aa !important;
text-decoration: underline !important;
}
```
## How Themes Work
1. **CSS Variables**: Themes use CSS custom properties for colors
2. **Class Targeting**: Style specific bot UI elements
3. **Important Rules**: Override default styles with `!important`
4. **Font Stacks**: Provide fallback fonts for compatibility
## Creating Your Own Theme
Start with this template:
```css
/* Basic color scheme */
:root {
--primary: #007bff;
--background: #ffffff;
--text: #333333;
--border: #dee2e6;
}
/* Chat container */
.chat-container {
background: var(--background);
color: var(--text);
}
/* Messages */
.message-user {
background: var(--primary);
color: white;
}
.message-bot {
background: var(--border);
color: var(--text);
}
/* Input area */
.chat-input {
border: 1px solid var(--border);
background: var(--background);
}
```
## Switching Themes
Use the `CHANGE THEME` keyword in your BASIC scripts:
```basic
' Switch to retro theme
CHANGE THEME "3dbevel"
' Back to default
CHANGE THEME "default"
' Seasonal themes
month = MONTH(NOW())
IF month = 12 THEN
CHANGE THEME "holiday"
END IF
```
## Common Theme Elements
### Message Bubbles
```css
.message {
padding: 10px;
margin: 5px;
border-radius: 10px;
}
```
### Suggestion Buttons
```css
.suggestion-button {
background: #f0f0f0;
border: 1px solid #ccc;
padding: 8px 16px;
margin: 4px;
cursor: pointer;
}
```
### Input Field
```css
.chat-input {
width: 100%;
padding: 10px;
font-size: 16px;
}
```
## Theme Best Practices
1. **Test on Multiple Browsers**: Ensure compatibility
2. **Use Web-Safe Fonts**: Or include font files
3. **High Contrast**: Ensure readability
4. **Mobile Responsive**: Test on different screen sizes
5. **Keep It Simple**: Don't overcomplicate the CSS
## File Naming
- `default.css` - Loaded automatically as main theme
- `dark.css` - Dark mode variant
- `3dbevel.css` - Special theme (Windows 95 style)
- `[name].css` - Any custom theme name
## Loading Order
1. System default styles
2. Theme CSS file
3. Inline style overrides (if any)
The theme system keeps styling separate from bot logic, making it easy to change the look without touching the code.
# Console Mode
BotServer includes a powerful terminal-based UI for monitoring, debugging, and managing bots directly from the console.
## Overview
Console mode (`--console`) provides a text-based user interface (TUI) in the terminal, giving full system control without a web browser.
## Launching Console Mode
```bash
# Start BotServer with console UI
./botserver --console
# Console with specific bot
./botserver --console --bot edu
# Console with custom refresh rate
./botserver --console --refresh 500
```
## UI Tree Structure
The console interface uses a tree-based layout for navigation and display:
```
┌─ BotServer Console ────────────────────────────────┐
│ │
│ ▼ System Status │
│ ├─ CPU: 45% │
│ ├─ Memory: 2.3GB / 8GB │
│ ├─ Uptime: 2d 14h 23m │
│ └─ Active Sessions: 127 │
│ │
│ ▼ Bots │
│ ├─ ● default (8 sessions) │
│ ├─ ● edu (45 sessions) │
│ ├─ ○ crm (offline) │
│ └─ ● announcements (74 sessions) │
│ │
│ ▶ Services │
│ ▶ Logs │
│ ▶ Sessions │
│ │
└─────────────────────────────────────────────────────┘
[q]uit [↑↓]navigate [←→]expand [enter]select [h]elp
```
## Navigation
### Keyboard Controls
| Key | Action |
|-----|--------|
| `↑` `↓` | Navigate up/down |
| `←` `→` | Collapse/expand nodes |
| `Enter` | Select/activate item |
| `Tab` | Switch panels |
| `Space` | Toggle item |
| `q` | Quit console |
| `h` | Show help |
| `f` | Filter/search |
| `r` | Refresh display |
| `/` | Quick search |
| `Esc` | Cancel operation |
### Mouse Support
When terminal supports mouse:
- Click to select items
- Double-click to expand/collapse
- Scroll wheel for navigation
- Right-click for context menu
## Console Components
### System Monitor
Real-time system metrics display:
```
System Resources
├─ CPU
│ ├─ Core 0: 23%
│ ├─ Core 1: 45%
│ ├─ Core 2: 67%
│ └─ Core 3: 12%
├─ Memory
│ ├─ Used: 4.2GB
│ ├─ Free: 3.8GB
│ └─ Swap: 0.5GB
└─ Disk
├─ /: 45GB/100GB
└─ /data: 234GB/500GB
```
### Bot Manager
Interactive bot control panel:
```
Bots Management
├─ default.gbai [RUNNING]
│ ├─ Status: Active
│ ├─ Sessions: 23
│ ├─ Memory: 234MB
│ ├─ Requests/min: 45
│ └─ Actions
│ ├─ [R]estart
│ ├─ [S]top
│ ├─ [C]onfig
│ └─ [L]ogs
```
### Service Dashboard
Monitor all services:
```
Services
├─ Database [✓]
│ ├─ Type: PostgreSQL
│ ├─ Connections: 12/100
│ └─ Response: 2ms
├─ Cache [✓]
│ ├─ Type: Valkey
│ ├─ Memory: 234MB
│ └─ Hit Rate: 94%
├─ Storage [✓]
│ ├─ Type: Drive (S3-compatible)
│ ├─ Buckets: 21
│ └─ Usage: 45GB
└─ Vector DB [✗]
└─ Status: Offline
```
### Session Viewer
Live session monitoring:
```
Active Sessions
├─ Session #4f3a2b1c
│ ├─ User: john@example.com
│ ├─ Bot: edu
│ ├─ Duration: 00:12:34
│ ├─ Messages: 23
│ └─ State: active
├─ Session #8d9e7f6a
│ ├─ User: anonymous
│ ├─ Bot: default
│ ├─ Duration: 00:03:21
│ ├─ Messages: 7
│ └─ State: idle
```
### Log Viewer
Filtered log display with levels:
```
Logs [ERROR|WARN|INFO|DEBUG]
├─ 12:34:56 [INFO] Bot 'edu' started
├─ 12:34:57 [DEBUG] Session created: 4f3a2b1c
├─ 12:34:58 [WARN] Cache miss for key: user_123
├─ 12:35:01 [ERROR] Database connection timeout
│ └─ Details: Connection pool exhausted
├─ 12:35:02 [INFO] Reconnecting to database...
```
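The level filter shown above can be sketched in a few lines of Python. The `HH:MM:SS [LEVEL] message` line shape is taken from the example; the severity ordering of levels is an assumption:

```python
# Minimal sketch of a console-style level filter; assumes the
# "HH:MM:SS [LEVEL] message" line shape shown above.
LEVELS = ["ERROR", "WARN", "INFO", "DEBUG"]  # most to least severe

def filter_logs(lines, max_level="INFO"):
    """Keep lines whose level is at or above max_level in severity."""
    allowed = LEVELS[: LEVELS.index(max_level) + 1]
    return [ln for ln in lines if any(f"[{lvl}]" in ln for lvl in allowed)]

logs = [
    "12:34:56 [INFO] Bot 'edu' started",
    "12:34:57 [DEBUG] Session created: 4f3a2b1c",
    "12:35:01 [ERROR] Database connection timeout",
]
print(filter_logs(logs, "INFO"))
```

Passing `"DEBUG"` keeps everything; passing `"ERROR"` keeps only error lines.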
## Console Features
### Real-time Updates
- Configurable auto-refresh interval (100 ms to 10 s)
- WebSocket-based live data
- Efficient diff rendering
- Smooth scrolling
### Interactive Commands
```
Commands (press : to enter command mode)
:help Show help
:quit Exit console
:restart <bot> Restart specific bot
:stop <bot> Stop bot
:start <bot> Start bot
:clear Clear screen
:export <file> Export logs
:filter <pattern> Filter display
:connect <session> Connect to session
```
### BASIC Debugger Integration
Debug BASIC scripts directly in console:
```
BASIC Debugger - enrollment.bas
├─ Breakpoints
│ ├─ Line 12: PARAM validation
│ └─ Line 34: SAVE operation
├─ Variables
│ ├─ name: "John Smith"
│ ├─ email: "john@example.com"
│ └─ course: "Computer Science"
├─ Call Stack
│ ├─ main()
│ ├─ validate_input()
│ └─ > save_enrollment()
└─ Controls
[F5]Run [F10]Step [F11]Into [F9]Break
```
### Performance Monitoring
```
Performance Metrics
├─ Response Times
│ ├─ P50: 45ms
│ ├─ P90: 123ms
│ ├─ P95: 234ms
│ └─ P99: 567ms
├─ Throughput
│ ├─ Current: 234 req/s
│ ├─ Average: 189 req/s
│ └─ Peak: 456 req/s
└─ Errors
├─ Rate: 0.02%
└─ Last: 2 min ago
```
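As a rough sketch of where numbers like these come from, percentiles can be computed from raw response-time samples with the nearest-rank method. The sample values below are made up for illustration:

```python
# Nearest-rank percentile over response times in milliseconds;
# no external dependencies, sample data is illustrative only.
import math

def percentile(samples, p):
    """Nearest-rank percentile for p in (0, 100]."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

times = [12, 45, 45, 67, 89, 123, 150, 234, 300, 567]
for p in (50, 90, 95, 99):
    print(f"P{p}: {percentile(times, p)}ms")
```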
## Console Layouts
### Split Views
```
┌─ Bots ─────────┬─ Logs ──────────┐
│ │ │
│ ● default │ [INFO] Ready │
│ ● edu │ [DEBUG] Session │
│ │ │
├─ Sessions ─────┼─ Metrics ────────┤
│ │ │
│ 4f3a2b1c │ CPU: 45% │
│ 8d9e7f6a │ RAM: 2.3GB │
│ │ │
└────────────────┴──────────────────┘
```
### Focus Mode
Press `F` to focus on single component:
```
┌─ Focused: Log Viewer ───────────────┐
│ │
│ 12:35:01 [ERROR] Connection failed │
│ Stack trace: │
│ at connect() line 234 │
│ at retry() line 123 │
│ at main() line 45 │
│ │
│ 12:35:02 [INFO] Retrying... │
│ │
└──────────────────────────────────────┘
[ESC] to exit focus mode
```
## Color Schemes
### Default Theme
- Background: Terminal default
- Text: White
- Headers: Cyan
- Success: Green
- Warning: Yellow
- Error: Red
- Selection: Blue background
### Custom Themes
Configure in `~/.botserver/console.toml`:
```toml
[colors]
background = "#1e1e1e"
foreground = "#d4d4d4"
selection = "#264f78"
error = "#f48771"
warning = "#dcdcaa"
success = "#6a9955"
```
## Console Configuration
### Settings File
`~/.botserver/console.toml`:
```toml
[general]
refresh_rate = 500
mouse_support = true
unicode_borders = true
time_format = "24h"
[layout]
default = "split"
show_tree_lines = true
indent_size = 2
[shortcuts]
quit = "q"
help = "h"
filter = "f"
```
## Performance Considerations
### Terminal Requirements
- Minimum 80x24 characters
- 256 color support recommended
- UTF-8 encoding for borders
- Fast refresh rate capability
### Optimization Tips
- Use `--refresh 1000` for slower terminals
- Disable unicode with `--ascii`
- Limit log tail with `--log-lines 100`
- Filter unnecessary components
## Remote Console
### SSH Access
```bash
# SSH with console auto-start
ssh user@server -t "./botserver --console"
# Persistent session with tmux
ssh user@server -t "tmux attach || tmux new './botserver --console'"
```
### Security
- Read-only mode: `--console-readonly`
- Audit logging of console actions
- Session timeout configuration
- IP-based access control
## Troubleshooting
### Display Issues
1. **Garbled characters**
- Set `TERM=xterm-256color`
- Ensure UTF-8 locale
- Try `--ascii` mode
2. **Slow refresh**
- Increase refresh interval
- Reduce displayed components
- Check network latency (remote)
3. **Colors not working**
- Verify terminal color support
- Check TERM environment
- Try different terminal emulator
## Integration with Development Tools
### VSCode Integration
- Terminal panel for console
- Task runner integration
- Debug console connection
### Tmux/Screen
- Persistent console sessions
- Multiple console windows
- Session sharing for collaboration
## Console API
### Programmatic Access
```python
# Python example
from botserver_console import Console

console = Console("localhost:8080")
console.connect()

# Get system stats
stats = console.get_system_stats()
print(f"CPU: {stats.cpu}%")

# Monitor sessions
for session in console.watch_sessions():
    print(f"Session {session.id}: {session.state}")
```
## Summary
Console mode provides a powerful, efficient interface for managing BotServer without leaving the terminal. It is well suited for server administration, debugging, and monitoring in headless environments or over SSH connections.


@ -0,0 +1,366 @@
# Desktop Mode & Mobile Apps
BotServer includes a complete desktop interface and mobile app support for rich conversational experiences beyond simple chat.
## Overview
Desktop mode (`--desktop`) transforms BotServer into a full-featured workspace with integrated tools for communication, collaboration, and productivity.
## Launching Desktop Mode
```bash
# Start BotServer in desktop mode
./botserver --desktop
# With custom port
./botserver --desktop --port 8080
# Mobile-optimized interface
./botserver --mobile
```
## Desktop Components
### Chat Interface (`/chat`)
The main conversational interface with enhanced features:
- Multi-session support
- File attachments and sharing
- Rich media rendering
- Conversation history
- Quick actions and suggestions
- Voice input/output
- Screen sharing capabilities
### Attendant (`/attendant`)
AI-powered personal assistant features:
- Calendar integration
- Task management
- Reminders and notifications
- Meeting scheduling
- Contact management
- Email summaries
- Daily briefings
### Drive Integration (`/drive`)
File management and storage interface:
- Browse object storage buckets
- Upload/download files
- Share documents with chat
- Preview documents
- Organize bot resources
- Version control for files
- Collaborative editing support
### Mail Client (`/mail`)
Integrated email functionality:
- Send/receive emails through bots
- AI-powered email composition
- Smart inbox filtering
- Email-to-task conversion
- Automated responses
- Template management
- Thread summarization
### Meeting Room (`/meet`)
Video conferencing and collaboration:
- WebRTC-based video calls
- Screen sharing
- Recording capabilities
- AI meeting notes
- Real-time transcription
- Meeting bot integration
- Calendar sync
### Task Management (`/tasks`)
Project and task tracking:
- Kanban boards
- Sprint planning
- Time tracking
- Bot automation for tasks
- Progress reporting
- Team collaboration
- Integration with chat
### Account Settings (`/account.html`)
User profile and preferences:
- Profile management
- Authentication settings
- API keys management
- Subscription details
- Usage statistics
- Privacy controls
- Data export
### System Settings (`/settings.html`)
Application configuration:
- Theme customization
- Language preferences
- Notification settings
- Bot configurations
- Integration settings
- Performance tuning
- Debug options
## Mobile Application
### Progressive Web App (PWA)
BotServer desktop mode works as a PWA:
- Install on mobile devices
- Offline capabilities
- Push notifications
- Native app experience
- Responsive design
- Touch-optimized UI
### Mobile Features
- Swipe gestures for navigation
- Voice-first interaction
- Location sharing
- Camera integration
- Contact integration
- Mobile-optimized layouts
- Reduced data usage mode
### Installation on Mobile
#### Android
1. Open BotServer URL in Chrome
2. Tap "Add to Home Screen"
3. Accept installation prompt
4. Launch from home screen
#### iOS
1. Open in Safari
2. Tap Share button
3. Select "Add to Home Screen"
4. Name the app and add
## Desktop Interface Structure
```
web/desktop/
├── index.html # Main desktop dashboard
├── account.html # User account management
├── settings.html # Application settings
├── chat/ # Chat interface components
├── attendant/ # AI assistant features
├── drive/ # File management
├── mail/ # Email client
├── meet/ # Video conferencing
├── tasks/ # Task management
├── css/ # Stylesheets
├── js/ # JavaScript modules
└── public/ # Static assets
```
## Features by Screen
### Dashboard (index.html)
- Widget-based layout
- Quick access tiles
- Recent conversations
- Pending tasks
- Calendar view
- System notifications
- Bot status indicators
### Chat Screen
- Conversation list
- Message composer
- Rich text formatting
- Code syntax highlighting
- File attachments
- Emoji picker
- Message reactions
- Thread support
### Drive Screen
- File browser
- Folder navigation
- Upload queue
- Preview pane
- Sharing controls
- Storage metrics
- Search functionality
### Mail Screen
- Inbox/Sent/Drafts
- Message composer
- Rich HTML editor
- Attachment handling
- Contact autocomplete
- Filter and labels
- Bulk operations
## Responsive Design
### Breakpoints
```css
/* Mobile: < 768px */
/* Tablet: 768px - 1024px */
/* Desktop: > 1024px */
/* Wide: > 1440px */
```
### Adaptive Layouts
- Mobile: Single column, bottom navigation
- Tablet: Two-column with collapsible sidebar
- Desktop: Three-column with persistent panels
- Wide: Multi-panel with docked windows
## Theming
### CSS Variables
```css
:root {
--primary-color: #0d2b55;
--secondary-color: #fff9c2;
--background: #ffffff;
--text-color: #333333;
--border-color: #e0e0e0;
}
```
### Dark Mode
Automatic dark mode based on:
- System preferences
- Time of day
- User selection
- Per-component overrides
## Performance
### Optimization Strategies
- Lazy loading of components
- Virtual scrolling for long lists
- Image optimization and CDN
- Code splitting by route
- Service worker caching
- WebAssembly for compute tasks
### Resource Management
- Maximum 50MB cache size
- Automatic cleanup of old data
- Compressed asset delivery
- Efficient WebSocket usage
- Battery-aware processing
## Security
### Authentication
- OAuth2/OIDC support
- Biometric authentication (mobile)
- Session management
- Secure token storage
- Auto-logout on inactivity
### Data Protection
- End-to-end encryption for sensitive data
- Local storage encryption
- Secure WebSocket connections
- Content Security Policy
- XSS protection
## Offline Capabilities
### Service Worker
- Cache-first strategy for assets
- Network-first for API calls
- Background sync for messages
- Offline message queue
- Automatic retry logic
### Local Storage
- IndexedDB for structured data
- localStorage for preferences
- Cache API for resources
- File system access (desktop)
## Integration APIs
### JavaScript SDK
```javascript
// Initialize desktop mode
const desktop = new BotDesktop({
server: 'ws://localhost:8080',
theme: 'auto',
modules: ['chat', 'drive', 'mail']
});
// Subscribe to events
desktop.on('message', (msg) => {
console.log('New message:', msg);
});
// Send commands
desktop.chat.send('Hello from desktop!');
```
## Debugging
### Developer Tools
- Console logging levels
- Network request inspector
- WebSocket frame viewer
- Performance profiler
- Memory leak detector
- Component tree inspector
### Debug Mode
```bash
# Enable debug mode
./botserver --desktop --debug
# Verbose logging
./botserver --desktop --verbose
```
## Deployment
### Web Server Configuration
```nginx
location /desktop {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
```
### CDN Setup
- Static assets on CDN
- Dynamic content from server
- Geographic distribution
- Cache invalidation strategy
## Troubleshooting
### Common Issues
1. **Blank screen on load**
- Check JavaScript console
- Verify WebSocket connection
- Clear browser cache
2. **Slow performance**
- Reduce active modules
- Clear local storage
- Check network latency
3. **PWA not installing**
- Ensure the site is served over HTTPS
- Provide a valid manifest.json
- Confirm the service worker is registered
## Future Enhancements
- Native mobile apps (React Native)
- Desktop app (Electron)
- AR/VR interfaces
- Voice-only mode
- Collaborative whiteboards
- Plugin marketplace
## Summary
Desktop mode transforms BotServer from a simple chatbot platform into a comprehensive AI-powered workspace. With mobile PWA support, users can access all features from any device while maintaining a consistent, responsive experience.


@ -1,32 +1,162 @@
# Web Interface
The **gbtheme** web interface provides the front-end experience for end users through a simple REST API architecture.
## Interface Components
| Component | Purpose |
|-----------|---------|
| Chat UI | Main conversation interface with input and message display |
| REST API | HTTP endpoints for message exchange |
| CSS Theme | Visual customization through CSS variables |
## REST API Endpoints
The bot communicates through standard HTTP REST endpoints:
```
POST /api/message Send user message
GET /api/session Get session info
POST /api/upload Upload files
GET /api/history Get conversation history
```
## Message Flow
1. **User Input** - User types message in chat interface
2. **API Call** - Frontend sends POST to `/api/message`
3. **Processing** - Server processes with LLM and tools
4. **Response** - JSON response with bot message
5. **Display** - Frontend renders response in chat
## Simple Integration
```javascript
// Send message to bot
async function sendMessage(text) {
const response = await fetch('/api/message', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ message: text })
});
const data = await response.json();
displayMessage(data.response);
}
```
## Theme Customization
The interface uses CSS variables for easy customization:
```css
/* In your theme's default.css */
:root {
--primary-color: #0d2b55;
--secondary-color: #fff9c2;
--background: #ffffff;
--text-color: #333333;
--font-family: 'Inter', sans-serif;
}
```
## Response Format
The API returns simple JSON responses:
```json
{
"response": "Bot message text",
"session_id": "uuid",
"timestamp": "2024-01-01T12:00:00Z",
"tools_used": ["weather", "calendar"]
}
```
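A minimal sketch of consuming this payload; the field names are taken from the example above and may vary by deployment:

```python
# Parse the bot's JSON response; field names mirror the example payload.
import json

raw = """{
  "response": "Bot message text",
  "session_id": "uuid",
  "timestamp": "2024-01-01T12:00:00Z",
  "tools_used": ["weather", "calendar"]
}"""

data = json.loads(raw)
print(data["response"])
# tools_used may be absent or empty, so guard the access.
if data.get("tools_used"):
    print("tools:", ", ".join(data["tools_used"]))
```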
## File Uploads
Users can upload files through the standard multipart form:
```
POST /api/upload
Content-Type: multipart/form-data
Returns:
{
"file_id": "uuid",
"status": "processed",
"extracted_text": "..."
}
```
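For clients without a form library, the multipart body can be assembled with only the standard library. The `file` field name and the filename below are assumptions for illustration, not a documented contract:

```python
# Build a multipart/form-data request body by hand; the field name and
# filename are illustrative assumptions, not part of the documented API.
import uuid

def build_multipart(field, filename, payload: bytes):
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    content_type = f"multipart/form-data; boundary={boundary}"
    return content_type, body

ctype, body = build_multipart("file", "notes.pdf", b"%PDF-1.4 ...")
print(ctype)
```

The returned `content_type` goes in the request header and `body` is sent as the POST payload.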
## Session Management
Sessions are handled automatically through cookies or tokens:
- Session created on first message
- Persisted across conversations
- Context maintained server-side
## Mobile Responsive
The default interface is mobile-first:
- Touch-friendly input
- Responsive layout
- Optimized for small screens
- Progressive enhancement
## Accessibility
Built-in accessibility features:
- Keyboard navigation
- Screen reader support
- High contrast mode support
- Focus indicators
## Performance
Optimized for speed:
- Minimal JavaScript
- CSS-only animations
- Lazy loading
- CDN-ready assets
## Browser Support
Works on all modern browsers:
- Chrome 90+
- Firefox 88+
- Safari 14+
- Edge 90+
- Mobile browsers
## Integration Examples
### Embed in Website
```html
<iframe src="https://bot.example.com"
width="400"
height="600">
</iframe>
```
### Custom Frontend
```javascript
// Use any frontend framework
const BotClient = {
async send(message) {
return fetch('/api/message', {
method: 'POST',
body: JSON.stringify({ message })
});
}
};
```
## Security
- CORS configured for embedding
- CSRF protection on POST requests
- Rate limiting per session
- Input sanitization
- XSS prevention
All theming is done through simple CSS files as described in the [Theme Structure](./structure.md).


@ -31,6 +31,6 @@ ENDIF
## Best Practices
* Keep scripts short; split complex flows into multiple `.gbdialog` files.
* Use `SET BOT MEMORY` for data that must persist across sessions.
* Avoid heavy computation inside the script; offload to LLM or external tools.


@ -1,196 +0,0 @@
# USE_KB
Load and activate a knowledge base collection for the current conversation.
## Syntax
```basic
USE_KB kb_name
```
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `kb_name` | String | Name of the knowledge base collection to load |
## Description
The `USE_KB` keyword loads a knowledge base collection into the current session's context, making its documents searchable via `FIND` and available to the LLM for context-aware responses. Knowledge bases are vector collections stored in Qdrant containing indexed documents, FAQs, or other reference materials.
Multiple knowledge bases can be active simultaneously, allowing the bot to access diverse information sources.
## Examples
### Load Single Knowledge Base
```basic
USE_KB "product-docs"
answer = FIND "installation guide"
TALK answer
```
### Load Multiple Knowledge Bases
```basic
USE_KB "company-policies"
USE_KB "hr-handbook"
USE_KB "benefits-guide"
question = HEAR "What's the vacation policy?"
answer = FIND question
TALK answer
```
### Conditional KB Loading
```basic
department = HEAR "Which department are you from?"
IF department = "engineering" THEN
USE_KB "technical-docs"
USE_KB "api-reference"
ELSE IF department = "sales" THEN
USE_KB "product-catalog"
USE_KB "pricing-guide"
ELSE
USE_KB "general-info"
END IF
```
### Dynamic KB Selection
```basic
topic = DETECT_TOPIC(user_message)
kb_name = "kb_" + topic
USE_KB kb_name
```
## Knowledge Base Types
Common KB collections include:
- **Documentation**: Product manuals, guides, tutorials
- **FAQs**: Frequently asked questions and answers
- **Policies**: Company policies, procedures, guidelines
- **Products**: Catalogs, specifications, pricing
- **Support**: Troubleshooting guides, known issues
- **Legal**: Terms, contracts, compliance documents
## KB Naming Convention
Knowledge bases follow this naming pattern:
- Format: `category_subcategory_version`
- Examples: `docs_api_v2`, `support_faq_current`, `products_2024`
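As an illustration only, the convention above could be checked with a small validator. Note that the example `products_2024` shows the middle segment is optional, so the pattern below accepts two or three segments; this is a documentation convention, not something the server enforces:

```python
# Hypothetical validator for the category_subcategory_version naming
# convention; accepts 2 or 3 lowercase segments separated by underscores.
import re

KB_NAME = re.compile(r"^[a-z]+(_[a-z0-9]+){1,2}$")

def is_valid_kb_name(name):
    return bool(KB_NAME.match(name))

print(is_valid_kb_name("docs_api_v2"))
```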
## Loading Behavior
When `USE_KB` is called:
1. Checks if KB exists in Qdrant
2. Loads vector embeddings into memory
3. Adds to session's active KB list
4. Makes content searchable
5. Updates context for LLM
## Memory Management
- KBs remain loaded for entire session
- Use `CLEAR_KB` to unload specific KB
- Maximum 10 KBs active simultaneously
- Automatically cleared on session end
## Error Handling
```basic
TRY
USE_KB "special-docs"
TALK "Knowledge base loaded successfully"
CATCH "kb_not_found"
TALK "That knowledge base doesn't exist"
USE_KB "default-docs" ' Fallback
CATCH "kb_error"
LOG "Failed to load KB"
TALK "Having trouble accessing documentation"
END TRY
```
## Performance Considerations
- First load may take 1-2 seconds
- Subsequent queries are cached
- Large KBs (>10,000 documents) may impact response time
- Consider loading only necessary KBs
## KB Content Management
### Creating Knowledge Bases
KBs are created from document collections in `.gbkb` packages:
```
mybot.gbkb/
├── docs/ # Becomes "docs" KB
├── faqs/ # Becomes "faqs" KB
└── policies/ # Becomes "policies" KB
```
### Updating Knowledge Bases
- KBs are indexed during bot deployment
- Updates require re-indexing
- Use version suffixes for updates
## Best Practices
1. **Load Relevant KBs Early**: Load at conversation start
2. **Use Descriptive Names**: Make KB purpose clear
3. **Limit Active KBs**: Don't load unnecessary collections
4. **Clear When Done**: Remove KBs when changing context
5. **Handle Missing KBs**: Always have fallback options
6. **Version Your KBs**: Track KB updates with versions
## Integration with Other Keywords
- **[FIND](./keyword-find.md)**: Search within loaded KBs
- **[CLEAR_KB](./keyword-clear-kb.md)**: Unload knowledge bases
- **[ADD_WEBSITE](./keyword-add-website.md)**: Create KB from website
- **[LLM](./keyword-llm.md)**: Use KB context in responses
## Advanced Usage
### KB Metadata
```basic
kb_info = GET_KB_INFO("product-docs")
TALK "KB contains " + kb_info.doc_count + " documents"
TALK "Last updated: " + kb_info.update_date
```
### Conditional Loading Based on Language
```basic
language = GET_USER_LANGUAGE()
USE_KB "docs_" + language ' docs_en, docs_es, docs_fr
```
### KB Search with Filters
```basic
USE_KB "all-products"
' Search only recent products
results = FIND_WITH_FILTER "wireless", "year >= 2023"
```
## Troubleshooting
### KB Not Loading
- Verify KB name is correct
- Check if KB was properly indexed
- Ensure Qdrant service is running
- Review bot logs for errors
### Slow Performance
- Reduce number of active KBs
- Optimize KB content (remove duplicates)
- Check Qdrant server resources
- Consider KB partitioning
### Incorrect Results
- Verify KB contains expected content
- Check document quality
- Review indexing settings
- Test with specific queries
## Implementation
Located in `src/basic/keywords/use_kb.rs`
The implementation connects to Qdrant vector database and manages KB collections per session.


@ -173,4 +173,4 @@ END IF
Located in `src/basic/keywords/add_suggestion.rs`
Uses cache component for storage when available, falls back to in-memory storage.


@ -1,242 +0,0 @@
# USE_TOOL
Add and activate a tool (custom dialog script) for the current conversation.
## Syntax
```basic
USE_TOOL "tool-name.bas"
```
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `tool-name.bas` | String | Path to the tool's BASIC script file |
## Description
The `USE_TOOL` keyword dynamically loads a tool definition from a `.bas` file and makes its functionality available in the current conversation. Tools are reusable dialog scripts that extend the bot's capabilities with custom functions, API integrations, or specialized workflows.
Once loaded, the tool's keywords and functions become available for use in the conversation until the session ends or the tool is explicitly cleared.
## Examples
### Load a Simple Tool
```basic
USE_TOOL "weather.bas"
' Now weather functions are available
result = GET_WEATHER("New York")
TALK result
```
### Load Multiple Tools
```basic
USE_TOOL "calculator.bas"
USE_TOOL "translator.bas"
USE_TOOL "scheduler.bas"
' All three tools are now active
sum = CALCULATE("150 + 250")
translated = TRANSLATE(sum, "Spanish")
SCHEDULE_REMINDER(translated, "tomorrow")
```
### Conditional Tool Loading
```basic
task = HEAR "What would you like to do?"
IF task CONTAINS "email" THEN
USE_TOOL "email-composer.bas"
ELSE IF task CONTAINS "calendar" THEN
USE_TOOL "calendar-manager.bas"
ELSE IF task CONTAINS "document" THEN
USE_TOOL "document-processor.bas"
END IF
```
### Tool with Parameters
```basic
' Some tools accept configuration
USE_TOOL "api-client.bas"
CONFIGURE_API("https://api.example.com", api_key)
response = CALL_API("GET", "/users")
```
## Tool Structure
Tools are BASIC scripts that define:
- **Functions**: Reusable operations
- **Keywords**: Custom commands
- **Integrations**: API connections
- **Workflows**: Multi-step processes
Example tool file (`calculator.bas`):
```basic
FUNCTION CALCULATE(expression)
result = EVAL(expression)
RETURN result
END FUNCTION
FUNCTION PERCENTAGE(value, percent)
RETURN value * percent / 100
END FUNCTION
```
## Tool Discovery
Tools are discovered from:
1. `.gbdialog/tools/` directory in bot package
2. System tools directory (`/opt/tools/`)
3. User tools directory (`~/.gbtools/`)
4. Inline tool definitions
## Return Value
Returns a status object:
- `success`: Boolean indicating if tool loaded
- `tool_name`: Name of the loaded tool
- `functions_added`: List of new functions available
- `error`: Error message if loading failed
## Tool Compilation
When a tool is loaded:
1. Script is parsed and validated
2. Functions are compiled to MCP format
3. OpenAI function format generated
4. Tool registered in session context
5. Functions become callable
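Step 3 can be pictured with a hypothetical sketch. The real compiler is implemented in Rust (`src/basic/keywords/use_tool.rs`), so the function and shapes below are illustrative only, though the output follows OpenAI's documented function-definition schema:

```python
# Illustrative only: turn a parsed BASIC FUNCTION signature into an
# OpenAI-style function definition. The real compiler lives in Rust.
def to_openai_function(name, params, description=""):
    return {
        "name": name.lower(),
        "description": description,
        "parameters": {
            "type": "object",
            # Parameter types default to string in this sketch.
            "properties": {p: {"type": "string"} for p in params},
            "required": list(params),
        },
    }

spec = to_openai_function("CALCULATE", ["expression"], "Evaluate an expression")
print(spec["name"], spec["parameters"]["required"])
```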
## Session Scope
- Tools are session-specific
- Don't affect other conversations
- Automatically unloaded on session end
- Can be manually removed with `CLEAR_TOOLS`
## Error Handling
```basic
TRY
USE_TOOL "advanced-tool.bas"
TALK "Tool loaded successfully"
CATCH "tool_not_found"
TALK "Tool file doesn't exist"
CATCH "compilation_error"
TALK "Tool has syntax errors"
CATCH "permission_denied"
TALK "Not authorized to use this tool"
END TRY
```
## Best Practices
1. **Load Tools Early**: Load at conversation start when possible
2. **Check Dependencies**: Ensure required services are available
3. **Handle Failures**: Always have fallback behavior
4. **Document Tools**: Include usage comments in tool files
5. **Version Tools**: Use version numbers in tool names
6. **Test Thoroughly**: Validate tools before deployment
7. **Limit Tool Count**: Don't load too many tools at once
## Tool Management
### List Active Tools
```basic
tools = GET_ACTIVE_TOOLS()
FOR EACH tool IN tools
TALK "Active tool: " + tool.name
NEXT
```
### Check Tool Status
```basic
IF IS_TOOL_ACTIVE("calculator.bas") THEN
result = CALCULATE("2+2")
ELSE
USE_TOOL "calculator.bas"
END IF
```
### Tool Versioning
```basic
' Load specific version
USE_TOOL "reporter-v2.bas"
' Check version
version = GET_TOOL_VERSION("reporter")
IF version < 2 THEN
CLEAR_TOOLS()
USE_TOOL "reporter-v2.bas"
END IF
```
## Advanced Features
### Tool Chaining
```basic
USE_TOOL "data-fetcher.bas"
data = FETCH_DATA(source)
USE_TOOL "data-processor.bas"
processed = PROCESS_DATA(data)
USE_TOOL "report-generator.bas"
report = GENERATE_REPORT(processed)
```
### Dynamic Tool Creation
```basic
' Create tool from template
CREATE_TOOL_FROM_TEMPLATE("custom-api", api_config)
USE_TOOL "custom-api.bas"
```
### Tool Permissions
```basic
IF HAS_PERMISSION("admin-tools") THEN
USE_TOOL "admin-console.bas"
ELSE
TALK "Admin tools require elevated permissions"
END IF
```
## Performance Considerations
- Tool compilation happens once per session
- Compiled tools are cached
- Large tools may increase memory usage
- Consider lazy loading for complex tools
## Troubleshooting
### Tool Not Loading
- Check file path is correct
- Verify `.bas` extension
- Ensure file has read permissions
- Check for syntax errors in tool
### Functions Not Available
- Confirm tool loaded successfully
- Check function names match exactly
- Verify no naming conflicts
- Review tool compilation logs
### Performance Issues
- Limit number of active tools
- Use lighter tool versions
- Consider tool optimization
- Check for infinite loops in tools
## Related Keywords
- [CLEAR_TOOLS](./keyword-clear-tools.md) - Remove all tools
- [GET](./keyword-get.md) - Often used within tools
- [LLM](./keyword-llm.md) - Tools can enhance LLM capabilities
- [FORMAT](./keyword-format.md) - Format tool outputs
## Implementation
Located in `src/basic/keywords/use_tool.rs`
Integrates with the tool compiler to dynamically load and compile BASIC tool scripts into callable functions.


@ -0,0 +1,84 @@
# CHANGE THEME
## Syntax
```basic
CHANGE THEME theme-name
```
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| theme-name | String | Name of the CSS theme file (without .css extension) |
## Description
Changes the visual theme of the bot interface by loading a different CSS file from the `.gbtheme` folder. The change applies immediately to all connected users.
## Examples
### Basic Theme Switch
```basic
' Switch to dark mode
CHANGE THEME "dark"
' Back to default
CHANGE THEME "default"
' Retro Windows 95 style
CHANGE THEME "3dbevel"
```
### Conditional Theme
```basic
hour = HOUR(NOW())
IF hour >= 18 OR hour < 6 THEN
CHANGE THEME "dark"
ELSE
CHANGE THEME "light"
END IF
```
### User Preference
```basic
TALK "Which theme would you prefer?"
ADD SUGGESTION "default" AS "Default"
ADD SUGGESTION "dark" AS "Dark Mode"
ADD SUGGESTION "3dbevel" AS "Retro Style"
HEAR choice
CHANGE THEME choice
SET BOT MEMORY "user_theme" AS choice
TALK "Theme changed!"
```
### Seasonal Themes
```basic
month = MONTH(NOW())
IF month = 12 THEN
CHANGE THEME "holiday"
ELSE IF month >= 6 AND month <= 8 THEN
CHANGE THEME "summer"
ELSE
CHANGE THEME "default"
END IF
```
## Notes
- Theme files must be in the `.gbtheme` folder
- Don't include the `.css` extension in the theme name
- Changes apply to all connected users immediately
- If theme file doesn't exist, falls back to default
## Related
- [Chapter 04: gbtheme Reference](../chapter-04/README.md)
- [CSS Customization](../chapter-04/css.md)


@ -1,37 +1 @@
# FIND Keyword
**Syntax**
```
FIND "table-name", "filter-expression"
```
**Parameters**
- `"table-name"` — The name of the database table to query.
- `"filter-expression"` — A simple `column=value` expression used to filter rows.
**Description**
`FIND` executes a read-only query against the configured PostgreSQL database. It builds a SQL statement of the form:
```sql
SELECT * FROM table-name WHERE filter-expression LIMIT 10
```
The keyword returns an array of dynamic objects representing the matching rows. The result can be used directly in BASIC scripts or passed to other keywords (e.g., `TALK`, `FORMAT`). Errors during query execution are logged and returned as runtime errors.
**Example**
```basic
SET results = FIND "customers", "country=US"
TALK "Found " + LENGTH(results) + " US customers."
```
The script retrieves up to ten rows from the `customers` table where the `country` column equals `US` and stores them in `results`. The `LENGTH` function (provided by the BASIC runtime) can then be used to count the rows.
**Implementation Notes**
- The filter expression is parsed by `utils::parse_filter` and bound safely to prevent SQL injection.
- Only a limited subset of SQL is supported (simple equality filters). Complex queries should be performed via custom tools or the `GET` keyword.
- The keyword runs synchronously within the script but performs the database call on a separate thread to avoid blocking the engine.
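The parse-and-bind step described in the notes can be sketched as follows. Identifier validation plus parameter binding stand in for `utils::parse_filter`, whose exact behavior is not documented here:

```python
# Sketch of safe query construction for FIND: identifiers are validated,
# the value is passed as a bound parameter, never interpolated into SQL.
import re

IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def build_find_query(table, filter_expr, limit=10):
    column, _, value = filter_expr.partition("=")
    column = column.strip()
    if not (IDENT.match(table) and IDENT.match(column)):
        raise ValueError("invalid identifier")
    sql = f"SELECT * FROM {table} WHERE {column} = $1 LIMIT {limit}"
    return sql, [value.strip()]

sql, params = build_find_query("customers", "country=US")
print(sql, params)
```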


@ -18,7 +18,7 @@ GET "source" INTO variable
- `"source"` — The location of the content to retrieve.
This can be:
- An HTTP/HTTPS URL (e.g., `"https://api.example.com/data"`)
- A relative path to a file stored in the bot's drive bucket or local storage.
- `variable` — The variable that will receive the fetched content.
---
@ -27,7 +27,7 @@ GET "source" INTO variable
`GET` performs a read operation from the specified source.
If the source is a URL, the bot sends an HTTP GET request and retrieves the response body.
If the source is a file path, the bot reads the file content directly from its configured storage (e.g., the drive or local filesystem).
The command automatically handles text extraction from PDF and DOCX files, converting them to plain UTF-8 text.
If the request fails or the file cannot be found, an error message is returned.
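A minimal sketch of both source types (the URL and file path below are illustrative):

```basic
' Fetch the body of an HTTP endpoint (URL is illustrative)
GET "https://api.example.com/data" INTO api_response
TALK api_response

' Read a document from the bot's storage; PDF text is extracted automatically
GET "docs.gbkb/manual.pdf" INTO manual_text
TALK manual_text
```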

View file

@ -256,4 +256,4 @@ END TRY
Located in `src/basic/keywords/remember.rs`
Uses persistent storage (PostgreSQL) with a caching layer for performance.

View file

@ -1,37 +0,0 @@
# REMOVE_TOOL Keyword
**Syntax**
```
REMOVE_TOOL "tool-path.bas"
```
**Parameters**
- `"tool-path.bas"` — Relative path to a `.bas` file that was previously added with `USE_TOOL`.
**Description**
`REMOVE_TOOL` disassociates a previously added tool from the current conversation session. After execution, the tool's keywords are no longer available for invocation in the same dialog.
The keyword performs the following steps:
1. Extracts the tool name from the provided path (removing the `.bas` extension and any leading `.gbdialog/` prefix).
2. Validates that the tool name is not empty.
3. Spawns an asynchronous task that:
- Deletes the corresponding row from `session_tool_associations` for the current session.
- Returns a message indicating whether the tool was removed or was not active.
**Example**
```basic
REMOVE_TOOL "enrollment.bas"
TALK "Enrollment tool removed from this conversation."
```
If the `enrollment.bas` tool was active, it will be removed; otherwise the keyword reports that the tool was not active.
**Implementation Notes**
- The operation runs in a separate thread with its own Tokio runtime to avoid blocking the main engine.
- Errors during database deletion are logged and propagated as runtime errors.

View file

@ -52,7 +52,7 @@ TALK "Support mode activated. Please describe your issue."
- Implemented in Rust under `src/context/mod.rs` and `src/context/langcache.rs`.
- The keyword interacts with the session manager and context cache to update the active context.
- Contexts are stored in memory and optionally persisted in the cache component or a local cache file.
- Changing context may trigger automatic loading of associated tools or memory entries.
---

View file

@ -51,7 +51,7 @@ FIND "recent orders"
- Implemented in Rust under `src/session/mod.rs` and `src/org/mod.rs`.
- The keyword interacts with the session manager to update the active user ID.
- It ensures that all subsequent operations are scoped to the correct user context.
- If cache or database persistence is enabled, the user ID is retained across sessions.
---

View file

@ -1,186 +1 @@
# USE KB
Load a knowledge base collection into the current session for semantic search and context.
## Syntax
```basic
USE KB kb_name
```
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `kb_name` | String | Name of the knowledge base collection to load |
## Description
The `USE KB` keyword loads a knowledge base collection into the current session's context, making its documents searchable via `FIND` and available to the LLM for context-aware responses. Knowledge bases are vector collections stored in Qdrant containing indexed documents, FAQs, or other reference materials.
## Examples
### Load Single Knowledge Base
```basic
USE KB "product-docs"
answer = FIND "installation guide"
TALK answer
```
### Load Multiple Knowledge Bases
```basic
USE KB "company-policies"
USE KB "hr-handbook"
USE KB "benefits-guide"
question = HEAR "What's the vacation policy?"
answer = FIND question
TALK answer
```
### Conditional KB Loading
```basic
department = HEAR "Which department are you from?"
IF department = "engineering" THEN
USE KB "technical-docs"
USE KB "api-reference"
ELSE IF department = "sales" THEN
USE KB "product-catalog"
USE KB "pricing-guide"
ELSE
USE KB "general-info"
END IF
```
### Dynamic KB Selection
```basic
topic = DETECT_TOPIC(user_message)
kb_name = "kb_" + topic
USE KB kb_name
```
## How It Works
1. **Collection Loading**: Connects to Qdrant vector database
2. **Index Verification**: Checks collection exists and is indexed
3. **Session Association**: Links KB to current user session
4. **Context Building**: Makes documents available for search
5. **Memory Management**: Maintains list of active KBs
## Technical Details
When `USE KB` is called:
1. Checks if KB exists in Qdrant
2. Verifies user has access permissions
3. Loads collection metadata
4. Adds to session's active KB list
5. Updates search context
## Limitations
- Maximum 10 KBs per session
- KB name must exist in Qdrant
- Case-sensitive KB names
- Use `CLEAR KB` to unload specific KB
- Session-scoped (not persistent)
## Error Handling
```basic
TRY
USE KB "special-docs"
TALK "Knowledge base loaded successfully"
CATCH "kb_not_found"
TALK "That knowledge base doesn't exist"
USE KB "default-docs" ' Fallback
CATCH "kb_error"
LOG "Failed to load KB"
TALK "Having trouble accessing documentation"
END TRY
```
## Performance
- Lazy loading - documents fetched on demand
- Metadata cached in session
- Vector indices remain in Qdrant
- No document duplication in memory
## Best Practices
1. **Load Early**: Load KBs at conversation start
2. **Relevant KBs Only**: Don't load unnecessary collections
3. **Clear When Done**: Use `CLEAR KB` to free resources
4. **Handle Missing KBs**: Always have fallback logic
5. **Name Conventions**: Use descriptive, consistent names
## KB Management
### Check Available KBs
```basic
available = LIST_KBS()
FOR EACH kb IN available
TALK "Available: " + kb.name + " (" + kb.doc_count + " docs)"
NEXT
```
### Active KBs in Session
```basic
active = GET_ACTIVE_KBS()
TALK "Currently loaded: " + JOIN(active, ", ")
```
## Related Keywords
- **[CLEAR KB](./keyword-clear-kb.md)**: Unload knowledge bases
- **[ADD WEBSITE](./keyword-add-website.md)**: Create KB from website
- **[LLM](./keyword-llm.md)**: Use KB context in responses
- **[FIND](./keyword-find.md)**: Search within loaded KBs
## Advanced Usage
### KB Information
```basic
kb_info = GET_KB_INFO("product-docs")
TALK "KB contains " + kb_info.doc_count + " documents"
TALK "Last updated: " + kb_info.update_date
```
### Language-Specific KBs
```basic
language = GET_USER_LANGUAGE()
USE KB "docs_" + language ' docs_en, docs_es, docs_fr
```
### Filtered Search
```basic
USE KB "all-products"
' Search only recent products
results = FIND_WITH_FILTER "wireless", "year >= 2023"
```
## Vector Database Integration
Knowledge bases are stored as Qdrant collections:
- Each document is embedded as vectors
- Semantic similarity search enabled
- Metadata filtering supported
- Fast retrieval via HNSW index
## Creating Knowledge Bases
KBs are typically created through:
- `.gbkb` packages in bot folders
- `ADD WEBSITE` command for web content
- Direct Qdrant collection creation
- Import from external sources
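As a sketch of the `ADD WEBSITE` route (the URL and the resulting collection name are assumptions for illustration):

```basic
' Crawl and index a site into a KB collection (URL illustrative)
ADD WEBSITE "https://docs.example.com"
' Load the resulting collection for this session (collection name assumed)
USE KB "docs-example"
answer = FIND "getting started"
TALK answer
```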
## Implementation
Located in `src/basic/keywords/use_kb.rs`
The implementation:
- Validates KB existence in Qdrant
- Manages session KB registry
- Handles concurrent KB access
- Provides search context to LLM

View file

@ -1,38 +1,99 @@
# USE TOOL
## Syntax
```basic
USE TOOL tool-name
```
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| tool-name | String | Name of the tool to load (without .bas extension) |
## Description
Loads a tool definition and makes it available to the LLM for the current session. Tools extend the bot's capabilities with specific functions like calculations, API calls, or data processing.
## Examples
### Basic Usage
```basic
' Load weather tool
USE TOOL "weather"
' Now LLM can use weather functions
answer = LLM "What's the weather in Tokyo?"
TALK answer
```
### Multiple Tools
```basic
' Load several tools
USE TOOL "calculator"
USE TOOL "translator"
USE TOOL "date-time"
' LLM has access to all loaded tools
response = LLM "Calculate 15% tip on $45.80 and translate to Spanish"
TALK response
```
### Conditional Loading
```basic
user_type = GET "user_type"
IF user_type = "admin" THEN
USE TOOL "admin-functions"
USE TOOL "database-query"
ELSE
USE TOOL "basic-search"
END IF
```
### With Error Handling
```basic
tool_needed = "advanced-analytics"
IF FILE EXISTS tool_needed + ".bas" THEN
USE TOOL tool_needed
TALK "Analytics tool loaded"
ELSE
TALK "Advanced features not available"
END IF
```
## Tool Definition Format
Tools are defined as BASIC scripts with PARAM declarations:
```basic
' weather.bas
PARAM location AS string LIKE "Tokyo" DESCRIPTION "City name"
DESCRIPTION "Get current weather for a location"
' Tool logic here
temp = GET_TEMPERATURE(location)
conditions = GET_CONDITIONS(location)
result = location + ": " + temp + "°, " + conditions
RETURN result
```
## Notes
- Tools remain active for the entire session
- Use CLEAR TOOLS to remove all loaded tools
- Tool names should be descriptive
- Tools are loaded from the .gbdialog/tools/ directory
- Maximum 10 tools can be active simultaneously
## Related
- [CLEAR TOOLS](./keyword-clear-tools.md)
- [Tool Definition](../chapter-08/tool-definition.md)
- [PARAM Declaration](../chapter-08/param-declaration.md)

View file

@ -1,32 +0,0 @@
# WEBSITE_OF Keyword
**Syntax**
```
WEBSITE_OF "search-term"
```
**Parameters**
- `"search-term"` — The term to search for using a headless browser (e.g., a query string).
**Description**
`WEBSITE_OF` performs a web search for the given term using a headless Chromium instance (via the `headless_chrome` crate). It navigates to DuckDuckGo, enters the search term, and extracts the first non-advertisement result URL. The keyword returns the URL as a string, which can then be used with `ADD_WEBSITE` or other keywords.
**Example**
```basic
SET url = WEBSITE_OF "GeneralBots documentation"
ADD_WEBSITE url
TALK "Added the top result as a knowledge source."
```
The script searches for “GeneralBots documentation”, retrieves the first result URL, adds it as a website KB, and notifies the user.
**Implementation Notes**
- The keyword runs the browser actions in a separate thread with its own Tokio runtime.
- If no results are found, the keyword returns the string `"No results found"`.
- Errors during navigation or extraction are logged and cause a runtime error.
- The search is performed on DuckDuckGo to avoid reliance on proprietary APIs.

View file

@ -2,47 +2,82 @@
This section lists every BASIC keyword implemented in the GeneralBots engine. Each keyword page includes:
* **Syntax** Exact command format
* **Parameters** Expected arguments
* **Description** What the keyword does
* **Example** A short snippet showing usage
The source code for each keyword lives in `src/basic/keywords/`. Only the keywords listed here exist in the system.
## Core Dialog Keywords
- [TALK](./keyword-talk.md) - Send message to user
- [HEAR](./keyword-hear.md) - Get input from user
- [WAIT](./keyword-wait.md) - Pause execution
- [PRINT](./keyword-print.md) - Debug output
## Variable & Memory
- [SET](./keyword-set.md) - Set variable value
- [GET](./keyword-get.md) - Get variable value
- [SET BOT MEMORY](./keyword-set-bot-memory.md) - Persist data
- [GET BOT MEMORY](./keyword-get-bot-memory.md) - Retrieve persisted data
## AI & Context
- [LLM](./keyword-llm.md) - Query language model
- [SET CONTEXT](./keyword-set-context.md) - Add context for LLM
- [SET USER](./keyword-set-user.md) - Set user context
## Knowledge Base
- [USE KB](./keyword-use-kb.md) - Load knowledge base
- [CLEAR KB](./keyword-clear-kb.md) - Unload knowledge base
- [ADD WEBSITE](./keyword-add-website.md) - Index website to KB
- [FIND](./keyword-find.md) - Search in KB
## Tools & Automation
- [USE TOOL](./keyword-use-tool.md) - Load tool definition
- [CLEAR TOOLS](./keyword-clear-tools.md) - Remove all tools
- [CREATE TASK](./keyword-create-task.md) - Create task
- [CREATE SITE](./keyword-create-site.md) - Generate website
- [CREATE DRAFT](./keyword-create-draft.md) - Create email draft
## UI & Interaction
- [ADD SUGGESTION](./keyword-add-suggestion.md) - Add clickable button
- [CLEAR SUGGESTIONS](./keyword-clear-suggestions.md) - Remove buttons
- [CHANGE THEME](./keyword-change-theme.md) - Switch UI theme
## Data Processing
- [FORMAT](./keyword-format.md) - Format strings
- [FIRST](./keyword-first.md) - Get first element
- [LAST](./keyword-last.md) - Get last element
- [SAVE FROM UNSTRUCTURED](./keyword-save-from-unstructured.md) - Extract structured data
## Flow Control
- [FOR EACH ... NEXT](./keyword-for-each.md) - Loop through items
- [EXIT FOR](./keyword-exit-for.md) - Exit loop early
- [ON](./keyword-on.md) - Event handler
- [SET SCHEDULE](./keyword-set-schedule.md) - Schedule execution
## Communication
- [SEND MAIL](./keyword-send-mail.md) - Send email
- [ADD MEMBER](./keyword-add-member.md) - Add group member
## Special Functions
- [BOOK](./keyword-book.md) - Book appointment
- [REMEMBER](./keyword-remember.md) - Store in memory
- [WEATHER](./keyword-weather.md) - Get weather info
## Notes
- Keywords are case-insensitive (TALK = talk = Talk)
- String parameters can use double quotes or single quotes
- Comments start with REM or '
- Line continuation uses underscore (_)
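The conventions above in one short sketch:

```basic
REM Keywords are case-insensitive: TALK, talk, and Talk are equivalent
talk "Hello!"
' Comments can also start with a single quote
TALK "This long message continues " + _
     "on the next line."
```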

View file

@ -1,156 +0,0 @@
# Real BASIC Keyword Examples in GeneralBots
This section provides **authentic examples** of BASIC commands implemented in the GeneralBots system.
All examples are derived directly from the source code under `src/basic/keywords/`.
---
## Website Knowledge Base
### `ADD_WEBSITE`
Registers and indexes a website into the bot's knowledge base.
```basic
ADD_WEBSITE "https://example.com"
```
**Description:**
Crawls the specified website, extracts text content, and stores it in a Qdrant vector database for semantic search.
If the `web_automation` feature is disabled, the command validates the URL format only.
---
## Knowledge Base Management
### `SET_KB`
Sets the active knowledge base for the current user session.
```basic
SET_KB "marketing_data"
```
**Description:**
Links the bot's context to a specific KB collection, enabling focused queries and responses.
### `USE_KB`
Loads a knowledge base collection into the current session.
```basic
USE_KB "customer_feedback"
```
**Description:**
Loads the named collection from Qdrant and makes its documents available for semantic search and context-aware responses.
---
## Communication
### `HEAR_TALK`
Handles conversational input and output between the bot and user.
```basic
HEAR_TALK "Hello, bot!"
```
**Description:**
Triggers the bot's response pipeline, processing user input and generating replies using the active LLM model.
### `PRINT`
Outputs text or variable content to the console or chat.
```basic
PRINT "Task completed successfully."
```
**Description:**
Displays messages or results during script execution.
---
## Context and Tools
### `SET_CONTEXT`
Defines the current operational context for the bot.
```basic
SET_CONTEXT "sales_mode"
```
**Description:**
Switches the bot's internal logic to a specific context, affecting how commands are interpreted.
### `USE_TOOL`
Registers a new tool for automation.
```basic
USE_TOOL "email_sender"
```
**Description:**
Adds a tool to the bot's environment, enabling extended functionality such as sending emails or processing files.
### `REMOVE_TOOL`
Removes a previously registered tool.
```basic
REMOVE_TOOL "email_sender"
```
**Description:**
Unregisters a tool from the bot's active environment.
---
## Scheduling and User Management
### `SET_SCHEDULE`
Defines a scheduled task for automation.
```basic
SET_SCHEDULE "daily_report"
```
**Description:**
Creates a recurring automation trigger based on time or event conditions.
### `SET_USER`
Sets the active user context.
```basic
SET_USER "john_doe"
```
**Description:**
Associates the current session with a specific user identity.
---
## Utility Commands
### `WAIT`
Pauses execution for a specified duration.
```basic
WAIT 5
```
**Description:**
Delays script execution for 5 seconds.
### `FIND`
Searches for data or keywords within the current context.
```basic
FIND "project_status"
```
**Description:**
Queries the bot's memory or KB for matching entries.
---
## Summary
All examples above are **real commands** implemented in the GeneralBots source code.
They demonstrate how BASIC syntax integrates with Rust-based logic to perform automation, data management, and conversational tasks.

View file

@ -1,21 +0,0 @@
# auth.bas
```basic
REM Simple authentication flow
SET attempts = 0
LABEL auth_loop
HEAR password
IF password = "secret123" THEN
TALK "Authentication successful."
ELSE
SET attempts = attempts + 1
IF attempts >= 3 THEN
TALK "Too many attempts. Goodbye."
EXIT
ENDIF
TALK "Incorrect password. Try again."
GOTO auth_loop
ENDIF
```
This template demonstrates a basic password check with a limited number of attempts. It uses the `HEAR`, `TALK`, `SET`, `IF`, `ELSE`, `GOTO`, and `EXIT` keywords to manage the dialog flow.

View file

@ -56,7 +56,7 @@ Common schedule patterns:
text = GET "announcements.gbkb/news/news.pdf"
```
The `GET` keyword retrieves files from the bot's knowledge base stored in the drive. The path is relative to the bot's bucket.
### Generating Summaries with LLM

View file

@ -23,8 +23,8 @@ BotServer
│ └── Event Bus
├── Storage Layer
│ ├── PostgreSQL
│ ├── Drive (S3-compatible)
│ ├── Cache (Valkey)
│ └── Qdrant Vector DB
└── Services Layer
├── Authentication
@ -287,14 +287,14 @@ Web scraping and automation:
- Conversation history
- System metadata
#### Object Storage (Drive)
- File uploads
- Document storage
- Media files
- Backups
- Logs
#### Cache Layer
- Session cache
- Frequently accessed data
- Rate limiting
@ -336,8 +336,8 @@ Web scraping and automation:
### Container Structure
- Main application container
- PostgreSQL database
- Drive storage (S3-compatible)
- Cache (Valkey)
- Qdrant vector DB
- Nginx reverse proxy
@ -464,9 +464,3 @@ Web scraping and automation:
- Elastic scaling
- Global CDN
### Feature Roadmap
- GraphQL API
- Real-time collaboration
- Advanced analytics
- Machine learning pipeline
- Blockchain integration

View file

@ -457,32 +457,42 @@ Remove all build artifacts:
cargo clean
```
## LXC Build
Build inside LXC container:
```bash
# Create build container
lxc-create -n botserver-build -t download -- -d ubuntu -r jammy -a amd64
# Configure container with build resources
cat >> /var/lib/lxc/botserver-build/config << EOF
lxc.cgroup2.memory.max = 4G
lxc.cgroup2.cpu.max = 400000 100000
EOF
# Start container
lxc-start -n botserver-build
# Install build dependencies
lxc-attach -n botserver-build -- bash -c "
apt-get update
apt-get install -y build-essential pkg-config libssl-dev libpq-dev cmake curl git
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source \$HOME/.cargo/env
"
# Build BotServer
lxc-attach -n botserver-build -- bash -c "
git clone https://github.com/GeneralBots/BotServer /build
cd /build
source \$HOME/.cargo/env
cargo build --release --no-default-features
"
# Copy binary from container
lxc-attach -n botserver-build -- cat /build/target/release/botserver > /usr/local/bin/botserver
chmod +x /usr/local/bin/botserver
```
## Installation

View file

@ -11,7 +11,7 @@ LXC is a lightweight container technology that runs on Linux:
- **Isolation** - Separate process trees, networking, filesystems
- **Resource control** - CPU, memory, and I/O limits
BotServer uses LXC to run PostgreSQL, the drive (S3-compatible storage), and the cache (Valkey) in isolated containers.
## Automatic Container Setup
@ -40,8 +40,8 @@ Each component runs in a dedicated container:
```
{tenant}-tables → PostgreSQL database
{tenant}-drive → Drive (S3-compatible object storage)
{tenant}-cache → Cache (Valkey)
{tenant}-llm → LLM server (optional)
{tenant}-email → Stalwart mail (optional)
```
@ -74,9 +74,9 @@ Container ports are mapped to localhost:
```
Container: 5432 → Host: 5432 (PostgreSQL)
Container: 9000 → Host: 9000 (Drive API)
Container: 9001 → Host: 9001 (Drive Console)
Container: 6379 → Host: 6379 (Cache)
```
Access services on localhost as if they were running natively!
@ -106,10 +106,10 @@ Output:
# PostgreSQL container
lxc exec default-tables -- psql -U gbuser botserver
# Drive container
lxc exec default-drive -- mc admin info local
# Cache container
lxc exec default-cache -- valkey-cli ping
```

View file

@ -48,7 +48,7 @@ The following modules are exported in `src/lib.rs` and comprise the core functio
The following directories exist in `src/` but are either internal implementations or not fully integrated:
- **`api/`** - Contains `api/drive` subdirectory with drive-related API code
- **`drive/`** - Drive (S3-compatible) integration and vector database (`vectordb.rs`)
- **`ui/`** - UI-related modules (`drive.rs`, `stream.rs`, `sync.rs`, `local-sync.rs`)
- **`ui_tree/`** - UI tree structure (used in main.rs but not exported in lib.rs)
- **`prompt_manager/`** - Prompt library storage (not a Rust module, contains `prompts.csv`)
@ -61,9 +61,9 @@ All dependencies are managed through a single `Cargo.toml` at the project root.
- **Web Framework**: `axum`, `tower`, `tower-http`
- **Async Runtime**: `tokio`
- **Database**: `diesel` (PostgreSQL), `redis` (cache client)
- **AI/ML**: `qdrant-client` (vector DB, optional feature)
- **Storage**: `aws-sdk-s3` (S3-compatible drive)
- **Scripting**: `rhai` (BASIC-like language runtime)
- **Security**: `argon2` (password hashing), `aes-gcm` (encryption)
- **Desktop**: `tauri` (optional desktop feature)

View file

@ -199,7 +199,7 @@ Keywords have access to:
1. **AppState**: Application-wide state including:
- Database connection pool (`state.conn`)
   - Drive client for S3-compatible storage (`state.drive`)
- Cache client (`state.cache`)
- Configuration (`state.config`)
- LLM provider (`state.llm_provider`)

View file

@ -125,11 +125,11 @@ BotServer currently uses these major dependencies:
- `diesel` - ORM for PostgreSQL
- `diesel_migrations` - Database migration management
- `r2d2` - Connection pooling
- `redis` - Cache client (Valkey/Redis-compatible)
### Storage
- `aws-config` - AWS SDK configuration
- `aws-sdk-s3` - S3-compatible storage (drive component)
- `qdrant-client` - Vector database (optional)
### Security

View file

@ -142,7 +142,7 @@ The `nvidia` module provides GPU acceleration support:
The `bootstrap` module handles system initialization:
- **Component Installation**: Install required components (PostgreSQL, cache, drive)
- **Database Setup**: Create schemas and apply migrations
- **Credential Generation**: Generate secure passwords for services
- **Environment Configuration**: Write `.env` files
@ -152,7 +152,7 @@ Key responsibilities:
- Detect installation mode (local vs container)
- Install and start system components
- Initialize database with migrations
- Configure drive (S3-compatible) storage
- Create default bots from templates
### Package Manager (`package_manager`)
@ -166,8 +166,8 @@ The `package_manager` module manages component installation:
Components managed:
- `tables` - PostgreSQL database
- `cache` - Valkey cache
- `drive` - S3-compatible object storage
- `llm` - Local LLM server
- `email` - Email server
- `proxy` - Reverse proxy
@ -259,7 +259,7 @@ The `file` module processes various file types:
- **PDF Extraction**: Extract text from PDFs
- **Document Parsing**: Parse various document formats
- **File Upload**: Handle multipart file uploads
- **Storage Integration**: Save files to drive storage
### Meeting Integration (`meet`)
@ -276,7 +276,7 @@ The `meet` module integrates with LiveKit for video conferencing:
The `drive` module provides S3-compatible object storage:
- **Drive Integration**: AWS SDK S3 client
- **Bucket Management**: Create and manage buckets
- **Object Operations**: Upload, download, delete objects
- **Vector Database**: Qdrant integration for semantic search

View file

@ -1,517 +1 @@
# 🏢 SMB Deployment Guide - Pragmatic BotServer Implementation
## Overview
This guide provides a **practical, cost-effective deployment** of BotServer for Small and Medium Businesses (SMBs), focusing on real-world use cases and pragmatic solutions without enterprise complexity.
## 📊 SMB Profile
**Target Company**: 50-500 employees
**Budget**: $500-5000/month for infrastructure
**IT Team**: 1-5 people
**Primary Needs**: Customer support, internal automation, knowledge management
## 🎯 Quick Start for SMBs
### 1. Single Server Deployment
```bash
# Simple all-in-one deployment for SMBs
# Runs on a single $40/month VPS (4 CPU, 8GB RAM)
# Clone and setup
git clone https://github.com/GeneralBots/BotServer
cd BotServer
# Configure for SMB (minimal features)
cat > .env << EOF
# Core Configuration
BOTSERVER_MODE=production
BOTSERVER_PORT=3000
DATABASE_URL=postgres://botserver:password@localhost/botserver
# Simple Authentication (no Zitadel complexity)
JWT_SECRET=$(openssl rand -hex 32)
ADMIN_EMAIL=admin@company.com
ADMIN_PASSWORD=ChangeMeNow123!
# OpenAI for simplicity (no self-hosted LLMs)
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-3.5-turbo # Cost-effective
# Basic Storage (local, no S3 needed initially)
STORAGE_TYPE=local
STORAGE_PATH=/var/botserver/storage
# Email Integration (existing company email)
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=bot@company.com
SMTP_PASSWORD=app-specific-password
EOF
# Build and run
cargo build --release --no-default-features --features email
./target/release/botserver
```
### 2. Docker Deployment (Recommended)
```yaml
# docker-compose.yml for SMB deployment
version: '3.8'
services:
botserver:
image: pragmatismo/botserver:latest
ports:
- "80:3000"
- "443:3000"
environment:
- DATABASE_URL=postgres://postgres:password@db:5432/botserver
- REDIS_URL=redis://redis:6379
volumes:
- ./data:/var/botserver/data
- ./certs:/var/botserver/certs
depends_on:
- db
- redis
restart: always
db:
image: postgres:15-alpine
environment:
POSTGRES_PASSWORD: password
POSTGRES_DB: botserver
volumes:
- postgres_data:/var/lib/postgresql/data
restart: always
redis:
image: redis:7-alpine
volumes:
- redis_data:/data
restart: always
# Optional: Simple backup solution
backup:
image: postgres:15-alpine
volumes:
- ./backups:/backups
command: |
sh -c 'while true; do
PGPASSWORD=password pg_dump -h db -U postgres botserver > /backups/backup_$$(date +%Y%m%d_%H%M%S).sql
find /backups -name "*.sql" -mtime +7 -delete
sleep 86400
done'
depends_on:
- db
volumes:
postgres_data:
redis_data:
```
## 💼 Common SMB Use Cases
### 1. Customer Support Bot
```basic
// work/support/support.gbdialog
START_DIALOG support_flow

// Greeting and triage
HEAR customer_message
SET category = CLASSIFY(customer_message, ["billing", "technical", "general"])

IF category == "billing"
    USE_KB "billing_faqs"
    TALK "I'll help you with your billing question."

    // Check if an answer exists in the KB
    SET answer = FIND_IN_KB(customer_message)
    IF answer
        TALK answer
        TALK "Did this answer your question?"
        HEAR confirmation
        IF confirmation contains "no"
            CREATE_TASK "Review billing question: ${customer_message}"
            TALK "I've created a ticket for our billing team. Ticket #${task_id}"
        END
    ELSE
        SEND_MAIL to: "billing@company.com", subject: "Customer inquiry", body: customer_message
        TALK "I've forwarded your question to our billing team."
    END
ELSE IF category == "technical"
    USE_TOOL "ticket_system"
    SET ticket = CREATE_TICKET(
        title: customer_message,
        priority: "medium",
        category: "technical_support"
    )
    TALK "I've created ticket #${ticket.id}. Our team will respond within 4 hours."
ELSE
    USE_KB "general_faqs"
    TALK "Let me find that information for you..."
    // Continue with the general flow
END

END_DIALOG
```
### 2. HR Assistant Bot
```basic
// work/hr/hr.gbdialog
START_DIALOG hr_assistant

// Employee self-service
HEAR request
SET topic = EXTRACT_TOPIC(request)

SWITCH topic
    CASE "time_off":
        USE_KB "pto_policy"
        TALK "Here's our PTO policy information..."
        USE_TOOL "calendar_check"
        SET available_days = CHECK_PTO_BALANCE(user.email)
        TALK "You have ${available_days} days available."
        TALK "Would you like to submit a time-off request?"
        HEAR response
        IF response contains "yes"
            TALK "Please provide the dates:"
            HEAR dates
            CREATE_TASK "PTO Request from ${user.name}: ${dates}"
            SEND_MAIL to: "hr@company.com", subject: "PTO Request", body: "..."
            TALK "Your request has been submitted for approval."
        END
    CASE "benefits":
        USE_KB "benefits_guide"
        TALK "I can help you with benefits information..."
    CASE "payroll":
        TALK "For payroll inquiries, please contact HR directly at hr@company.com"
    DEFAULT:
        TALK "I can help with time-off, benefits, and general HR questions."
END

END_DIALOG
```
### 3. Sales Assistant Bot
```basic
// work/sales/sales.gbdialog
START_DIALOG sales_assistant

// Lead qualification
SET lead_data = {}
TALK "Thanks for your interest! May I have your name?"
HEAR name
SET lead_data.name = name
TALK "What's your company name?"
HEAR company
SET lead_data.company = company
TALK "What's your primary need?"
HEAR need
SET lead_data.need = need
TALK "What's your budget range?"
HEAR budget
SET lead_data.budget = budget

// Score the lead
SET score = CALCULATE_LEAD_SCORE(lead_data)

IF score > 80
    // Hot lead - immediate notification
    SEND_MAIL to: "sales@company.com", priority: "high", subject: "HOT LEAD: ${company}"
    USE_TOOL "calendar_booking"
    TALK "Based on your needs, I'd like to schedule a call with our sales team."
    SET slots = GET_AVAILABLE_SLOTS("sales_team", next_2_days)
    TALK "Available times: ${slots}"
    HEAR selection
    BOOK_MEETING(selection, lead_data)
ELSE IF score > 50
    // Warm lead - nurture
    USE_KB "product_info"
    TALK "Let me share some relevant information about our solutions..."
    ADD_TO_CRM(lead_data, status: "nurturing")
ELSE
    // Cold lead - basic info
    TALK "Thanks for your interest. I'll send you our product overview."
    SEND_MAIL to: lead_data.email, template: "product_overview"
END

END_DIALOG
```
## 🔧 SMB Configuration Examples
### Simple Authentication (No Zitadel)
```rust
// src/auth/simple_auth.rs - Pragmatic auth for SMBs
use std::collections::HashMap;

use argon2::{
    password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
    Argon2,
};
use chrono::{DateTime, Duration, Utc};
use jsonwebtoken::{encode, EncodingKey, Header};
use serde::Serialize;

#[derive(Serialize)]
struct Claims {
    sub: String,
    exp: i64,
    role: String,
}

pub struct Token {
    pub access_token: String,
}

pub struct User {
    pub email: String,
    pub password_hash: String,
    pub role: String,
    pub created_at: DateTime<Utc>,
}

pub struct SimpleAuth {
    users: HashMap<String, User>,
    jwt_secret: String,
}

impl SimpleAuth {
    pub async fn login(&self, email: &str, password: &str) -> anyhow::Result<Token> {
        // Simple email/password authentication
        let user = self
            .users
            .get(email)
            .ok_or_else(|| anyhow::anyhow!("User not found"))?;

        // Verify the password with Argon2
        let parsed_hash = PasswordHash::new(&user.password_hash).map_err(|e| anyhow::anyhow!(e))?;
        Argon2::default()
            .verify_password(password.as_bytes(), &parsed_hash)
            .map_err(|e| anyhow::anyhow!(e))?;

        // Issue a simple 24-hour JWT
        let claims = Claims {
            sub: email.to_string(),
            exp: (Utc::now() + Duration::hours(24)).timestamp(),
            role: user.role.clone(),
        };
        let token = encode(
            &Header::default(),
            &claims,
            &EncodingKey::from_secret(self.jwt_secret.as_bytes()),
        )?;
        Ok(Token { access_token: token })
    }

    pub async fn create_user(&mut self, email: &str, password: &str, role: &str) -> anyhow::Result<()> {
        // Simple user creation for SMBs
        let salt = SaltString::generate(&mut OsRng);
        let hash = Argon2::default()
            .hash_password(password.as_bytes(), &salt)
            .map_err(|e| anyhow::anyhow!(e))?
            .to_string();
        self.users.insert(email.to_string(), User {
            email: email.to_string(),
            password_hash: hash,
            role: role.to_string(),
            created_at: Utc::now(),
        });
        Ok(())
    }
}
```
### Local File Storage (No S3)
```rust
// src/storage/local_storage.rs - Simple file storage for SMBs
use std::path::PathBuf;

use anyhow::Result;
use tokio::fs;

pub struct LocalStorage {
    base_path: PathBuf,
}

impl LocalStorage {
    pub async fn store(&self, key: &str, data: &[u8]) -> Result<String> {
        let path = self.base_path.join(key);
        // Create the parent directory if needed
        if let Some(parent) = path.parent() {
            fs::create_dir_all(parent).await?;
        }
        // Write the file
        fs::write(&path, data).await?;
        // Return a local URL
        Ok(format!("/files/{}", key))
    }

    pub async fn retrieve(&self, key: &str) -> Result<Vec<u8>> {
        let path = self.base_path.join(key);
        Ok(fs::read(path).await?)
    }
}
```
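One caveat worth noting: `base_path.join(key)` with an absolute or `..`-containing key can escape the storage root. A sketch of the validation (shown in Python for brevity; a Rust version would inspect `Path::components`, and the helper name here is illustrative, not part of BotServer):

```python
from pathlib import PurePosixPath

def is_safe_key(key: str) -> bool:
    """Reject keys that could escape the storage root (path traversal)."""
    p = PurePosixPath(key)
    return key != "" and not p.is_absolute() and ".." not in p.parts
```

Call this before joining user-supplied keys onto the base path, and reject anything that fails the check.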
## 📊 Cost Breakdown for SMBs
### Monthly Costs (USD)
| Component | Basic | Standard | Premium |
|-----------|-------|----------|---------|
| **VPS/Cloud** | $20 | $40 | $100 |
| **Database** | Included | $20 | $50 |
| **LLM API (optional)** | $50 | $200 | $500 |
| **Email Service** | Free* | $10 | $30 |
| **Backup Storage** | $5 | $10 | $20 |
| **SSL Certificate** | Free** | Free** | $20 |
| **Domain** | $1 | $1 | $5 |
| **Total** | **$76** | **$281** | **$725** |
\* Using company Gmail/Outlook
\*\* Using Let's Encrypt
### Recommended Tiers
- **Basic** (< 50 employees): Single bot, 1000 conversations/month
- **Standard** (50-200 employees): Multiple bots, 10k conversations/month
- **Premium** (200-500 employees): Unlimited bots, 50k conversations/month
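Dividing each tier's monthly total by its included conversation volume gives a quick sanity check against the cost-per-conversation KPI used later in this guide (figures from the tables above):

```python
# Monthly total cost (USD) and included conversations per tier, from the cost table.
tiers = {
    "basic": (76, 1_000),
    "standard": (281, 10_000),
    "premium": (725, 50_000),
}

cost_per_conversation = {
    name: total / conversations for name, (total, conversations) in tiers.items()
}
# basic = $0.076, standard = $0.0281, premium = $0.0145 per conversation
```

All three tiers stay under the $0.10-per-conversation target, with healthy headroom on the larger plans.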
## 🚀 Migration Path
### Phase 1: Basic Bot (Month 1)
Start with a single customer support bot:

- Deploy on a $20/month VPS
- Use SQLite initially
- Basic email integration
- Manual KB updates
### Phase 2: Add Features (Month 2-3)
Expand capabilities:

- Migrate to PostgreSQL
- Add Redis for caching
- Implement a ticket system
- Add more KB folders
### Phase 3: Scale (Month 4-6)
Prepare for growth:

- Move to a $40/month VPS
- Add a backup system
- Implement monitoring
- Add HR/Sales bots
### Phase 4: Optimize (Month 6+)
Improve efficiency:

- Add vector search
- Implement caching
- Optimize prompts
- Add analytics
## 🛠️ Maintenance Checklist
### Daily
- [ ] Check bot availability
- [ ] Review error logs
- [ ] Monitor API usage
### Weekly
- [ ] Update knowledge bases
- [ ] Review conversation logs
- [ ] Check disk space
- [ ] Test backup restoration
### Monthly
- [ ] Update dependencies
- [ ] Review costs
- [ ] Analyze bot performance
- [ ] User satisfaction survey
## 📈 KPIs for SMBs
### Customer Support
- **Response Time**: < 5 seconds
- **Resolution Rate**: > 70%
- **Escalation Rate**: < 30%
- **Customer Satisfaction**: > 4/5
### Cost Savings
- **Tickets Automated**: > 60%
- **Time Saved**: 20 hours/week
- **Cost per Conversation**: < $0.10
- **ROI**: > 300%
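The ROI target follows from the time-saved target. At an assumed fully loaded labor rate of $25/hour (an illustrative figure, not from this guide), 20 hours/week saved clears 300% ROI even on the Standard tier:

```python
hours_saved_per_week = 20
hourly_rate = 25          # assumed fully loaded labor cost, USD
monthly_cost = 281        # Standard tier total from the cost table

monthly_savings = hours_saved_per_week * 4 * hourly_rate  # 4 weeks/month
roi_percent = (monthly_savings - monthly_cost) / monthly_cost * 100
# roughly 612% at these assumptions
```

Swap in your own labor rate and tier cost; the target holds as long as monthly savings exceed four times the monthly cost.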
## 🔍 Monitoring Setup
### Simple Monitoring Stack
```yaml
# monitoring/docker-compose.yml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_INSTALL_PLUGINS=redis-datasource
```
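The compose file above mounts a `prometheus.yml` that is not shown. A minimal sketch, assuming BotServer exposes Prometheus metrics on its HTTP port (the `/metrics` path is an assumption, not confirmed by this guide):

```yaml
# monitoring/prometheus.yml (sketch)
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: botserver
    metrics_path: /metrics   # assumed endpoint
    static_configs:
      - targets: ["botserver:3000"]
```

Point Grafana at the Prometheus container (`http://prometheus:9090`) as its data source.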
### Health Check Endpoint
```rust
// src/api/health.rs
use axum::{response::IntoResponse, Json};
use chrono::Utc;
use serde_json::json;

pub async fn health_check() -> impl IntoResponse {
    // get_uptime(), get_memory_usage(), etc. are helpers defined elsewhere in this module
    let status = json!({
        "status": "healthy",
        "timestamp": Utc::now(),
        "version": env!("CARGO_PKG_VERSION"),
        "uptime": get_uptime(),
        "memory_usage": get_memory_usage(),
        "active_sessions": get_active_sessions(),
        "database": check_database_connection(),
        "redis": check_redis_connection(),
    });
    Json(status)
}
```
## 📞 Support Resources
### Community Support
- Discord: https://discord.gg/generalbots
- Forum: https://forum.generalbots.com
- Docs: https://docs.generalbots.com
### Professional Support
- Email: support@pragmatismo.com.br
- Phone: +55 11 1234-5678
- Response Time: 24 hours (business days)
### Training Options
- Online Course: $99 (self-paced)
- Workshop: $499 (2 days, virtual)
- Onsite Training: $2999 (3 days)
## 🎓 Next Steps
1. **Start Small**: Deploy basic customer support bot
2. **Learn by Doing**: Experiment with dialogs and KBs
3. **Iterate Quickly**: Update based on user feedback
4. **Scale Gradually**: Add features as needed
5. **Join Community**: Share experiences and get help
## 📝 License Considerations
- **AGPL-3.0**: Open source, must share modifications
- **Commercial License**: Available for proprietary use
- **SMB Discount**: 50% off for companies < 100 employees
Contact sales@pragmatismo.com.br for commercial licensing.
# SMB Deployment Guide
@@ -8,7 +8,6 @@ language,en
theme,default.gbtheme
knowledge_base,default.gbkb
max_context_tokens,2048
```
### Key Columns
@@ -17,10 +16,7 @@ answer_mode,LLM_ONLY
- **theme** UI theme package (`.gbtheme`).
- **knowledge_base** Default knowledgebase package (`.gbkb`).
- **max_context_tokens** Limit for the amount of context sent to the LLM.
### Editing the Configuration
The file is a simple CSV; each line is `key,value`. Comments start with `#`. After editing, restart the server to apply changes.
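For reference, the `key,value` format with `#` comments can be parsed in a few lines. This Python sketch mirrors the rules just described; it is an illustration, not BotServer's actual loader:

```python
def parse_config(text: str) -> dict:
    """Parse key,value lines; skip blank lines and # comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(",")  # split on the first comma only
        config[key.strip()] = value.strip()
    return config

sample = """# bot settings
language,en
max_context_tokens,2048
"""
settings = parse_config(sample)
```

Splitting on the first comma only means values may themselves contain commas, which matches how multi-value keys appear elsewhere in this reference.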
@@ -1,433 +1 @@
# Answer Modes
Configure how the bot formulates and delivers responses to users across different scenarios and contexts.
## Overview
Answer modes control the style, length, format, and approach of bot responses. Each mode is optimized for specific use cases and can be switched dynamically based on context or user preferences.
## Available Answer Modes
### Default Mode
Standard conversational responses with balanced detail:
```csv
answerMode,default
```
Characteristics:
- Natural, conversational tone
- Moderate response length
- Includes relevant context
- Friendly and approachable
- Suitable for general interactions
### Simple Mode
Concise, straightforward answers:
```csv
answerMode,simple
```
Characteristics:
- Brief, to-the-point responses
- Minimal elaboration
- Direct answers only
- No unnecessary context
- Ideal for quick queries
Example responses:
- Default: "I'd be happy to help you reset your password. First, click on the 'Forgot Password' link on the login page, then enter your email address. You'll receive a reset link within a few minutes."
- Simple: "Click 'Forgot Password' on login page. Enter email. Check inbox for reset link."
### Detailed Mode
Comprehensive, thorough explanations:
```csv
answerMode,detailed
```
Characteristics:
- Extended explanations
- Multiple examples
- Step-by-step breakdowns
- Additional context and background
- Best for complex topics
### Technical Mode
Precise, technical language for professional users:
```csv
answerMode,technical
```
Characteristics:
- Technical terminology
- Code examples when relevant
- API references
- Detailed specifications
- Assumes technical knowledge
### Educational Mode
Teaching-focused responses with explanations:
```csv
answerMode,educational
```
Characteristics:
- Explains concepts thoroughly
- Uses analogies and examples
- Breaks down complex ideas
- Includes "why" not just "how"
- Patient and encouraging tone
### Professional Mode
Formal business communication:
```csv
answerMode,professional
```
Characteristics:
- Formal language
- Business appropriate
- Structured responses
- No casual expressions
- Suitable for corporate settings
### Friendly Mode
Warm, personable interactions:
```csv
answerMode,friendly
```
Characteristics:
- Casual, warm tone
- Uses emojis appropriately
- Personal touches
- Encouraging language
- Builds rapport
## Mode Selection
### Static Configuration
Set a default mode in config.csv:
```csv
answerMode,professional
```
### Dynamic Switching
Change modes during conversation:
```basic
IF user_type = "developer" THEN
SET_ANSWER_MODE "technical"
ELSE IF user_type = "student" THEN
SET_ANSWER_MODE "educational"
ELSE
SET_ANSWER_MODE "default"
END IF
```
### Context-Based Selection
Automatically adjust based on query:
```basic
query_type = ANALYZE_QUERY(user_input)
IF query_type = "quick_fact" THEN
SET_ANSWER_MODE "simple"
ELSE IF query_type = "how_to" THEN
SET_ANSWER_MODE "detailed"
END IF
```
## Mode Customization
### Custom Answer Modes
Define custom modes for specific needs:
```csv
customAnswerModes,"support,sales,onboarding"
answerMode.support.style,"empathetic"
answerMode.support.length,"moderate"
answerMode.support.examples,"true"
```
### Mode Parameters
Fine-tune each mode:
| Parameter | Description | Values |
|-----------|-------------|---------|
| `style` | Communication style | formal, casual, technical |
| `length` | Response length | brief, moderate, extensive |
| `examples` | Include examples | true, false |
| `formatting` | Text formatting | plain, markdown, html |
| `confidence` | Show confidence level | true, false |
| `sources` | Cite sources | true, false |
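Putting the table together, a hypothetical custom `sales` mode tuned with these parameters might look like the following (key names follow the `answerMode.<mode>.<parameter>` pattern shown above; the specific values are illustrative):

```csv
customAnswerModes,"sales"
answerMode.sales.style,casual
answerMode.sales.length,moderate
answerMode.sales.examples,true
answerMode.sales.formatting,markdown
answerMode.sales.sources,false
```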
## Response Formatting
### Plain Text Mode
Simple text without formatting:
```csv
answerMode,simple
responseFormat,plain
```
Output: "Your order has been confirmed. Order number: 12345"
### Markdown Mode
Rich formatting with markdown:
```csv
answerMode,detailed
responseFormat,markdown
```
Output:
```markdown
## Order Confirmation
Your order has been **successfully confirmed**.
**Order Details:**
- Order Number: `12345`
- Status: ✅ Confirmed
- Delivery: 2-3 business days
```
### Structured Mode
JSON or structured data:
```csv
answerMode,technical
responseFormat,json
```
Output:
```json
{
"status": "confirmed",
"order_id": "12345",
"delivery_estimate": "2-3 days"
}
```
## Language Adaptation
### Complexity Levels
Adjust language complexity:
```csv
answerMode,default
languageLevel,intermediate
```
Levels:
- `basic`: Simple vocabulary, short sentences
- `intermediate`: Standard language
- `advanced`: Complex vocabulary, nuanced expression
- `expert`: Domain-specific terminology
### Tone Variations
| Mode | Tone | Example |
|------|------|---------|
| Professional | Formal | "I shall process your request immediately." |
| Friendly | Warm | "Sure thing! I'll get that done for you right away! 😊" |
| Technical | Precise | "Executing request. ETA: 2.3 seconds." |
| Educational | Patient | "Let me explain how this works step by step..." |
## Use Case Examples
### Customer Support
```csv
answerMode,support
empathy,high
solutionFocused,true
escalationAware,true
```
Responses include:
- Acknowledgment of issue
- Empathetic language
- Clear solutions
- Escalation options
### Sales Assistant
```csv
answerMode,sales
enthusiasm,high
benefitsFocused,true
objectionHandling,true
```
Responses include:
- Product benefits
- Value propositions
- Addressing concerns
- Call-to-action
### Technical Documentation
```csv
answerMode,technical
codeExamples,true
apiReferences,true
errorCodes,true
```
Responses include:
- Code snippets
- API endpoints
- Error handling
- Implementation details
### Educational Tutor
```csv
answerMode,educational
scaffolding,true
examples,multiple
encouragement,true
```
Responses include:
- Step-by-step learning
- Multiple examples
- Concept reinforcement
- Positive feedback
## Performance Considerations
### Response Time vs Quality
| Mode | Response Time | Quality | Best For |
|------|--------------|---------|----------|
| Simple | Fastest | Basic | Quick queries |
| Default | Fast | Good | General use |
| Detailed | Moderate | High | Complex topics |
| Technical | Slower | Precise | Expert users |
### Token Usage
Approximate token consumption:
- Simple: 50-100 tokens
- Default: 100-200 tokens
- Detailed: 200-500 tokens
- Educational: 300-600 tokens
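These ranges translate directly into budget. A rough monthly estimate, assuming an illustrative $0.002 per 1K tokens (pricing varies by provider and is not specified in this reference) and the midpoints of the ranges above:

```python
# Midpoints of the per-response token ranges listed above.
tokens_per_response = {"simple": 75, "default": 150, "detailed": 350, "educational": 450}
price_per_1k_tokens = 0.002  # assumed, USD

def monthly_cost(mode: str, conversations: int, turns_per_conversation: int = 4) -> float:
    tokens = tokens_per_response[mode] * turns_per_conversation * conversations
    return tokens / 1000 * price_per_1k_tokens

# e.g. 10,000 conversations/month in default mode, 4 turns each -> $12/month
```

Switching a high-volume bot from detailed to simple mode cuts token spend by a factor of four or more at these midpoints.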
## Mode Combinations
Combine modes for specific scenarios:
```csv
answerMode,professional+detailed
```
Common combinations:
- `friendly+simple`: Casual quick help
- `professional+detailed`: Business documentation
- `technical+educational`: Developer training
- `support+empathetic`: Crisis handling
## Adaptive Modes
### User Preference Learning
System learns user preferences:
```basic
IF user_history.preferred_length = "short" THEN
SET_ANSWER_MODE "simple"
ELSE IF user_history.technical_level = "high" THEN
SET_ANSWER_MODE "technical"
END IF
```
### Feedback-Based Adjustment
Adjust based on user feedback:
```basic
IF user_feedback = "too_long" THEN
SWITCH_TO_SHORTER_MODE()
ELSE IF user_feedback = "need_more_detail" THEN
SWITCH_TO_DETAILED_MODE()
END IF
```
## Testing Answer Modes
### A/B Testing
Test different modes:
```csv
abTestEnabled,true
abTestModes,"simple,detailed"
abTestSplit,50
```
### Quality Metrics
Monitor mode effectiveness:
- User satisfaction scores
- Completion rates
- Follow-up questions
- Time to resolution
- Engagement metrics
## Best Practices
1. **Match user expectations**: Technical users want precision
2. **Consider context**: Urgent issues need simple mode
3. **Be consistent**: Don't switch modes mid-conversation without reason
4. **Test thoroughly**: Each mode should be tested with real queries
5. **Monitor feedback**: Adjust modes based on user response
6. **Document choices**: Explain why specific modes are used
7. **Provide options**: Let users choose their preferred mode
## Troubleshooting
### Response Too Long
- Switch to simple mode
- Reduce max tokens
- Enable summarization
### Response Too Technical
- Use educational mode
- Add examples
- Simplify language level
### Lack of Detail
- Switch to detailed mode
- Enable examples
- Add context inclusion
### Inconsistent Tone
- Lock mode for session
- Define clear mode parameters
- Test mode transitions
@@ -37,12 +37,11 @@ description,Bot description here
| Key | Description | Default | Example |
|-----|-------------|---------|---------|
| `llm-model` | Model path or name | Local model | `../../../../data/llm/model.gguf` |
| `llm-key` | API key (if using external) | `none` | `sk-...` for external APIs |
| `llm-url` | LLM endpoint URL | `http://localhost:8081` | Local or external endpoint |
| `llm-cache` | Enable LLM caching | `false` | `true` |
| `llm-cache-ttl` | Cache time-to-live (seconds) | `3600` | `7200` |
### Knowledge Base
@@ -56,14 +55,14 @@ description,Bot description here
| `topK` | Number of search results | `5` | `10` |
### Server Configuration
| Key | Description | Default | Example |
|-----|-------------|---------|---------|
| `server_host` | Server bind address | `0.0.0.0` | `localhost` |
| `server_port` | Server port | `8080` | `3000` |
| `sites_root` | Sites root directory | `/tmp` | `/var/www` |
| `mcp-server` | Enable MCP server | `false` | `true` |
### Database
@@ -151,8 +150,8 @@ description,Bot description here
| `ocrEnabled` | Enable OCR | `false` | `true` |
| `speechEnabled` | Enable speech | `false` | `true` |
| `translationEnabled` | Enable translation | `false` | `true` |
| `cacheEnabled` | Enable cache component | `false` | `true` |
| `cacheUrl` | Cache URL | `redis://localhost:6379` | `redis://cache:6379` |
## Environment Variable Override
@@ -223,7 +222,7 @@ Changes to `config.csv` can be reloaded without restart:
- Test rate limits
### Storage Issues
- Verify drive is running
- Check access credentials
- Test bucket permissions
@@ -245,7 +244,6 @@ welcomeMessage,Hello! I'm here to help with any questions.
llmModel,gpt-4
llmApiKey,${LLM_API_KEY}
llmTemperature,0.3
databaseUrl,${DATABASE_URL}
minioEndpoint,storage.example.com
minioAccessKey,${MINIO_ACCESS}
@@ -208,7 +208,7 @@ Maintain state within sessions:
```csv
sessionStateEnabled,true
sessionTimeout,1800
sessionStorage,cache
```
Stores:
@@ -293,7 +293,7 @@ Cache frequently accessed context:
```csv
contextCacheEnabled,true
contextCacheProvider,cache
contextCacheTTL,300
contextCacheMaxSize,1000
```
@@ -1,419 +1,167 @@
# LLM Configuration
Configure Large Language Model providers for bot conversations. BotServer prioritizes local models for privacy and cost-effectiveness.
## Overview
BotServer supports both local models (GGUF format) and cloud APIs. The default configuration uses local models running on your hardware.
## Local Models (Default)
### Configuration
From `default.gbai/default.gbot/config.csv`:
```csv
llm-key,none
llm-url,http://localhost:8081
llm-model,../../../../data/llm/DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf
```
### LLM Server Settings
```csv
llm-server,false
llm-server-path,botserver-stack/bin/llm/build/bin
llm-server-host,0.0.0.0
llm-server-port,8081
llm-server-gpu-layers,0
llm-server-ctx-size,4096
llm-server-n-predict,1024
llm-server-parallel,6
llm-server-cont-batching,true
```
### Supported Local Models
- **DeepSeek-R1-Distill-Qwen** - Efficient reasoning model
- **Llama-3** - Open source, high quality
- **Mistral** - Fast and capable
- **Phi-3** - Microsoft's small but powerful model
- **Qwen** - Multilingual support
### GPU Acceleration
```csv
llm-server-gpu-layers,33 # Number of layers to offload to GPU
```
Set to 0 for CPU-only operation.
## Embeddings Configuration
For semantic search and vector operations:
```csv
embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf
```
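The embedding endpoint turns text into vectors that BotServer compares by similarity. The core operation is cosine similarity, sketched here in plain Python as an illustration (the actual implementation lives in the Rust sources and is not shown in this chapter):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```

Vectors from the same model pointing the same way score near 1.0; unrelated texts score near 0.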
## Caching Configuration
Reduce latency and costs with intelligent caching:
```csv
llm-cache,true
llm-cache-ttl,3600
llm-cache-semantic,true
llm-cache-threshold,0.95
```
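With `llm-cache-semantic` enabled, a cached reply can be reused when a new prompt's embedding is close enough to a previously seen one. A minimal sketch of that lookup, using the 0.95 threshold from the config above (an illustration of the idea, not BotServer's internal cache):

```python
def semantic_lookup(query_vec, cache, threshold=0.95):
    """cache: list of (embedding, cached_reply). Return a reply on a close-enough hit."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best = max(cache, key=lambda entry: cos(query_vec, entry[0]), default=None)
    if best and cos(query_vec, best[0]) >= threshold:
        return best[1]
    return None  # cache miss: fall through to the LLM
```

A higher threshold means fewer, safer cache hits; lowering it saves more LLM calls at the risk of reusing answers for questions that only look similar.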
## Cloud Providers (Optional)
### External API Configuration
For cloud LLM services, configure:
```csv
llm-key,your-api-key
llm-url,https://api.provider.com/v1
llm-model,model-name
```
### Provider Examples
| Provider | URL | Model Examples |
|----------|-----|----------------|
| Local | http://localhost:8081 | GGUF models |
| API Compatible | Various | Various models |
| Custom | Your endpoint | Your models |
## Performance Tuning
### Context Size
```csv
llm-server-ctx-size,4096 # Maximum context window
prompt-compact,4 # Compact after N exchanges
```
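`prompt-compact,4` means the conversation history is condensed after four exchanges so it keeps fitting the context window. A simplified sketch of that policy (the real compaction may summarize older turns rather than drop them; this truncation variant is an assumption):

```python
def compact_history(history: list[str], keep_last: int = 4) -> list[str]:
    """Collapse everything before the last `keep_last` exchanges into one marker."""
    if len(history) <= keep_last:
        return history
    return ["[earlier conversation summarized]"] + history[-keep_last:]
```

Keeping the context bounded this way trades long-range recall for predictable latency and memory use.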
### Parallel Processing
```csv
llm-server-parallel,6 # Concurrent requests
llm-server-cont-batching,true # Continuous batching
```
### Memory Settings
```csv
llm-server-mlock,false # Lock model in memory
llm-server-no-mmap,false # Disable memory mapping
```
## Model Selection Guide
### By Use Case
| Use Case | Recommended Model | Configuration |
|----------|------------------|---------------|
| General chat | DeepSeek-R1-Distill | Default config |
| Code assistance | Qwen-Coder | Increase context |
| Multilingual | Qwen-Multilingual | Add language params |
| Fast responses | Phi-3-mini | Reduce predict tokens |
| High accuracy | Llama-3-70B | Increase GPU layers |
## Monitoring
Check LLM server status:
```bash
curl http://localhost:8081/health
```
View model information:
```bash
curl http://localhost:8081/v1/models
```
## Troubleshooting
### Model Not Loading
1. Check file path is correct
2. Verify GGUF format
3. Ensure sufficient memory
4. Check GPU drivers (if using GPU)
### Debugging
Enable debug mode:
```csv
llmDebugMode,true
llmVerboseLogging,true
llmTraceRequests,true
```
### Slow Responses
1. Reduce context size
2. Enable GPU acceleration
3. Use smaller model
4. Enable caching
### High Memory Usage
1. Use quantized models (Q4, Q5)
2. Reduce batch size
3. Enable memory mapping
4. Lower context size
## Migration Guide
### Switching Providers
1. Update `llmProvider` and `llmModel`
2. Set appropriate API key
3. Adjust token limits for new model
4. Test with sample queries
5. Update system prompts if needed
6. Monitor for behavior changes
### Upgrading Models
1. Test new model in development
2. Compare outputs with current model
3. Adjust temperature/parameters
4. Update fallback configuration
5. Gradual rollout to production
6. Monitor metrics closely
## Best Practices
1. **Start with local models** - Better privacy and no API costs
2. **Use appropriate model size** - Balance quality vs speed
3. **Enable caching** - Reduce redundant computations
4. **Monitor resources** - Watch CPU/GPU/memory usage
5. **Test different models** - Find the best fit for your use case

# Drive Integration
The drive component provides S3-compatible object storage for BotServer, storing bot packages, documents, and user files.
## Overview
BotServer uses the drive component as its primary storage backend for:
- Bot packages (`.gbai` directories)
- Knowledge base documents (`.gbkb` files)
- Configuration files (`config.csv`)
## Configuration
The drive is configured through environment variables that are automatically generated during bootstrap:
- `DRIVE_SERVER` - Drive endpoint URL (default: `http://localhost:9000`)
- `DRIVE_ACCESSKEY` - Access key for authentication
- `DRIVE_SECRET` - Secret key for authentication
### Automatic Upload
When deploying a bot package, BotServer automatically:
1. Creates a bucket if it doesn't exist
2. Uploads all package files
3. Maintains directory structure
4. Preserves file permissions
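Step 3 (maintaining directory structure) boils down to mapping local package files to object keys. A sketch, assuming POSIX-style paths; the actual uploader is BotServer internals:

```python
import os

# Sketch only: map local package files to S3-style object keys while
# preserving the directory structure. Paths here are illustrative.
def object_keys(package_root, file_paths):
    keys = []
    for path in file_paths:
        rel = os.path.relpath(path, package_root)
        keys.append(rel.replace(os.sep, "/"))  # object keys always use '/'
    return keys
```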
### File Operations
The system provides file operations through the drive client:
- **Get Object**: Retrieve files from buckets
- **Put Object**: Store files in buckets
- **List Objects**: Browse bucket contents
- **Create Bucket**: Initialize new storage buckets
### Real-time Synchronization
The bot monitors its bucket for changes:
- Configuration updates trigger automatic reload
- New knowledge base files are indexed immediately
- Deleted files are removed from the index
### Drive Monitor
The `DriveMonitor` service watches for changes in drive storage:
- Detects configuration updates
- Monitors document additions
- Triggers re-indexing when knowledge base files change
- Broadcasts theme changes from `config.csv` updates
- Triggers bot reloads
- Syncs local cache with drive
## Bootstrap Integration
### 1. Installation
During initialization, the bootstrap manager:
- Installs the drive binary if not present
- Configures with generated credentials
- Creates data directories
- Creates buckets for each bot template
- Uploads template files to drive
### 2. Knowledge Base Storage
Knowledge base files are:
- Uploaded to drive buckets
- Indexed for vector search
- Retrieved on-demand for context
- Cached locally for performance
### 3. File Retrieval
The BASIC `GET` keyword can retrieve files from drive:
```basic
content = GET "knowledge.gbkb/document.pdf"
```
This retrieves files from the bot's bucket in drive storage.
## Media Handling
The multimedia handler uses drive for:
- Storing uploaded images
- Saving audio recordings
- Managing video files
- Serving media files
- Managing attachments
- Processing thumbnails
## Console Integration
The built-in console provides a file browser for drive:
- Browse buckets and folders
- View and edit files
- Navigate the storage hierarchy
- Real-time file operations
### File Tree Navigation
The console file tree shows:
- 🤖 Bot packages (`.gbai` buckets)
- 📦 Other storage buckets
- 📁 Folders within buckets
- 📄 Individual files
Console routes:
```
/media/                # Browse uploaded media
/files/{bot}/          # Browse bot files
/download/{bot}/{file} # Download specific file
```
## AWS SDK Configuration
BotServer uses the AWS SDK S3 client configured for drive:
```rust
let config = aws_config::from_env()
    .endpoint_url(&drive_endpoint)
    .region("us-east-1")
    .load()
    .await;
```
This is configured with `force_path_style(true)` for compatibility with S3-compatible storage. The client manages connection pooling automatically for efficient operations.
## Deployment Modes
### Local Mode
Default mode where drive runs on the same machine:
- Binary downloaded to `{{BIN_PATH}}/drive`
- Data stored in `{{DATA_PATH}}`
- Logs written to `{{LOGS_PATH}}/drive.log`
### Cloud Storage
While the drive typically runs locally alongside BotServer, it can be configured to use:
- Remote S3-compatible instances
- AWS S3 (change endpoint URL)
- Azure Blob Storage (with S3 compatibility)
- Google Cloud Storage (with S3 compatibility)
### Container Mode
Drive can run in a container with mapped volumes for persistent storage.
### External Storage
Configure BotServer to use existing S3-compatible infrastructure by updating the drive configuration.
## Security
### Access Control
- Credentials are generated with cryptographically secure random values
- Access keys are generated with 32 random bytes
- Secret keys are generated with 64 random bytes
- TLS can be enabled for secure communication
- Bucket policies control access per bot
## Monitoring
### Health Checks
- Drive console on port 9001 (optional)
- API endpoint on port 9000
- Health checks via `/health/live`
- Metrics available via `/metrics`
## Troubleshooting
### Check Drive Status
The package manager monitors drive status with:
```
ps -ef | grep drive | grep -v grep
```
### Console Access
Drive console available at `http://localhost:9001` for:
- Bucket management
- Access policy configuration
- Usage statistics
- Performance metrics
- User management
- Access logs
### Common Issues
1. **Connection Failed**: Check drive is running and ports are accessible
2. **Access Denied**: Verify credentials in environment variables
3. **Bucket Not Found**: Ensure bot deployment completed successfully
4. **Upload Failed**: Check disk space and permissions
### Debug Logging
Enable trace logging to see drive operations:
- File retrieval details
- Upload confirmations
- Bucket operations
- Error responses
- Authentication attempts
## Best Practices
1. **Regular Backups**: Back up drive data directory regularly
2. **Monitor Disk Usage**: Ensure adequate storage space
3. **Secure Credentials**: Never commit credentials to version control
4. **Access Control**: Use bucket policies to restrict access
5. **Use Buckets Wisely**: One bucket per bot for isolation
6. **Versioning**: Enable object versioning for critical data
7. **Lifecycle Policies**: Configure automatic cleanup for old files
8. **Clean Up**: Remove unused files to save storage space

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `storageProvider` | String | No | "drive" | Storage provider |
| `storageEndpoint` | String | Yes | "localhost:9000" | S3-compatible drive endpoint |
| `storageAccessKey` | String | Yes | None | Access key |
| `storageSecretKey` | String | Yes | None | Secret key |
| `storageBucket` | String | No | "botserver" | Default bucket |
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `cacheEnabled` | Boolean | No | true | Enable caching |
| `cacheProvider` | String | No | "cache" | Cache provider |
| `cacheUrl` | String | No | "redis://localhost:6379" | Cache URL |
| `cacheTtl` | Number | No | 3600 | Default TTL (s) |
| `cacheMaxSize` | Number | No | 100 | Max cache size (MB) |

### 1. File Detection
The `DriveMonitor` service watches for changes in `.gbdialog` directories:
- Monitors `.bas` files in drive storage
- Detects new or modified scripts
- Triggers compilation automatically
### 2. Source Processing
When a `.bas` file changes, the compiler:
- Downloads the file from drive
- Creates a local working directory
- Invokes the `BasicCompiler` to process the script
Tools are recompiled automatically when:
- The source `.bas` file is modified
- The file's ETag changes in drive storage
- A manual recompilation is triggered
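ETag-based change detection can be sketched as comparing the current listing against the last seen values. This is illustrative only; `list_scripts` stands in for a real drive listing call, and `DriveMonitor`'s actual implementation is in Rust.

```python
# Sketch of ETag-based change detection; `list_scripts` is a stand-in
# for a real drive listing call returning (key, etag) pairs.
def changed_scripts(list_scripts, last_etags):
    changed = []
    for key, etag in list_scripts():
        if last_etags.get(key) != etag:
            changed.append(key)      # new or modified script
            last_etags[key] = etag   # remember the version we compiled
    return changed
```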
## Working Directory Structure

# GET Keyword Integration
The `GET` keyword in BotServer provides file retrieval capabilities from both local filesystem and drive (S3-compatible) storage, enabling tools to access documents, data files, and other resources.
## Overview
The `GET` keyword is a fundamental BASIC command that retrieves file contents as strings. It supports:
- Local file system access (with safety checks)
- Drive (S3-compatible) bucket retrieval
- URL fetching (HTTP/HTTPS)
- Integration with knowledge base documents
The GET keyword determines the source based on the path format:
1. **URL Detection**: Paths starting with `http://` or `https://`
2. **Drive Storage**: All other paths (retrieved from bot's bucket)
3. **Safety Validation**: Paths are checked for directory traversal attempts
### Drive (S3-compatible) Integration
When retrieving from drive storage:
```basic
# Retrieves from: {bot-name}.gbai bucket
let report = GET "reports/2024/quarterly.pdf"
```
The implementation:
1. Connects to drive using configured credentials
2. Retrieves from the bot's dedicated bucket
3. Returns file contents as string
4. Handles binary files by converting to text
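The source-detection rules can be sketched as a small classifier. The function and error names here are illustrative, not BotServer's actual API; the real checks run in Rust.

```python
# Illustrative sketch of GET source detection; names are assumptions.
def resolve_get_source(path):
    if path.startswith(("http://", "https://")):
        return "url"  # fetched over HTTP/HTTPS
    parts = [p for p in path.split("/") if p]
    if ".." in parts or path.startswith("/"):
        # Reject directory traversal and absolute paths for safety
        raise ValueError("directory traversal rejected")
    return "drive"  # retrieved from the bot's bucket
```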
### Timeouts
- URL fetches: 30-second timeout
- Drive operations: Network-dependent
- Local files: Immediate (if accessible)
### File Size Limits

# Tool Format
BotServer generates OpenAI-compatible function definitions from BASIC scripts, enabling integration with OpenAI's function calling API.

|---------|------------|------------------|
| Start server & basic chat | 01 (Run and Talk) | `TALK`, `HEAR` |
| Package system overview | 02 (About Packages) | |
| Knowledgebase management | 03 (gbkb Reference) | `USE KB`, `SET KB`, `ADD WEBSITE` |
| UI theming | 04 (gbtheme Reference) | (CSS/HTML assets) |
| BASIC dialog scripting | 05 (gbdialog Reference) | All BASIC keywords (`TALK`, `HEAR`, `LLM`, `FORMAT`, `USE KB`, `SET KB`, `ADD WEBSITE`, …) |
| Custom Rust extensions | 06 (gbapp Reference) | `USE TOOL`, custom Rust code |
| Bot configuration | 07 (gbot Reference) | `config.csv` fields |
| Builtin tooling | 08 (Tooling) | All keywords listed in the table |
| Answer modes & routing | 07 (gbot Reference) | `answer_mode` column |
| Semantic search & Qdrant | 03 (gbkb Reference) | `ADD WEBSITE`, vector search |
| Email & external APIs | 08 (Tooling) | `CALL`, `CALL ASYNC` |
| Scheduling & events | 08 (Tooling) | `SET SCHEDULE`, `ON` |

- `get_embedding()` - Vector embeddings
- `count_tokens()` - Token counting
## Answer Modes
### Direct LLM Mode
Bot responds using only the LLM:
```
Answer Mode=direct
```
### Document Reference Mode
LLM uses knowledge base documents:
```
Answer Mode=document-ref
```
### Tool Mode
LLM can discover and call tools:
```
Answer Mode=tool
```
### Hybrid Mode
Combines documents and tools:
```
Answer Mode=hybrid
```
## Context Management
### Context Window

Sessions stored in PostgreSQL:
- `user_sessions` table
- Cached in cache component for performance
- Automatic cleanup on expiry
## Message History

## Object Storage
### Drive (S3-Compatible) Integration
The `drive` module provides cloud-native storage:
- **S3-Compatible API**: Use AWS SDK with S3-compatible storage
- **Bucket Management**: Create and manage storage buckets
- **Object Operations**: Upload, download, list, delete files
- **Secure Access**: Credential-based authentication
## Caching
Cache integration via the Valkey component:
- **Session Caching**: Fast session state retrieval
- **Query Caching**: Cache expensive database queries
- **Rate Limiting**: Implement rate limits with cache component
- **Distributed State**: Share state across multiple instances
## Automation & Scheduling
The `bootstrap` module provides automated setup:
- **Component Installation**: Install PostgreSQL, cache, drive, etc.
- **Credential Generation**: Generate secure passwords automatically
- **Database Initialization**: Apply migrations and create schema
- **Environment Configuration**: Write `.env` files with settings
Available components include:
- `tables` (PostgreSQL)
- `cache` (Valkey)
- `drive` (S3-compatible storage)
- `llm` (Local LLM server)
- `vector_db` (Qdrant)
- `email`, `proxy`, `directory`, `dns`, `meeting`, and more

### Storage Layers
1. **Drive (S3-compatible) Storage** - Raw document files
2. **PostgreSQL** - Document metadata and references
3. **Qdrant** - Vector embeddings for semantic search
### ADD_KB (Implicit)
Documents in .gbkb folders are automatically:
- Uploaded to drive storage
- Indexed into Qdrant
- Available for USE_KB
### Document Processing Pipeline
1. **Upload**: Files uploaded to drive bucket
2. **Extraction**: Text extracted from documents
3. **Chunking**: Documents split into segments
4. **Embedding**: Generate vector embeddings
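The chunking step (3) can be sketched as fixed-size windows with overlap so context carries across segment boundaries. The segment size and overlap here are invented; BotServer's real chunker is internal.

```python
# Illustrative chunking step; segment size and overlap are assumptions.
def chunk_text(text, max_chars=200, overlap=20):
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap keeps context across boundaries
    return chunks
```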
### Caching
- Embedding cache for repeated queries
- Document cache in cache component
- Search result caching
- Metadata caching

Storage in BotServer is organized into:
- **PostgreSQL** - Structured data and metadata
- **Drive** - S3-compatible object storage for files and documents
- **Cache (Valkey)** - Session cache and temporary data
- **Qdrant** - Vector embeddings for semantic search
- **Local filesystem** - Working directories and cache
### Data Flow
```
User Upload → Drive Storage → Processing → Database Metadata
        ↓                    ↓
  Vector Database    PostgreSQL Tables
        ↓                    ↓
```
Connection pool managed by Diesel:
- Connection recycling
- Timeout protection
## Drive (S3-Compatible) Object Storage
### File Organization
Drive stores unstructured data:
```
drive/
├── bot-name.gbai/ # Bot-specific bucket
│ ├── bot-name.gbdialog/ # BASIC scripts
│ ├── bot-name.gbkb/ # Knowledge base documents
```
```bash
DRIVE_ACCESSKEY=minioadmin
DRIVE_SECRET=minioadmin
```
## Cache (Valkey)
### Cached Data
Cache stores temporary and cached data:
- Session tokens
- Temporary conversation state
- API response cache
### Configuration
```bash
CACHE_URL=redis://localhost:6379
CACHE_POOL_SIZE=5
CACHE_TTL_SECONDS=86400
```
## Qdrant Vector Database
- Automated backup scripts
2. **Object Storage**
- Drive replication
- Versioning enabled
- Cross-region backup
1. **Use appropriate storage**
- PostgreSQL for structured data
- Drive for files
- Cache (Valkey) for sessions
- Qdrant for vectors
2. **Implement caching**
- Cache frequent queries
- Use cache for sessions
- Local cache for static files
3. **Batch operations**
Monitor these metrics:
- Database size and growth
- Drive bucket usage
- Cache memory usage
- Qdrant index size
- Disk space available
### Health Checks
- Database connectivity
- Drive availability
- Cache response time
- Qdrant query performance
- Disk space warnings
1. **Regular Maintenance**
- Vacuum PostgreSQL
- Clean drive buckets
- Flush cache
- Reindex Qdrant
2. **Monitor Growth**
```bash
MAX_CONNECTIONS=100
STATEMENT_TIMEOUT=30s
# Drive
MAX_OBJECT_SIZE=5GB
BUCKET_QUOTA=100GB
# Cache
MAX_MEMORY=2GB
EVICTION_POLICY=allkeys-lru
```

- Rust toolchain
- VS Code with rust-analyzer
- Git and GitHub CLI
- LXC (optional)
### Learning

# IDE Extensions
BotServer provides extensions and plugins for modern code editors to enhance the development experience with BASIC scripts, bot configurations, and platform integration.
## Zed Editor (Recommended)
Zed is a high-performance, collaborative code editor built for the modern developer.
### Installation
```bash
# Install Zed
curl https://zed.dev/install.sh | sh
# Install BotServer extension
zed --install-extension botserver
```
### Features
#### Syntax Highlighting
- BASIC keywords and functions
- Configuration CSV files
- Bot package structure recognition
- Theme CSS variables
#### Language Server Protocol (LSP)
Configure in `~/.config/zed/settings.json`:
```json
{
"lsp": {
"botserver": {
"binary": {
"path": "/usr/local/bin/botserver",
"arguments": ["--lsp"]
},
"initialization_options": {
"bot": "default",
"enableDebug": true
}
}
}
}
```
#### Key Bindings
Add to `~/.config/zed/keymap.json`:
```json
{
"bindings": {
"cmd-shift-b": "botserver:run-script",
"cmd-shift-d": "botserver:deploy-bot",
"cmd-shift-l": "botserver:view-logs"
}
}
```
#### Project Settings
Create `.zed/settings.json` in your bot project:
```json
{
"file_types": {
"BASIC": ["*.bas", "*.gbdialog"],
"Config": ["*.csv", "*.gbot"]
},
"format_on_save": true,
"tab_size": 2
}
```
## Vim/Neovim Plugin
### Installation
Using vim-plug:
```vim
" ~/.vimrc or ~/.config/nvim/init.vim
Plug 'botserver/vim-botserver'
```
Using lazy.nvim:
```lua
-- ~/.config/nvim/lua/plugins/botserver.lua
return {
'botserver/nvim-botserver',
config = function()
require('botserver').setup({
server_url = 'http://localhost:8080',
default_bot = 'edu'
})
end
}
```
### Features
#### Syntax Files
```vim
" ~/.vim/syntax/basic.vim
syn keyword basicKeyword TALK HEAR SET GET LLM
syn keyword basicConditional IF THEN ELSE END
syn keyword basicRepeat FOR EACH NEXT
syn match basicComment "^REM.*$"
syn match basicComment "'.*$"
```
#### Commands
- `:BotDeploy` - Deploy current bot
- `:BotRun` - Run current script
- `:BotLogs` - View server logs
- `:BotConnect` - Connect to server
## Emacs Mode
### Installation
```elisp
;; ~/.emacs.d/init.el
(add-to-list 'load-path "~/.emacs.d/botserver-mode")
(require 'botserver-mode)
(add-to-list 'auto-mode-alist '("\\.bas\\'" . botserver-mode))
```
### Features
#### Major Mode
```elisp
(define-derived-mode botserver-mode prog-mode "BotServer"
"Major mode for editing BotServer BASIC scripts."
(setq-local comment-start "REM ")
(setq-local comment-end "")
(setq-local indent-line-function 'botserver-indent-line))
```
#### Key Bindings
- `C-c C-c` - Run current script
- `C-c C-d` - Deploy bot
- `C-c C-l` - View logs
## Sublime Text Package
### Installation
```bash
# Via Package Control
# Cmd+Shift+P -> Package Control: Install Package -> BotServer
# Manual installation
cd ~/Library/Application\ Support/Sublime\ Text/Packages
git clone https://github.com/botserver/sublime-botserver BotServer
```
### Features
- BASIC syntax highlighting
- Build system for running scripts
- Snippets for common patterns
- Project templates
## TextMate Bundle
### Installation
```bash
cd ~/Library/Application\ Support/TextMate/Bundles
git clone https://github.com/botserver/botserver.tmbundle
```
### Features
- Language grammar for BASIC
- Commands for deployment
- Tab triggers for snippets
## Language Server Protocol (LSP)
BotServer includes an LSP server that works with any LSP-compatible editor:
### Starting the LSP Server
```bash
botserver --lsp --stdio
```
### Capabilities
- Completion
- Hover documentation
- Go to definition
- Find references
- Diagnostics
- Code actions
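Any LSP client talks to `botserver --lsp --stdio` using JSON-RPC messages framed with a `Content-Length` header. A minimal framing sketch; the params shown mirror the configuration example and are assumptions, not a documented schema:

```python
import json

# Sketch of JSON-RPC framing for an LSP client over stdio.
def lsp_frame(method, params, msg_id=1):
    body = json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params})
    data = body.encode("utf-8")
    # Content-Length counts bytes of the body, per the LSP base protocol
    return b"Content-Length: %d\r\n\r\n%b" % (len(data), data)
```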
### Configuration Example
For any LSP client:
```json
{
"command": ["botserver", "--lsp", "--stdio"],
"filetypes": ["basic", "bas"],
"rootPatterns": [".gbai", "config.csv"],
"initializationOptions": {
"bot": "default"
}
}
```
## Common Features Across All Editors
### Snippets
#### Tool Definition
```basic
PARAM ${name} AS ${type} LIKE "${example}" DESCRIPTION "${description}"
DESCRIPTION "${tool_description}"
${body}
```
#### Dialog Flow
```basic
TALK "${greeting}"
HEAR response
IF response = "${expected}" THEN
${action}
END IF
```
#### Knowledge Base Usage
```basic
USE KB "${collection}"
answer = LLM "${prompt}"
TALK answer
CLEAR KB
```
### File Associations
| Extension | File Type | Purpose |
|-----------|-----------|---------|
| `.bas` | BASIC Script | Dialog logic |
| `.gbdialog` | Dialog Package | Contains .bas files |
| `.gbkb` | Knowledge Base | Document collections |
| `.gbot` | Bot Config | Contains config.csv |
| `.gbtheme` | Theme Package | CSS themes |
| `.gbai` | Bot Package | Root container |
## Debugging Support
### Breakpoints
Set breakpoints in BASIC scripts:
```basic
TALK "Before breakpoint"
' BREAKPOINT
TALK "After breakpoint"
```
### Watch Variables
Monitor variable values during execution:
```basic
' WATCH: user_name
' WATCH: response
user_name = GET "name"
response = LLM "Hello " + user_name
```
### Step Execution
Control flow with debug commands:
- Step Over: Execute current line
- Step Into: Enter function calls
- Step Out: Exit current function
- Continue: Resume execution
## Best Practices
1. **Use Format on Save**: Keep code consistently formatted
2. **Enable Linting**: Catch errors early
3. **Configure Shortcuts**: Speed up common tasks
4. **Use Snippets**: Reduce repetitive typing
5. **Keep Extensions Updated**: Get latest features and fixes
## Troubleshooting
### LSP Not Starting
- Check botserver binary is in PATH
- Verify server is running on expected port
- Review LSP logs in editor
### Syntax Highlighting Missing
- Ensure file extensions are properly associated
- Restart editor after installing extension
- Check language mode is set correctly
### Commands Not Working
- Verify server connection settings
- Check API credentials if required
- Review editor console for errors

### Optional Components
- **Drive**: For S3-compatible storage (auto-installed by bootstrap)
- **Cache (Valkey)**: For caching (auto-installed by bootstrap)
- **LXC**: For containerized development
## Getting Started
On first run, bootstrap will:
- Install PostgreSQL (if needed)
- Install drive (S3-compatible storage)
- Install cache (Valkey)
- Create database schema
- Upload bot templates
- Generate secure credentials
```bash
diesel migration run
```
4. Update models in `src/core/shared/models.rs`
## Remote Development Setup
### SSH Configuration for Stable Connections
When developing on remote Linux servers, configure SSH for stable monitoring connections:
Edit `~/.ssh/config`:
```
Host *
ServerAliveInterval 60
ServerAliveCountMax 5
```
This configuration:
- **ServerAliveInterval 60**: Sends keepalive packets every 60 seconds
- **ServerAliveCountMax 5**: Allows up to 5 missed keepalives before disconnecting
- Prevents SSH timeouts during long compilations or debugging sessions
- Maintains stable connections for monitoring logs and services
### Remote Monitoring Tips
```bash
# Monitor BotServer logs in real-time
ssh user@server 'tail -f botserver.log'
# Watch compilation progress
ssh user@server 'cd /path/to/botserver && cargo build --release'
# Keep terminal session alive
ssh user@server 'tmux new -s botserver'
```
## Debugging
### Enable Debug Logging
- Verify DATABASE_URL is correct
- Check user permissions
2. **Drive Connection Failed**
- Ensure drive is running on port 9000
- Check DRIVE_ACCESSKEY and DRIVE_SECRET
3. **Port Already in Use**
- Clean build: `cargo clean`
- Check dependencies: `cargo tree`
## LXC Development
### Using LXC Containers
```bash
# Create development containers
lxc-create -n botserver-dev-db -t download -- -d alpine -r 3.18 -a amd64
lxc-create -n botserver-dev-drive -t download -- -d alpine -r 3.18 -a amd64
lxc-create -n botserver-dev-cache -t download -- -d alpine -r 3.18 -a amd64
# Configure PostgreSQL container
lxc-start -n botserver-dev-db
lxc-attach -n botserver-dev-db -- sh -c "
apk add postgresql14 postgresql14-client
rc-service postgresql setup
rc-service postgresql start
psql -U postgres -c \"CREATE USER gbuser WITH PASSWORD 'password';\"
psql -U postgres -c \"CREATE DATABASE botserver OWNER gbuser;\"
"
# Configure MinIO (Drive) container
lxc-start -n botserver-dev-drive
lxc-attach -n botserver-dev-drive -- sh -c "
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
MINIO_ROOT_USER=driveadmin MINIO_ROOT_PASSWORD=driveadmin ./minio server /data --console-address ':9001' &
"
# Configure Redis (Cache) container
lxc-start -n botserver-dev-cache
lxc-attach -n botserver-dev-cache -- sh -c "
apk add redis
rc-service redis start
"
# Get container IPs
DB_IP=$(lxc-info -n botserver-dev-db -iH)
DRIVE_IP=$(lxc-info -n botserver-dev-drive -iH)
CACHE_IP=$(lxc-info -n botserver-dev-cache -iH)
echo "Database: $DB_IP:5432"
echo "Drive: $DRIVE_IP:9000"
echo "Cache: $CACHE_IP:6379"
```
Start all services:
```bash
lxc-start -n botserver-dev-db
lxc-start -n botserver-dev-drive
lxc-start -n botserver-dev-cache
```
## Contributing Guidelines
@ -1,6 +1,46 @@
# Code Standards
BotServer follows Rust best practices with a unique approach: **all code is fully generated by LLMs** following specific prompts and patterns.
## LLM-Generated Code Policy
### Core Principle
**All source code in BotServer is generated by Large Language Models (LLMs)**. This ensures consistency, reduces human error, and leverages AI capabilities for optimal code generation.
### Important Guidelines
1. **Comments are discouraged** - Code should be self-documenting through clear naming and structure
2. **Comments may be deleted during optimization** - Do not rely on comments for critical information
3. **Documentation should be external** - Use README files and documentation chapters, not inline comments
4. **Your comments are dangerous** - They can become outdated (decoupled from the code) and misleading
### Why No Comments?
- LLM-generated code is consistently structured
- Function and variable names are descriptive
- External documentation is more maintainable
- Comments become stale and misleading over time
- Optimization passes may remove comments
## Development Workflow
Follow the LLM workflow defined in `/prompts/dev/platform/README.md`:
### LLM Strategy
1. **Sequential Development**: One requirement at a time with sequential commits
2. **Fallback Strategy**: After 3 attempts or 10 minutes, try different LLMs in sequence
3. **Error Handling**: Stop on unresolved errors and consult alternative LLMs
4. **Warning Removal**: Handle as last task before committing
5. **Final Validation**: Use `cargo check` with appropriate LLM
### Code Generation Rules
From `/prompts/dev/platform/botserver.md`:
- Sessions must always be retrieved by id when session_id is present
- Never suggest installing software - bootstrap handles everything
- Configuration stored in `.gbot/config` and `bot_configuration` table
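The session rule above can be sketched in isolation. Everything here (the `Session` type, the store, and the function names) is hypothetical and only illustrates the pattern of always retrieving by id when `session_id` is present:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Session {
    id: u64,
    user: String,
}

struct SessionStore {
    sessions: HashMap<u64, Session>,
}

impl SessionStore {
    // When session_id is present, always retrieve by id;
    // only fall back to creating a session when it is absent
    fn get_or_create(&mut self, session_id: Option<u64>, user: &str) -> Session {
        if let Some(id) = session_id {
            if let Some(existing) = self.sessions.get(&id) {
                return existing.clone();
            }
        }
        self.create(user)
    }

    fn create(&mut self, user: &str) -> Session {
        let id = self.sessions.len() as u64 + 1;
        let session = Session { id, user: user.to_string() };
        self.sessions.insert(id, session.clone());
        session
    }
}
```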
## Rust Style Guide
@ -42,28 +82,21 @@ cargo clippy --fix
- **SCREAMING_SNAKE_CASE**: Constants
- **'lifetime**: Lifetime parameters
### Self-Documenting Names
Instead of comments, use descriptive names:
```rust
// BAD: Needs comment
fn proc(d: &str) -> Result<String> {
    // Process user data
    // ...
}
// GOOD: Self-documenting
fn process_user_registration_data(registration_form: &str) -> Result<String> {
    // No comment needed
}
```
## Code Organization
@ -101,20 +134,42 @@ use super::utils;
use self::helper::*;
```
## Documentation Strategy
### External Documentation Only
```rust
// DON'T: Inline documentation comments
/// This function creates a user session
/// It takes a user_id and bot_id
/// Returns a Result with Session or Error
fn create_session(user_id: Uuid, bot_id: Uuid) -> Result<Session> {
// Implementation
}
// DO: Self-documenting code + external docs
fn create_user_session_for_bot(user_id: Uuid, bot_id: Uuid) -> Result<Session> {
// Implementation
}
// Document in chapter-10/api-reference.md instead
```
### Where to Document
1. **README.md** files for module overview
2. **Documentation chapters** for detailed explanations
3. **API references** in separate documentation files
4. **Architecture diagrams** in documentation folders
5. **Prompt files** in `/prompts/dev/` for generation patterns
## Error Handling
### Use Result Types
```rust
// Good: Explicit error handling
fn read_configuration_file(path: &str) -> Result<String, std::io::Error> {
std::fs::read_to_string(path)
}
// Bad: Panic on error
fn read_file(path: &str) -> String {
std::fs::read_to_string(path).unwrap()
}
```
### Custom Error Types
@ -123,159 +178,31 @@ fn read_file(path: &str) -> String {
use thiserror::Error;
#[derive(Error, Debug)]
pub enum BotServerError {
    #[error("Database connection failed: {0}")]
    DatabaseConnection(#[from] diesel::result::Error),
    #[error("Invalid configuration: {message}")]
    InvalidConfiguration { message: String },
    #[error("Network request failed")]
    NetworkFailure(#[from] reqwest::Error),
}
```
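The `#[from]` attributes let the `?` operator convert underlying errors automatically. The same pattern can be sketched with the standard library alone (thiserror generates the equivalent `From` and `Display` impls); the variant names and the function here are illustrative, not part of the actual codebase:

```rust
use std::fmt;

#[derive(Debug)]
enum BotServerError {
    InvalidConfiguration { message: String },
    Io(std::io::Error),
}

impl fmt::Display for BotServerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            BotServerError::InvalidConfiguration { message } => {
                write!(f, "Invalid configuration: {message}")
            }
            BotServerError::Io(e) => write!(f, "I/O failure: {e}"),
        }
    }
}

// What #[from] derives: enables `?` on io::Error inside
// functions returning Result<_, BotServerError>
impl From<std::io::Error> for BotServerError {
    fn from(e: std::io::Error) -> Self {
        BotServerError::Io(e)
    }
}

fn read_configuration_file(path: &str) -> Result<String, BotServerError> {
    let raw = std::fs::read_to_string(path)?; // io::Error converted via From
    if raw.trim().is_empty() {
        return Err(BotServerError::InvalidConfiguration {
            message: format!("{path} is empty"),
        });
    }
    Ok(raw)
}
```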
## Testing Standards
### Test Naming
```rust
#[test]
fn user_creation_succeeds_with_valid_data() {
    // Clear test name, no comments needed
}
#[test]
fn user_creation_fails_with_invalid_email() {
    // Self-documenting test name
}
```
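A complete, runnable sketch of this convention; the `is_valid_email` helper is hypothetical, included only to make the test names concrete:

```rust
// Hypothetical validation helper, only here to anchor the tests
fn is_valid_email(address: &str) -> bool {
    let mut parts = address.splitn(2, '@');
    match (parts.next(), parts.next()) {
        (Some(local), Some(domain)) => !local.is_empty() && domain.contains('.'),
        _ => false,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn user_creation_succeeds_with_valid_data() {
        assert!(is_valid_email("user@example.com"));
    }

    #[test]
    fn user_creation_fails_with_invalid_email() {
        assert!(!is_valid_email("not-an-email"));
    }
}
```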
@ -309,24 +236,20 @@ mod tests {
### Never Hardcode Secrets
```rust
// Good: Environment variables
let api_key = std::env::var("API_KEY")?;
let database_url = std::env::var("DATABASE_URL")?;
```
### Validate Input
```rust
// Good: Validate and sanitize
fn validate_and_sanitize_user_input(input: &str) -> Result<String> {
    if input.len() > MAX_INPUT_LENGTH {
        return Err(BotServerError::InputTooLong);
    }
    if !input.chars().all(char::is_alphanumeric) {
        return Err(BotServerError::InvalidCharacters);
    }
    Ok(input.to_string())
@ -338,33 +261,52 @@ fn process_input(input: &str) -> Result<String> {
### Use Iterators
```rust
// Good: Iterator chains
let positive_doubled_sum: i32 = numbers
.iter()
.filter(|n| **n > 0)
.map(|n| n * 2)
.sum();
// Less efficient: Collecting intermediate
let filtered: Vec<_> = numbers.iter().filter(|n| **n > 0).collect();
let doubled: Vec<_> = filtered.iter().map(|n| *n * 2).collect();
let sum: i32 = doubled.iter().sum();
```
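A self-contained version of the chain above, with concrete input to make the behaviour checkable:

```rust
fn positive_doubled_sum(numbers: &[i32]) -> i32 {
    numbers
        .iter()
        .filter(|n| **n > 0)
        .map(|n| n * 2)
        .sum()
}

// With [1, -2, 3, -4, 5], the positives 1, 3, 5 are doubled:
// 2 + 6 + 10 = 18
```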
### Avoid Unnecessary Allocations
```rust
// Borrowing &str avoids copying the input;
// only the transformed output is allocated
fn uppercase_text(text: &str) -> String {
    text.to_uppercase()
}
```
## LLM Prompt References
Key prompts for code generation are stored in `/prompts/dev/`:
- **platform/botserver.md**: Core platform rules
- **platform/add-keyword.md**: Adding new BASIC keywords
- **platform/add-model.md**: Integrating new LLM models
- **platform/fix-errors.md**: Error resolution patterns
- **basic/doc-keyword.md**: BASIC keyword documentation
## Code Review Checklist
Before submitting LLM-generated code:
- [ ] Code compiles without warnings
- [ ] All tests pass
- [ ] Code is formatted with rustfmt
- [ ] Clippy passes without warnings
- [ ] Documentation is updated
- [ ] NO inline comments (use external docs)
- [ ] Function/variable names are self-documenting
- [ ] No hardcoded secrets
- [ ] Error handling is proper
- [ ] Performance implications considered
- [ ] Security implications reviewed
- [ ] Error handling follows Result pattern
- [ ] Follows patterns from `/prompts/dev/`
## Summary
BotServer embraces AI-first development where:
1. **All code is LLM-generated** following consistent patterns
2. **Comments are forbidden** - code must be self-documenting
3. **Documentation lives externally** in dedicated files
4. **Prompts define patterns** in `/prompts/dev/`
5. **Optimization may delete anything** that is not part of the actual code logic
This approach ensures consistency, maintainability, and leverages AI capabilities while avoiding the pitfalls of outdated comments and human inconsistencies.
@ -60,7 +60,7 @@ Each bot has isolated:
- Bot memories
- Knowledge bases
- Configuration
- Drive bucket
### Cross-Bot Protection
@ -95,7 +95,7 @@ Bots created during bootstrap:
1. Template found in `templates/`
2. Bot registered in database
3. Configuration loaded
4. Resources uploaded to drive storage
5. Knowledge base indexed
### Activation
@ -11,10 +11,10 @@ This document provides a comprehensive checklist for security and compliance req
| **Caddy** | Reverse proxy, TLS termination, web server | Apache 2.0 |
| **PostgreSQL** | Relational database | PostgreSQL License |
| **Zitadel** | Identity and access management | Apache 2.0 |
| **Drive** | S3-compatible object storage | AGPLv3 |
| **Stalwart** | Mail server (SMTP/IMAP) | AGPLv3 |
| **Qdrant** | Vector database | Apache 2.0 |
| **Cache (Valkey)** | In-memory cache (Redis-compatible) | BSD 3-Clause |
| **LiveKit** | Video conferencing | Apache 2.0 |
| **Ubuntu** | Operating system | Various |
@ -136,29 +136,29 @@ log_statement = 'all'
---
## Object Storage (Drive)
| Status | Requirement | Component | Standard | Implementation |
|--------|-------------|-----------|----------|----------------|
| ✅ | Encryption at Rest | Drive | All | Server-side encryption (SSE-S3) |
| ✅ | Encryption in Transit | Drive | All | TLS for all connections |
| ✅ | Bucket Policies | Drive | All | Fine-grained access control policies |
| ✅ | Object Versioning | Drive | HIPAA | Version control for data recovery |
| ✅ | Access Logging | Drive | All | Detailed audit logs for all operations |
| ⚠️ | Lifecycle Rules | Drive | LGPD | Configure data retention and auto-deletion |
| ✅ | Immutable Objects | Drive | Compliance | WORM (Write-Once-Read-Many) support |
| 🔄 | Replication | Drive | HIPAA | Multi-site replication for DR |
| ✅ | IAM Integration | Drive | All | Integration with Zitadel via OIDC |
**Environment Variables**:
```bash
DRIVE_ROOT_USER=admin
DRIVE_ROOT_PASSWORD=SecurePassword123!
DRIVE_SERVER_URL=https://drive.example.com
DRIVE_BROWSER=on
DRIVE_IDENTITY_OPENID_CONFIG_URL=http://localhost:8080/.well-known/openid-configuration
DRIVE_IDENTITY_OPENID_CLIENT_ID=drive
DRIVE_IDENTITY_OPENID_CLIENT_SECRET=secret
```
**Bucket Policy Example**:
@ -373,7 +373,7 @@ Unattended-Upgrade::Automatic-Reboot-Time "03:00";
|--------|-------------|----------------|----------|
| 🔄 | Automated Backups | Daily automated backups | All |
| ✅ | Backup Encryption | AES-256 encrypted backups | All |
| ✅ | Off-site Storage | Drive replication to secondary site | HIPAA |
| 📝 | Backup Testing | Quarterly restore tests | All |
| ✅ | Retention Policy | 90 days for full, 30 for incremental | All |
@ -387,8 +387,8 @@ pg_dump -h localhost -U postgres botserver | \
gzip | \
openssl enc -aes-256-cbc -salt -out /backup/pg_${BACKUP_DATE}.sql.gz.enc
# Drive backup
mc mirror drive/botserver /backup/drive_${BACKUP_DATE}/
# Qdrant snapshot
curl -X POST "http://localhost:6333/collections/botserver/snapshots"
@ -463,7 +463,7 @@ curl -X POST "http://localhost:6333/collections/botserver/snapshots"
| ✅ | User Rights | Same as GDPR |
| ✅ | Consent | Zitadel consent management |
| 📝 | Data Protection Officer | Designate DPO |
| ⚠️ | Data Retention | Configure lifecycle policies in Drive |
| ✅ | Breach Notification | Same incident response as GDPR |
---
@ -471,7 +471,7 @@ curl -X POST "http://localhost:6333/collections/botserver/snapshots"
## Implementation Priority
### High Priority (Critical for Production)
1. ✅ TLS 1.3 everywhere (Caddy, PostgreSQL, Drive, Stalwart)
2. ✅ MFA for all admin accounts (Zitadel)
3. ✅ Firewall configuration (UFW)
4. ✅ Automated security updates (unattended-upgrades)
@ -537,8 +537,8 @@ sudo dpkg-reconfigure --priority=low unattended-upgrades
sudo -u postgres psql -c "ALTER SYSTEM SET ssl = 'on';"
sudo systemctl restart postgresql
# 4. Set Drive encryption
mc admin config set drive/ server-side-encryption-s3 on
# 5. Configure Zitadel MFA
# Via web console: Settings > Security > MFA > Require for admins
@ -81,7 +81,7 @@ BotServer uses Zitadel as the primary identity provider:
- **Access Tokens**: JWT with RS256 signing
- **Refresh Tokens**: Secure random 256-bit
- **Session Tokens**: UUID v4 with cache storage
- **Token Rotation**: Automatic refresh on expiry
## Encryption & Cryptography
@ -154,9 +154,9 @@ BotServer uses Zitadel as the primary identity provider:
- SSL/TLS connections enforced
```
### File Storage Security (Drive)
- **Drive Configuration**:
- Bucket encryption: AES-256
- Access: Policy-based access control
- Versioning: Enabled
@ -224,7 +224,7 @@ ZITADEL_DOMAIN="https://your-instance.zitadel.cloud"
ZITADEL_CLIENT_ID="your-client-id"
ZITADEL_CLIENT_SECRET="your-client-secret"
# Drive configuration
MINIO_ENDPOINT="http://localhost:9000"
MINIO_ACCESS_KEY="minioadmin"
MINIO_SECRET_KEY="minioadmin"
@ -234,9 +234,9 @@ MINIO_USE_SSL=true
QDRANT_URL="http://localhost:6333"
QDRANT_API_KEY="your-api-key"
# Cache configuration
CACHE_URL="redis://localhost:6379"
CACHE_PASSWORD="your-password"
# Optional security enhancements
BOTSERVER_ENABLE_AUDIT=true
@ -354,14 +354,15 @@ app.example.com {
### Deployment
1. **Container Security**
```bash
# LXC security configuration
lxc config set botserver-prod security.privileged=false
lxc config set botserver-prod security.idmap.isolated=true
lxc config set botserver-prod security.nesting=false
# Run as non-root user
lxc exec botserver-prod -- useradd -m botuser
lxc exec botserver-prod -- su - botuser
```
2. **LXD/LXC Container Security**
@ -383,7 +384,7 @@ app.example.com {
```
# Firewall rules (UFW/iptables)
- Ingress: Only from Caddy proxy
- Egress: PostgreSQL, Drive, Qdrant, Cache
- Block: All other traffic
- Internal: Component isolation
```
@ -414,7 +415,7 @@ app.example.com {
- [ ] All secrets in environment variables
- [ ] Database encryption enabled (PostgreSQL)
- [ ] Drive encryption enabled
- [ ] Caddy TLS configured (automatic with Let's Encrypt)
- [ ] Rate limiting enabled (Caddy)
- [ ] CORS properly configured (Caddy)
@ -436,7 +437,7 @@ app.example.com {
- [ ] Regular security audits scheduled
- [ ] Penetration testing completed
- [ ] Compliance requirements met
- [ ] Disaster recovery tested (PostgreSQL, Drive backups)
- [ ] Access reviews scheduled (Zitadel)
- [ ] Security training completed
- [ ] Stalwart email security configured (DKIM, SPF, DMARC)
@ -455,7 +456,7 @@ For security issues or questions:
- [Caddy Security](https://caddyserver.com/docs/security) - Reverse proxy and TLS
- [PostgreSQL Security](https://www.postgresql.org/docs/current/security.html) - Database
- [Zitadel Security](https://zitadel.com/docs/guides/manage/security) - Identity and access
- [Drive Security](https://min.io/docs/minio/linux/operations/security.html) - S3-compatible object storage
- [Qdrant Security](https://qdrant.tech/documentation/guides/security/) - Vector database
- [Valkey Security](https://valkey.io/topics/security/) - Cache
@ -20,11 +20,6 @@ AI features are currently available through:
- Bot uses configured LLM provider
- Responses streamed back
## Planned Endpoints
### Text Generation
@ -14,12 +14,12 @@ Each component is registered automatically and downloaded from verified open-sou
### Valkey (Cache)
- **Source:** [valkey.io](https://valkey.io)
- **Purpose:** In-memory caching system (Redis-compatible).
- **License:** BSD 3-Clause
### Drive (S3-Compatible Storage)
- **Source:** [min.io](https://min.io)
- **Purpose:** S3-compatible object storage for file management.
- **License:** AGPLv3
### Qdrant (Vector Database)
@ -144,7 +144,7 @@ When implemented, the Whiteboard API will:
1. **Use WebSocket** for real-time collaboration
2. **Implement CRDT** for conflict-free editing
3. **Store in PostgreSQL** with JSON columns
4. **Cache in the cache component** for performance
5. **Use SVG** as primary format
6. **Support touch devices** and stylus input
7. **Include access controls** and permissions
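CRDTs come in many forms; as a minimal sketch (not the planned implementation), a last-writer-wins register shows the core property that concurrent merges converge regardless of delivery order:

```rust
#[derive(Clone, Debug, PartialEq)]
struct LwwRegister {
    value: String,
    // Logical timestamp; a real system would also break ties by replica id
    timestamp: u64,
}

impl LwwRegister {
    fn new(value: &str, timestamp: u64) -> Self {
        LwwRegister { value: value.to_string(), timestamp }
    }

    // Keep whichever write happened later; merging is commutative,
    // so replicas converge no matter the order of message delivery
    fn merge(&mut self, other: &LwwRegister) {
        if other.timestamp > self.timestamp {
            *self = other.clone();
        }
    }
}
```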
@ -183,7 +183,7 @@ The planned implementation will use:
- **Y.js** or **OT.js** - Collaborative editing
- **Fabric.js** - Canvas manipulation
- **PostgreSQL** - Data persistence
- **Cache** - Real-time state
- **Sharp** - Image processing
## Workaround Example
@ -1,7 +1,7 @@
# Glossary
## A
**Answer Mode** - Configuration that determines how the bot responds to user queries (direct LLM, with tools, documents only, etc.)
**Argon2** - Password hashing algorithm used for secure credential storage (winner of Password Hashing Competition)
@ -47,7 +47,8 @@
## M
**MCP** - Model Context Protocol, a standard for tool definitions
**Drive** - S3-compatible object storage component used for file management
## P
**Parameter** - Input definition for tools that specifies type, format, and description
@ -30,9 +30,9 @@ BotServer is implemented as a single Rust crate (version 6.0.8) with modular com
### Infrastructure Modules
- **`bootstrap`** - Automated system initialization and component installation
- **`package_manager`** - Manages 20+ components (PostgreSQL, cache, drive, Qdrant, etc.)
- **`web_server`** - Axum-based HTTP API and WebSocket server
- **`drive`** - S3-compatible object storage and vector database integration
- **`config`** - Application configuration from `.env` and database
### Feature Modules
@ -73,7 +73,7 @@ templates/
- **`.gbkb`** - Document collections for semantic search
- **`.gbot`** - Bot configuration in `config.csv` format
- **`.gbtheme`** - Optional UI customization (CSS/HTML)
- **`.gbdrive`** - Drive (S3-compatible) storage integration
## Key Features
@ -116,8 +116,8 @@ Custom keywords include:
BotServer automatically installs and configures:
1. **PostgreSQL** - User accounts, sessions, bot configuration
2. **Cache (Valkey)** - Session cache and temporary data
3. **Drive** - S3-compatible object storage
4. **Qdrant** - Vector database for semantic search
5. **Local LLM** - Optional local model server
6. **Email Server** - Optional SMTP/IMAP
@ -162,8 +162,8 @@ Flexible AI provider support:
### Storage Architecture
- **PostgreSQL**: Structured data (users, bots, sessions, messages)
- **Cache**: Session cache and rate limiting
- **Drive (S3)**: Documents, templates, and assets
- **Qdrant**: Vector embeddings for semantic search
- **File System**: Optional local caching
@ -184,8 +184,8 @@ Flexible AI provider support:
- **Web Framework**: Axum + Tower
- **Async Runtime**: Tokio
- **Database**: Diesel ORM with PostgreSQL
- **Cache**: Valkey client (Redis-compatible, tokio-comp)
- **Storage**: AWS SDK S3 (drive compatible)
- **Vector DB**: Qdrant client (optional feature)
- **Scripting**: Rhai engine for BASIC interpreter
- **Security**: Argon2, AES-GCM, HMAC-SHA256
@ -197,8 +197,7 @@ Flexible AI provider support:
BotServer supports multiple deployment modes:
- **Local**: Install components directly on the host system
- **Container**: Use LXC containers for isolation
The `package_manager` handles component lifecycle in all modes.
@ -8,7 +8,9 @@ When initial attempts fail, sequentially try these LLMs:
1. **Claude (Web)**: Copy only the problem statement and create unit tests. Create/extend UI.
### Development Workflow:
- **One requirement at a time** with sequential commits.
- Start editing docs before any code: explain user behaviour in the docs first with the LLM, before writing any Rust.
- Spend time on design and architecture before coding. Sketch the package structure and skeletons with ideas concretized from the documentation, but focus on docs first, because the LLM can then help with design, architecture, and code more effectively.
- **On unresolved error**: Stop, use add-req.sh, and consult Claude for guidance; also use DeepThink in DeepSeek with Web search turned on.
- **Change progression**: Start with DeepSeek, conclude with gpt-oss-120b
- If a big requirement fails, point to a @code file that has a similar pattern, or to a sample from the official docs.
@ -19,4 +21,4 @@ When initial attempts fail, sequentially try these LLMs:
- Keep only deployed and tested source in the codebase; no lab code in the main project. At a minimum, use optional features to introduce new behaviour gradually in PRODUCTION.
- Transform good articles into prompts for the coder.
- Switch to libraries that have LLM affinity (LLM knows the library, was well trained).
- Ensure 'continue' on LLMs; they can EOF and say they are done, but still have more to output.