docs: comprehensive keyword documentation update

Completed stub files:
- keyword-soap.md - SOAP web service integration
- keyword-merge-pdf.md - PDF merging (MERGE PDF with spaces)
- keyword-kb-statistics.md - KB statistics overview
- keyword-kb-collection-stats.md - Per-collection stats
- keyword-kb-documents-count.md - Document counting
- keyword-kb-documents-added-since.md - Recent document tracking
- keyword-kb-list-collections.md - Collection listing
- keyword-kb-storage-size.md - Storage monitoring

Updated documentation:
- keyword-generate-pdf.md - Updated to GENERATE PDF (spaces)
- keyword-delete.md - Rewritten for unified DELETE
- keyword-delete-http.md - Redirects to unified DELETE
- keyword-delete-file.md - Redirects to unified DELETE
- All keyword docs updated to use spaces not underscores

PROMPT.md updates:
- Added keyword naming rules (NO underscores)
- Added keyword mapping table
- Added quick reference for all keywords
- Added error handling examples

TASKS.md created:
- Comprehensive discrepancy report
- Model name update tracking (17 files)
- Config parameter documentation
- Architecture notes

Key clarifications:
- GENERATE FROM TEMPLATE = FILL keyword
- GENERATE WITH PROMPT = LLM keyword
- ON ERROR RESUME NEXT now implemented
- DELETE is unified (HTTP/DB/File auto-detect)
Rodrigo Rodriguez (Pragmatismo) 2025-12-05 09:55:38 -03:00
parent bf150f3492
commit 477d1cfbc2
41 changed files with 10090 additions and 288 deletions

PROMPT.md

@@ -5,6 +5,55 @@
---
## CRITICAL: Keyword Naming Rules
**Keywords NEVER use underscores. Always use spaces.**
### ✅ Correct Syntax
```basic
SEND MAIL to, subject, body, attachments
GENERATE PDF template, data, output
MERGE PDF files, output
DELETE "url"
DELETE "table", "filter"
ON ERROR RESUME NEXT
CLEAR ERROR
SET BOT MEMORY key, value
GET BOT MEMORY key
KB STATISTICS
KB DOCUMENTS COUNT
```
### ❌ WRONG - Never Use Underscores
```basic
SEND_MAIL ' WRONG!
GENERATE_PDF ' WRONG!
MERGE_PDF ' WRONG!
DELETE_HTTP ' WRONG!
SET_HEADER ' WRONG!
```
### Keyword Mappings (Documentation to Implementation)
| Write This | NOT This |
|------------|----------|
| `SEND MAIL` | `SEND_MAIL` |
| `GENERATE PDF` | `GENERATE_PDF` |
| `MERGE PDF` | `MERGE_PDF` |
| `DELETE` | `DELETE_HTTP` |
| `SET HEADER` | `SET_HEADER` |
| `CLEAR HEADERS` | `CLEAR_HEADERS` |
| `GROUP BY` | `GROUP_BY` |
| `FOR EACH` | `FOR_EACH` |
| `EXIT FOR` | `EXIT_FOR` |
| `ON ERROR RESUME NEXT` | (no underscore version) |
### Special Keywords
- `GENERATE FROM TEMPLATE` = Use `FILL` keyword
- `GENERATE WITH PROMPT` = Use `LLM` keyword
- `DELETE` is unified - auto-detects HTTP URLs vs database tables vs files
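A short sketch of the first two aliases in use (the file name, data variable, and prompt text are illustrative placeholders, not fixed names):

```basic
' GENERATE FROM TEMPLATE maps to the FILL keyword
FILL data, "invoice-template.html"

' GENERATE WITH PROMPT maps to the LLM keyword
summary = LLM "Summarize this quarter's sales figures"
```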
---
## Official Icons - MANDATORY
**NEVER generate icons with LLM. ALWAYS use official SVG icons from `src/assets/icons/`.**
@@ -161,6 +210,25 @@ botbook/
TALK "Hello, world!"
name = HEAR
TALK "Welcome, " + name
' Error handling (VB-style)
ON ERROR RESUME NEXT
result = SOME_OPERATION()
IF ERROR THEN
    TALK "Error: " + ERROR MESSAGE
END IF
ON ERROR GOTO 0
' Email - note: spaces not underscores
SEND MAIL "user@example.com", "Subject", "Body", []
' PDF generation - note: spaces not underscores
GENERATE PDF "template.html", data, "output.pdf"
' Unified DELETE - auto-detects context
DELETE "https://api.example.com/resource/123" ' HTTP DELETE
DELETE "customers", "status = 'inactive'" ' Database DELETE
DELETE "temp/file.txt" ' File DELETE
```
<!-- For configuration -->
@@ -239,6 +307,19 @@ When documenting features, verify against actual source:
| Configuration | `botserver/src/core/config/` |
| Bootstrap | `botserver/src/core/bootstrap/` |
| Templates | `botserver/templates/` |
| Error Handling | `botserver/src/basic/keywords/errors/` |
### Key Implementation Files
| Keyword Category | File |
|-----------------|------|
| `SEND MAIL`, `SEND TEMPLATE` | `send_mail.rs` |
| `GENERATE PDF`, `MERGE PDF` | `file_operations.rs` |
| `DELETE`, `INSERT`, `UPDATE` | `data_operations.rs` |
| `POST`, `PUT`, `GRAPHQL`, `SOAP` | `http_operations.rs` |
| `ON ERROR RESUME NEXT` | `errors/on_error.rs` |
| `KB STATISTICS`, `KB DOCUMENTS COUNT` | `kb_statistics.rs` |
| `LLM` | `llm_keyword.rs` |
| `FILL` (template filling) | `data_operations.rs` |
---
@@ -277,6 +358,9 @@ Before committing documentation:
- [ ] SUMMARY.md is updated
- [ ] mdbook build succeeds without errors
- [ ] Content matches actual source code
- [ ] **NO underscores in keyword names** (use spaces)
- [ ] Model names are generic or current (no gpt-4o, claude-3, etc.)
- [ ] Error handling uses `ON ERROR RESUME NEXT` pattern correctly
---
@@ -287,4 +371,90 @@ Before committing documentation:
- **Clarity**: Accessible to BASIC enthusiasts
- **Version**: Always reference 6.1.0
- **Examples**: Only working, tested code
- **Structure**: Follow mdBook conventions
- **Structure**: Follow mdBook conventions
- **Keywords**: NEVER use underscores - always spaces
- **Models**: Use generic references or current model names
- **Errors**: Document `ON ERROR RESUME NEXT` for error handling
---
## Quick Reference: Implemented Keywords
### Error Handling
```basic
ON ERROR RESUME NEXT ' Enable error trapping
ON ERROR GOTO 0 ' Disable error trapping
CLEAR ERROR ' Clear error state
IF ERROR THEN ' Check if error occurred
ERROR MESSAGE ' Get last error message
ERR ' Get error number
THROW "message" ' Raise an error
```
### HTTP Operations
```basic
POST "url", data
PUT "url", data
PATCH "url", data
DELETE "url" ' Unified - works for HTTP, DB, files
SET HEADER "name", "value"
CLEAR HEADERS
GRAPHQL "url", query, variables
SOAP "wsdl", "operation", params
```
### File & PDF Operations
```basic
READ "path"
WRITE "path", content
COPY "source", "dest"
MOVE "source", "dest"
LIST "path"
COMPRESS files, "archive.zip"
EXTRACT "archive.zip", "dest"
UPLOAD file
DOWNLOAD "url"
GENERATE PDF template, data, "output.pdf"
MERGE PDF files, "output.pdf"
```
### Data Operations
```basic
FIND "table", "filter"
SAVE "table", data
INSERT "table", data
UPDATE "table", data, "filter"
DELETE "table", "filter"
MERGE "table", data, "key"
FILL data, template ' Template filling (was GENERATE FROM TEMPLATE)
MAP data, "mapping"
FILTER data, "condition"
AGGREGATE data, "operation"
```
### Knowledge Base
```basic
USE KB
USE KB "collection"
CLEAR KB
KB STATISTICS
KB COLLECTION STATS "name"
KB DOCUMENTS COUNT
KB DOCUMENTS ADDED SINCE days
KB LIST COLLECTIONS
KB STORAGE SIZE
```
### LLM Operations
```basic
result = LLM "prompt" ' Generate with LLM (was GENERATE WITH PROMPT)
USE MODEL "model-name"
```
### Communication
```basic
TALK "message"
HEAR variable
SEND MAIL to, subject, body, attachments
SEND TEMPLATE recipients, template, variables
```

TASKS.md

@@ -0,0 +1,228 @@
# Documentation Tasks & Discrepancies Report
**Generated:** 2025-01
**Status:** Active development
> **Summary**: This report tracks discrepancies between documentation and implementation in the General Bots codebase. Key fixes applied: keywords now use spaces (not underscores), ON ERROR RESUME NEXT implemented, DELETE unified.
---
## ✅ COMPLETED
### Stub Files Now Have Content
- [x] `keyword-soap.md` - SOAP web service integration
- [x] `keyword-merge-pdf.md` - PDF merging (now `MERGE PDF` with space)
- [x] `keyword-kb-statistics.md` - Overall KB statistics
- [x] `keyword-kb-collection-stats.md` - Per-collection statistics
- [x] `keyword-kb-documents-count.md` - Document count
- [x] `keyword-kb-documents-added-since.md` - Recent document tracking
- [x] `keyword-kb-list-collections.md` - Collection listing
- [x] `keyword-kb-storage-size.md` - Storage monitoring
### Code Changes Applied
- [x] `SEND MAIL` - Changed from `SEND_MAIL` to space-separated
- [x] `GENERATE PDF` - Changed from `GENERATE_PDF` to space-separated
- [x] `MERGE PDF` - Changed from `MERGE_PDF` to space-separated
- [x] `ON ERROR RESUME NEXT` - Implemented in `errors/on_error.rs`
- [x] `DELETE` - Unified keyword (auto-detects HTTP/DB/File)
- [x] `PROMPT.md` - Updated with keyword naming rules
---
## 🔴 CRITICAL: Model Name Updates Needed
Old model names found in documentation that should be updated:
| File | Current | Should Be |
|------|---------|-----------|
| `appendix-external-services/README.md` | `gpt-4o` | Generic or current |
| `appendix-external-services/catalog.md` | `claude-opus-4.5` | Current Anthropic models |
| `appendix-external-services/hosting-dns.md` | `GPT-4, Claude 3` | Generic reference |
| `appendix-external-services/llm-providers.md` | `claude-sonnet-4.5`, `llama-4-scout` | Current models |
| `chapter-02/gbot.md` | `GPT-4 or Claude 3` | Generic reference |
| `chapter-02/template-llm-server.md` | `gpt-4` | Generic or current |
| `chapter-02/template-llm-tools.md` | `gpt-4` | Generic or current |
| `chapter-02/templates.md` | `gpt-4` | Generic or current |
| `chapter-04-gbui/how-to/create-first-bot.md` | `gpt-4o` | Generic or current |
| `chapter-04-gbui/how-to/monitor-sessions.md` | `gpt-4o active` | Generic reference |
| `chapter-04-gbui/suite-manual.md` | `GPT-4o`, `Claude 3.5` | Current versions |
| `chapter-06-gbdialog/keyword-model-route.md` | `gpt-3.5-turbo`, `gpt-4o` | Generic or current |
| `chapter-06-gbdialog/keyword-use-model.md` | `gpt-4`, `codellama-7b` | Generic or current |
### Recommendation
Replace with:
- Generic: `your-model-name`, `{model}`, `local-model.gguf`
- Current local: `DeepSeek-R1-Distill-Qwen-1.5B`, `Qwen2.5-7B`
- Current cloud: Provider-agnostic examples
---
## ✅ RESOLVED: Keyword Syntax Clarifications
### Keywords Now Use Spaces (NOT Underscores)
| ✅ Correct | ❌ Wrong |
|-----------|----------|
| `SEND MAIL` | `SEND_MAIL` |
| `GENERATE PDF` | `GENERATE_PDF` |
| `MERGE PDF` | `MERGE_PDF` |
| `SET HEADER` | `SET_HEADER` |
| `CLEAR HEADERS` | `CLEAR_HEADERS` |
| `ON ERROR RESUME NEXT` | N/A |
| `KB STATISTICS` | `KB_STATISTICS` |
### Keyword Mappings (Documentation Aliases)
| Documented Pattern | Actual Keyword |
|-------------------|----------------|
| `GENERATE FROM TEMPLATE file WITH data` | `FILL data, template` |
| `GENERATE WITH PROMPT prompt` | `LLM prompt` |
| `DELETE HTTP url` | `DELETE url` (unified) |
| `DELETE FILE path` | `DELETE path` (unified) |
| `SEND EMAIL TO addr WITH body` | `SEND MAIL addr, subject, body, []` |
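The table above reads as before/after pairs. For example (the address, subject, and template name are placeholders):

```basic
' Old documented form:  SEND EMAIL TO addr WITH body
' Actual keyword:
SEND MAIL "user@example.com", "Monthly report", "See attached.", []

' Old documented form:  GENERATE FROM TEMPLATE file WITH data
' Actual keyword:
FILL data, "report-template.html"
```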
### Unified DELETE Keyword
The `DELETE` keyword now auto-detects context:
```basic
' HTTP DELETE - detects URL
DELETE "https://api.example.com/resource/123"
' Database DELETE - table + filter
DELETE "customers", "status = 'inactive'"
' File DELETE - path without URL prefix
DELETE "temp/report.pdf"
```
---
## ✅ IMPLEMENTED: Error Handling
### ON ERROR RESUME NEXT (VB-Style)
Now fully implemented in `src/basic/keywords/errors/on_error.rs`:
```basic
ON ERROR RESUME NEXT ' Enable error trapping
result = RISKY_OPERATION()
IF ERROR THEN
    TALK "Error: " + ERROR MESSAGE
END IF
ON ERROR GOTO 0 ' Disable error trapping
CLEAR ERROR ' Clear error state
```
### Available Error Keywords
| Keyword | Description |
|---------|-------------|
| `ON ERROR RESUME NEXT` | Enable error trapping |
| `ON ERROR GOTO 0` | Disable error trapping |
| `ERROR` | Returns true if error occurred |
| `ERROR MESSAGE` | Get last error message |
| `ERR` | Get error number |
| `CLEAR ERROR` | Clear error state |
| `THROW "msg"` | Raise an error |
| `ASSERT cond, "msg"` | Assert condition |
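`THROW` and `ASSERT`, the two keywords above that raise errors, combine with the trapping pattern like this (the validation rules and variable names are illustrative):

```basic
ON ERROR RESUME NEXT

' ASSERT raises an error when the condition is false
ASSERT amount > 0, "Amount must be positive"
IF ERROR THEN
    TALK "Validation failed: " + ERROR MESSAGE
    CLEAR ERROR
END IF

' THROW raises an error unconditionally
IF amount > 10000 THEN
    THROW "Amount exceeds transfer limit"
END IF

ON ERROR GOTO 0
```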
---
## 🟡 CONFIG PARAMETERS
### Verified in Source Code ✅
```
llm-key, llm-url, llm-model
llm-cache, llm-cache-ttl, llm-cache-semantic, llm-cache-threshold
llm-server, llm-server-* (all server params)
embedding-url, embedding-model
email-from, email-server, email-port, email-user, email-pass
custom-server, custom-port, custom-database, custom-username, custom-password
image-generator-*, video-generator-*
botmodels-enabled, botmodels-host, botmodels-port
episodic-memory-threshold, episodic-memory-history
oauth-*-enabled, oauth-*-client-id, oauth-*-client-secret
website-max-depth, website-max-pages
```
### In Code But Missing from Default config.csv
| Parameter | Used In |
|-----------|---------|
| `llm-temperature` | console/status_panel.rs |
| `llm-server-reasoning-format` | llm/local.rs |
| `email-read-pixel` | email/mod.rs |
| `server-url` | email/mod.rs |
| `teams-app-id`, `teams-*` | core/bot/channels/teams.rs |
| `whatsapp-api-key`, `whatsapp-*` | core/bot/channels/whatsapp.rs |
| `twilio-*`, `aws-*`, `vonage-*` | basic/keywords/sms.rs |
| `qdrant-url` | vector database connection |
### In config.csv But Status Unknown
| Parameter | Status |
|-----------|--------|
| `website-expires` | ❓ Not found in search |
| `default-generator` | ❓ Not found in search |
---
## 📋 REMAINING TASKS
### Priority 1 - Documentation Updates
- [ ] Update all model names to generic/current versions (17 files)
- [ ] Update keyword examples to use spaces not underscores
- [ ] Fix `SEND EMAIL` examples to use `SEND MAIL` syntax
- [ ] Document unified `DELETE` keyword behavior
- [ ] Add `ON ERROR RESUME NEXT` to error handling docs
### Priority 2 - New Documentation Needed
- [ ] Full config.csv parameter reference
- [ ] SMS provider configuration (Twilio, Vonage, etc.)
- [ ] Teams/WhatsApp channel setup
- [ ] SOAP authentication configuration
### Priority 3 - Consider Implementing
- [ ] `POST url, data WITH headers` - HTTP with inline headers
- [ ] `GET url WITH params` - Query parameters support
- [ ] `WITH ... END WITH` blocks for object initialization
---
## 📊 Quick Reference
### Keyword Implementation Files
| Category | Source File | Keywords |
|----------|-------------|----------|
| Email | `send_mail.rs` | `SEND MAIL`, `SEND TEMPLATE` |
| PDF | `file_operations.rs` | `GENERATE PDF`, `MERGE PDF` |
| Data | `data_operations.rs` | `DELETE`, `INSERT`, `UPDATE`, `FILL` |
| HTTP | `http_operations.rs` | `POST`, `PUT`, `GRAPHQL`, `SOAP` |
| Errors | `errors/on_error.rs` | `ON ERROR RESUME NEXT` |
| KB Stats | `kb_statistics.rs` | `KB STATISTICS`, etc. |
| LLM | `llm_keyword.rs` | `LLM` |
### Summary Statistics
| Category | Count |
|----------|-------|
| Files with old model names | 17 |
| Stub files completed | 8 |
| Keywords fixed (underscore→space) | 6 |
| Error handling keywords added | 7 |
| Config params documented | 30+ |
---
## Architecture Notes
- **Language**: Rust with Rhai scripting engine
- **Keywords**: Registered via `register_custom_syntax()` in Rhai
- **Preprocessing**: `basic/compiler/mod.rs` normalizes some syntax
- **Config**: CSV files parsed by `ConfigManager`
- **Vector DB**: Qdrant
- **Relational DB**: PostgreSQL
- **File Storage**: MinIO/SeaweedFS

SUMMARY.md

@@ -197,6 +197,7 @@
- [PLAY](./chapter-06-gbdialog/keyword-play.md)
- [QR CODE](./chapter-06-gbdialog/keyword-qrcode.md)
- [SEND SMS](./chapter-06-gbdialog/keyword-sms.md)
- [START MEET / JOIN MEET](./chapter-06-gbdialog/keyword-start-meet.md)
- [File Operations](./chapter-06-gbdialog/keywords-file.md)
- [READ](./chapter-06-gbdialog/keyword-read.md)
- [WRITE](./chapter-06-gbdialog/keyword-write.md)

appendix-external-services/catalog.md

@@ -10,10 +10,10 @@ This catalog provides detailed information about every external service that Gen
|----------|-------|
| **Service URL** | `https://api.openai.com/v1` |
| **Config Key** | `llm-provider=openai` |
| **API Key Config** | `llm-api-key` |
| **API Key Config** | `llm-api-key` (stored in Vault) |
| **Documentation** | [platform.openai.com/docs](https://platform.openai.com/docs) |
| **BASIC Keywords** | `LLM` |
| **Supported Models** | `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `gpt-3.5-turbo` |
| **Supported Models** | `gpt-5`, `gpt-oss-120b`, `gpt-oss-20b` |
### Groq
@@ -21,10 +21,10 @@ This catalog provides detailed information about every external service that Gen
|----------|-------|
| **Service URL** | `https://api.groq.com/openai/v1` |
| **Config Key** | `llm-provider=groq` |
| **API Key Config** | `llm-api-key` |
| **API Key Config** | `llm-api-key` (stored in Vault) |
| **Documentation** | [console.groq.com/docs](https://console.groq.com/docs) |
| **BASIC Keywords** | `LLM` |
| **Supported Models** | `llama-3.1-70b-versatile`, `llama-3.1-8b-instant`, `mixtral-8x7b-32768` |
| **Supported Models** | `llama-4-scout`, `llama-4-maverick`, `qwen3`, `mixtral-8x22b` |
### Anthropic
@@ -32,10 +32,10 @@ This catalog provides detailed information about every external service that Gen
|----------|-------|
| **Service URL** | `https://api.anthropic.com/v1` |
| **Config Key** | `llm-provider=anthropic` |
| **API Key Config** | `llm-api-key` |
| **API Key Config** | `llm-api-key` (stored in Vault) |
| **Documentation** | [docs.anthropic.com](https://docs.anthropic.com) |
| **BASIC Keywords** | `LLM` |
| **Supported Models** | `claude-3-5-sonnet`, `claude-3-opus`, `claude-3-haiku` |
| **Supported Models** | `claude-opus-4.5`, `claude-sonnet-4.5` |
### Azure OpenAI
@@ -43,10 +43,54 @@ This catalog provides detailed information about every external service that Gen
|----------|-------|
| **Service URL** | `https://{resource}.openai.azure.com/` |
| **Config Key** | `llm-provider=azure` |
| **API Key Config** | `llm-api-key`, `azure-openai-endpoint` |
| **API Key Config** | `llm-api-key` (stored in Vault) |
| **Documentation** | [learn.microsoft.com/azure/ai-services/openai](https://learn.microsoft.com/azure/ai-services/openai) |
| **BASIC Keywords** | `LLM` |
### Google (Gemini)
| Property | Value |
|----------|-------|
| **Service URL** | `https://generativelanguage.googleapis.com/v1` |
| **Config Key** | `llm-provider=google` |
| **API Key Config** | `llm-api-key` (stored in Vault) |
| **Documentation** | [ai.google.dev/docs](https://ai.google.dev/docs) |
| **BASIC Keywords** | `LLM` |
| **Supported Models** | `gemini-3-pro`, `gemini-2.5-pro`, `gemini-2.5-flash` |
### xAI (Grok)
| Property | Value |
|----------|-------|
| **Service URL** | `https://api.x.ai/v1` |
| **Config Key** | `llm-provider=xai` |
| **API Key Config** | `llm-api-key` (stored in Vault) |
| **Documentation** | [docs.x.ai](https://docs.x.ai) |
| **BASIC Keywords** | `LLM` |
| **Supported Models** | `grok-4` |
### DeepSeek
| Property | Value |
|----------|-------|
| **Service URL** | `https://api.deepseek.com/v1` |
| **Config Key** | `llm-provider=deepseek` |
| **API Key Config** | `llm-api-key` (stored in Vault) |
| **Documentation** | [platform.deepseek.com/docs](https://platform.deepseek.com/docs) |
| **BASIC Keywords** | `LLM` |
| **Supported Models** | `deepseek-v3.1`, `deepseek-r1` |
### Mistral AI
| Property | Value |
|----------|-------|
| **Service URL** | `https://api.mistral.ai/v1` |
| **Config Key** | `llm-provider=mistral` |
| **API Key Config** | `llm-api-key` (stored in Vault) |
| **Documentation** | [docs.mistral.ai](https://docs.mistral.ai) |
| **BASIC Keywords** | `LLM` |
| **Supported Models** | `mixtral-8x22b` |
---
## Weather Services

appendix-external-services/llm-providers.md

@@ -6,37 +6,35 @@ General Bots supports multiple Large Language Model (LLM) providers, both cloud-
LLMs are the intelligence behind General Bots' conversational capabilities. You can configure:
- **Cloud Providers** - External APIs (OpenAI, Anthropic, Groq, etc.)
- **Local Models** - Self-hosted models via llama.cpp
- **Hybrid** - Use local for simple tasks, cloud for complex reasoning
- **Cloud Providers** — External APIs (OpenAI, Anthropic, Google, etc.)
- **Local Models** — Self-hosted models via llama.cpp
- **Hybrid** — Use local for simple tasks, cloud for complex reasoning
## Cloud Providers
### OpenAI (GPT Series)
The most widely known LLM provider, offering GPT-4 and GPT-4o models.
The most widely known LLM provider, offering the GPT-5 flagship model.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| GPT-4o | 128K | General purpose, vision | Fast |
| GPT-4o-mini | 128K | Cost-effective tasks | Very Fast |
| GPT-4 Turbo | 128K | Complex reasoning | Medium |
| o1-preview | 128K | Advanced reasoning, math | Slow |
| o1-mini | 128K | Code, logic tasks | Medium |
| GPT-5 | 1M | All-in-one advanced reasoning | Medium |
| GPT-oss 120B | 128K | Open-weight, agent workflows | Medium |
| GPT-oss 20B | 128K | Cost-effective open-weight | Fast |
**Configuration:**
**Configuration (config.csv):**
```csv
name,value
llm-provider,openai
llm-api-key,sk-xxxxxxxxxxxxxxxxxxxxxxxx
llm-model,gpt-4o
llm-model,gpt-5
```
**Strengths:**
- Most advanced all-in-one model
- Excellent general knowledge
- Strong code generation
- Good instruction following
- Vision capabilities (GPT-4o)
**Considerations:**
- API costs can add up
@@ -45,79 +43,47 @@ llm-model,gpt-4o
### Anthropic (Claude Series)
Known for safety, helpfulness, and large context windows.
Known for safety, helpfulness, and extended thinking capabilities.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Claude 3.5 Sonnet | 200K | Best balance of capability/speed | Fast |
| Claude 3.5 Haiku | 200K | Quick, everyday tasks | Very Fast |
| Claude 3 Opus | 200K | Most capable, complex tasks | Slow |
| Claude Opus 4.5 | 200K | Most capable, complex reasoning | Slow |
| Claude Sonnet 4.5 | 200K | Best balance of capability/speed | Fast |
**Configuration:**
**Configuration (config.csv):**
```csv
name,value
llm-provider,anthropic
llm-api-key,sk-ant-xxxxxxxxxxxxxxxx
llm-model,claude-3-5-sonnet-20241022
llm-model,claude-sonnet-4.5
```
**Strengths:**
- Largest context window (200K tokens)
- Extended thinking mode for multi-step tasks
- Excellent at following complex instructions
- Strong coding abilities
- Better at refusing harmful requests
**Considerations:**
- Premium pricing
- No vision in all models
- Newer provider, smaller ecosystem
### Groq
Ultra-fast inference using custom LPU hardware. Offers open-source models at high speed.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Llama 3.3 70B | 128K | Complex reasoning | Very Fast |
| Llama 3.1 8B | 128K | Quick responses | Extremely Fast |
| Mixtral 8x7B | 32K | Balanced performance | Very Fast |
| Gemma 2 9B | 8K | Lightweight tasks | Extremely Fast |
**Configuration:**
```csv
llm-provider,groq
llm-api-key,gsk_xxxxxxxxxxxxxxxx
llm-model,llama-3.3-70b-versatile
```
**Strengths:**
- Fastest inference speeds (500+ tokens/sec)
- Competitive pricing
- Open-source models
- Great for real-time applications
**Considerations:**
- Limited model selection
- Rate limits on free tier
- Models may be less capable than GPT-4/Claude
### Google (Gemini Series)
Google's multimodal AI models with strong reasoning capabilities.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Gemini 1.5 Pro | 2M | Extremely long documents | Medium |
| Gemini 1.5 Flash | 1M | Fast multimodal | Fast |
| Gemini 2.0 Flash | 1M | Latest capabilities | Fast |
| Gemini 3 Pro | 2M | Complex reasoning, benchmarks | Medium |
| Gemini 2.5 Pro | 2M | Extremely long documents | Medium |
| Gemini 2.5 Flash | 1M | Fast multimodal | Fast |
**Configuration:**
**Configuration (config.csv):**
```csv
name,value
llm-provider,google
llm-api-key,AIzaxxxxxxxxxxxxxxxx
llm-model,gemini-1.5-pro
llm-model,gemini-3-pro
```
**Strengths:**
@@ -127,69 +93,116 @@ llm-model,gemini-1.5-pro
- Good coding abilities
**Considerations:**
- Newer ecosystem
- Some features region-limited
- API changes more frequently
### xAI (Grok Series)
Integration with real-time data from X platform.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Grok 4 | 128K | Real-time research, analysis | Fast |
**Configuration (config.csv):**
```csv
name,value
llm-provider,xai
llm-model,grok-4
```
**Strengths:**
- Real-time data access from X
- Strong research and analysis
- Good for trend analysis
**Considerations:**
- Newer provider
- X platform integration focus
### Groq
Ultra-fast inference using custom LPU hardware. Offers open-source models at high speed.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Llama 4 Scout | 10M | Long context, multimodal | Very Fast |
| Llama 4 Maverick | 1M | Complex tasks | Very Fast |
| Qwen3 | 128K | Efficient MoE architecture | Extremely Fast |
**Configuration (config.csv):**
```csv
name,value
llm-provider,groq
llm-model,llama-4-scout
```
**Strengths:**
- Fastest inference speeds (500+ tokens/sec)
- Competitive pricing
- Open-source models
- Great for real-time applications
**Considerations:**
- Rate limits on free tier
- Models may be less capable than GPT-5/Claude
### Mistral AI
European AI company offering efficient, open-weight models.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Mistral Large | 128K | Complex tasks | Medium |
| Mistral Medium | 32K | Balanced performance | Fast |
| Mistral Small | 32K | Cost-effective | Very Fast |
| Codestral | 32K | Code generation | Fast |
| Mixtral-8x22B | 64K | Multi-language, coding | Fast |
**Configuration:**
**Configuration (config.csv):**
```csv
name,value
llm-provider,mistral
llm-api-key,xxxxxxxxxxxxxxxx
llm-model,mistral-large-latest
llm-model,mixtral-8x22b
```
**Strengths:**
- European data sovereignty (GDPR)
- Excellent code generation (Codestral)
- Excellent code generation
- Open-weight models available
- Competitive pricing
- Proficient in multiple languages
**Considerations:**
- Smaller context than competitors
- Less brand recognition
- Fewer fine-tuning options
### DeepSeek
Chinese AI company known for efficient, capable models.
Known for efficient, capable models with exceptional reasoning.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| DeepSeek-V3 | 128K | General purpose | Fast |
| DeepSeek-R1 | 128K | Reasoning, math | Medium |
| DeepSeek-Coder | 128K | Programming | Fast |
| DeepSeek-V3.1 | 128K | General purpose, optimized cost | Fast |
| DeepSeek-R1 | 128K | Reasoning, math, science | Medium |
**Configuration:**
**Configuration (config.csv):**
```csv
name,value
llm-provider,deepseek
llm-api-key,sk-xxxxxxxxxxxxxxxx
llm-model,deepseek-chat
llm-model,deepseek-r1
llm-server-url,https://api.deepseek.com
```
**Strengths:**
- Extremely cost-effective
- Strong reasoning (R1 model)
- Excellent code generation
- Open-weight versions available
- Rivals proprietary leaders in performance
- Open-weight versions available (MIT/Apache 2.0)
**Considerations:**
- Data processed in China
- Newer provider
- May have content restrictions
## Local Models
@@ -200,8 +213,9 @@ Run models on your own hardware for privacy, cost control, and offline operation
General Bots uses **llama.cpp** server for local inference:
```csv
name,value
llm-provider,local
llm-server-url,https://localhost:8081
llm-server-url,http://localhost:8081
llm-model,DeepSeek-R1-Distill-Qwen-1.5B
```
@@ -211,42 +225,43 @@ llm-model,DeepSeek-R1-Distill-Qwen-1.5B
| Model | Size | VRAM | Quality |
|-------|------|------|---------|
| GPT-OSS 120B Q4 | 70GB | 48GB+ | Excellent |
| Llama 3.1 70B Q4 | 40GB | 48GB+ | Excellent |
| Llama 4 Scout 17B Q8 | 18GB | 24GB | Excellent |
| Qwen3 72B Q4 | 42GB | 48GB+ | Excellent |
| DeepSeek-R1 32B Q4 | 20GB | 24GB | Very Good |
| Qwen 2.5 72B Q4 | 42GB | 48GB+ | Excellent |
#### For Mid-Range GPU (12-16GB VRAM)
| Model | Size | VRAM | Quality |
|-------|------|------|---------|
| GPT-OSS 20B F16 | 40GB | 16GB | Very Good |
| Llama 3.1 8B Q8 | 9GB | 12GB | Good |
| Qwen3 14B Q8 | 15GB | 16GB | Very Good |
| GPT-oss 20B Q4 | 12GB | 16GB | Very Good |
| DeepSeek-R1-Distill 14B Q4 | 8GB | 12GB | Good |
| Mistral Nemo 12B Q4 | 7GB | 10GB | Good |
| Gemma 3 27B Q4 | 16GB | 16GB | Good |
#### For Small GPU or CPU (8GB VRAM or less)
| Model | Size | VRAM | Quality |
|-------|------|------|---------|
| DeepSeek-R1-Distill 1.5B Q4 | 1GB | 4GB | Basic |
| Phi-3 Mini 3.8B Q4 | 2.5GB | 6GB | Acceptable |
| Gemma 2 2B Q8 | 3GB | 6GB | Acceptable |
| Qwen 2.5 3B Q4 | 2GB | 4GB | Basic |
| Gemma 2 9B Q4 | 5GB | 8GB | Acceptable |
| Gemma 3 27B Q2 | 10GB | 8GB | Acceptable |
### Model Download URLs
Add models to `installer.rs` data_download_list:
```rust
// GPT-OSS 20B - Recommended for small GPU
"https://huggingface.co/unsloth/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-F16.gguf"
// Qwen3 14B - Recommended for mid-range GPU
"https://huggingface.co/Qwen/Qwen3-14B-GGUF/resolve/main/qwen3-14b-q4_k_m.gguf"
// DeepSeek R1 Distill - For CPU or minimal GPU
"https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf"
// Llama 3.1 8B - Good balance
"https://huggingface.co/bartowski/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf"
// GPT-oss 20B - Good balance for agents
"https://huggingface.co/openai/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-q4_k_m.gguf"
// Gemma 3 27B - For quality local inference
"https://huggingface.co/google/gemma-3-27b-it-GGUF/resolve/main/gemma-3-27b-it-q4_k_m.gguf"
```
### Embedding Models
@@ -254,8 +269,9 @@ Add models to `installer.rs` data_download_list:
For vector search, you need an embedding model:
```csv
name,value
embedding-provider,local
embedding-server-url,https://localhost:8082
embedding-server-url,http://localhost:8082
embedding-model,bge-small-en-v1.5
```
@@ -273,19 +289,13 @@ Recommended embedding models:
Use different models for different tasks:
```csv
# Primary model for complex conversations
name,value
llm-provider,anthropic
llm-model,claude-3-5-sonnet-20241022
# Fast model for simple tasks
llm-model,claude-sonnet-4.5
llm-fast-provider,groq
llm-fast-model,llama-3.1-8b-instant
# Local fallback for offline operation
llm-fast-model,llama-4-scout
llm-fallback-provider,local
llm-fallback-model,DeepSeek-R1-Distill-Qwen-1.5B
# Embeddings always local
embedding-provider,local
embedding-model,bge-small-en-v1.5
```
@@ -296,13 +306,15 @@ embedding-model,bge-small-en-v1.5
| Use Case | Recommended | Why |
|----------|-------------|-----|
| Customer support | Claude 3.5 Sonnet | Best at following guidelines |
| Code generation | DeepSeek-Coder, GPT-4o | Specialized for code |
| Document analysis | Gemini 1.5 Pro | 2M context window |
| Real-time chat | Groq Llama 3.1 8B | Fastest responses |
| Customer support | Claude Sonnet 4.5 | Best at following guidelines |
| Code generation | DeepSeek-R1, GPT-5 | Specialized for code |
| Document analysis | Gemini 3 Pro | 2M context window |
| Real-time chat | Groq Llama 4 Scout | Fastest responses |
| Privacy-sensitive | Local DeepSeek-R1 | No external data transfer |
| Cost-sensitive | DeepSeek-V3, Local | Lowest cost per token |
| Complex reasoning | Claude 3 Opus, o1 | Best reasoning ability |
| Cost-sensitive | DeepSeek-V3.1, Local | Lowest cost per token |
| Complex reasoning | Claude Opus 4.5, Gemini 3 Pro | Best reasoning ability |
| Real-time research | Grok 4 | Live data access |
| Long context (10M) | Llama 4 Scout | Largest context window |
### By Budget
@@ -310,51 +322,53 @@ embedding-model,bge-small-en-v1.5
|--------|-------------------|
| Free | Local models only |
| Low ($10-50/mo) | Groq + Local fallback |
| Medium ($50-200/mo) | GPT-4o-mini + Claude Haiku |
| High ($200+/mo) | GPT-4o + Claude Sonnet |
| Medium ($50-200/mo) | DeepSeek-V3.1 + Claude Sonnet 4.5 |
| High ($200+/mo) | GPT-5 + Claude Opus 4.5 |
| Enterprise | Private deployment + premium APIs |
## Configuration Reference
### Environment Variables
```bash
# Primary LLM
LLM_PROVIDER=openai
LLM_API_KEY=sk-xxx
LLM_MODEL=gpt-4o
LLM_SERVER_URL=https://api.openai.com
# Local LLM Server
LLM_LOCAL_URL=https://localhost:8081
LLM_LOCAL_MODEL=DeepSeek-R1-Distill-Qwen-1.5B
# Embedding
EMBEDDING_PROVIDER=local
EMBEDDING_URL=https://localhost:8082
EMBEDDING_MODEL=bge-small-en-v1.5
```
### config.csv Parameters
All LLM configuration belongs in `config.csv`, not environment variables:
| Parameter | Description | Example |
|-----------|-------------|---------|
| `llm-provider` | Provider name | `openai`, `anthropic`, `local` |
| `llm-api-key` | API key for cloud providers | `sk-xxx` |
| `llm-model` | Model identifier | `gpt-4o` |
| `llm-server-url` | API endpoint | `https://api.openai.com` |
| `llm-model` | Model identifier | `gpt-5` |
| `llm-server-url` | API endpoint (local only) | `http://localhost:8081` |
| `llm-server-ctx-size` | Context window size | `128000` |
| `llm-temperature` | Response randomness (0-2) | `0.7` |
| `llm-max-tokens` | Maximum response length | `4096` |
| `llm-cache-enabled` | Enable semantic caching | `true` |
| `llm-cache-ttl` | Cache time-to-live (seconds) | `3600` |
### API Keys
API keys are stored in **Vault**, not in config files or environment variables:
```bash
# Store API key in Vault
vault kv put gbo/llm/openai api_key="sk-..."
vault kv put gbo/llm/anthropic api_key="sk-ant-..."
vault kv put gbo/llm/google api_key="AIza..."
```
Reference in config.csv:
```csv
name,value
llm-provider,openai
llm-model,gpt-5
llm-api-key,vault:gbo/llm/openai/api_key
```
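The `vault:` reference above follows a mount/path/key shape. A minimal parser sketch of that convention (illustrative only; the actual resolution logic is internal to General Bots):

```python
def parse_vault_ref(value: str):
    """Split a config value like 'vault:gbo/llm/openai/api_key' into
    (mount, secret_path, key). Returns None for plain literal values."""
    if not value.startswith("vault:"):
        return None  # literal value, used as-is
    parts = value[len("vault:"):].split("/")
    if len(parts) < 3:
        raise ValueError("vault reference needs mount/path/key")
    mount, *path, key = parts
    return mount, "/".join(path), key

ref = parse_vault_ref("vault:gbo/llm/openai/api_key")
# ref -> ("gbo", "llm/openai", "api_key"): mount, secret path, field
```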
## Security Considerations
### Cloud Providers
- API keys stored in Vault, never in config files
- Consider data residency requirements (EU: Mistral)
- Review provider data retention policies
- Use separate keys for production/development
Enable semantic caching to reduce API calls:
```csv
name,value
llm-cache-enabled,true
llm-cache-ttl,3600
llm-cache-similarity-threshold,0.92
```
For bulk operations, use batch APIs when available:
```csv
name,value
llm-batch-enabled,true
llm-batch-size,10
```
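The `llm-batch-size` setting amounts to grouping pending requests into fixed-size chunks. A sketch of that grouping (names are illustrative, not the engine's API):

```python
def chunk_requests(requests, batch_size=10):
    """Group pending LLM requests into batches of at most batch_size."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    return [requests[i:i + batch_size]
            for i in range(0, len(requests), batch_size)]

# 23 requests with llm-batch-size=10 yield batches of 10, 10, and 3
batches = chunk_requests([f"prompt-{n}" for n in range(23)], batch_size=10)
```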
Optimize context window usage with episodic memory:
```csv
name,value
episodic-memory-enabled,true
episodic-memory-threshold,4
episodic-memory-history,2
```
See [Episodic Memory](../chapter-03/episodic-memory.md) for details.
### Common Issues
**API Key Invalid**
- Verify key is stored correctly in Vault
- Check if key has required permissions
- Ensure billing is active on provider account
**Model Not Found**
- Check model name spelling
Enable LLM logging for debugging:
```csv
name,value
llm-log-requests,true
llm-log-responses,false
llm-log-timing,true
```
## 2025 Model Comparison
| Model | Creator | Type | Strengths |
|-------|---------|------|-----------|
| GPT-5 | OpenAI | Proprietary | Most advanced all-in-one |
| Claude Opus/Sonnet 4.5 | Anthropic | Proprietary | Extended thinking, complex reasoning |
| Gemini 3 Pro | Google | Proprietary | Benchmarks, reasoning |
| Grok 4 | xAI | Proprietary | Real-time X data |
| DeepSeek-V3.1/R1 | DeepSeek | Open (MIT/Apache) | Cost-optimized, reasoning |
| Llama 4 | Meta | Open-weight | 10M context, multimodal |
| Qwen3 | Alibaba | Open (Apache) | Efficient MoE |
| Mixtral-8x22B | Mistral | Open (Apache) | Multi-language, coding |
| GPT-oss | OpenAI | Open (Apache) | Agent workflows |
| Gemma 2/3 | Google | Open-weight | Lightweight, efficient |
## Next Steps
- [config.csv Reference](../chapter-08-config/config-csv.md) — Complete configuration guide
- [Secrets Management](../chapter-08-config/secrets-management.md) — Vault integration
- [Semantic Caching](../chapter-03/caching.md) — Cache configuration
- [NVIDIA GPU Setup](../appendix-external-services/nvidia.md) — GPU configuration for local models
## See Also
- [A2A Protocol](./keyword-a2a.md)
- [DELEGATE TO BOT](./keyword-delegate-to-bot.md) - Includes A2A Protocol details
# AGGREGATE
The `AGGREGATE` keyword performs calculations on collections of data, computing sums, counts, averages, and other statistical operations.
---
## Syntax
```basic
result = AGGREGATE collection SUM field
result = AGGREGATE collection COUNT
result = AGGREGATE collection AVERAGE field
result = AGGREGATE collection MIN field
result = AGGREGATE collection MAX field
result = AGGREGATE "table_name" SUM field WHERE condition
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `collection` | Array/String | Data array or table name |
| `SUM` | Operation | Calculate total of numeric field |
| `COUNT` | Operation | Count number of items |
| `AVERAGE` | Operation | Calculate arithmetic mean |
| `MIN` | Operation | Find minimum value |
| `MAX` | Operation | Find maximum value |
| `field` | String | Field name to aggregate |
| `WHERE` | Clause | Optional filter condition |
---
## Description
`AGGREGATE` performs mathematical and statistical calculations on data collections. It can work with in-memory arrays or query database tables directly. Use it to compute totals, counts, averages, and find extreme values.
Use cases include:
- Calculating order totals
- Counting records
- Computing averages for reports
- Finding highest/lowest values
- Summarizing data for dashboards
---
## Examples
### Sum Values
```basic
' Calculate total sales
orders = FIND "orders" WHERE status = "completed"
total_sales = AGGREGATE orders SUM amount
TALK "Total sales: $" + FORMAT(total_sales, "#,##0.00")
```
### Count Records
```basic
' Count active users
active_count = AGGREGATE "users" COUNT WHERE status = "active"
TALK "We have " + active_count + " active users"
```
### Calculate Average
```basic
' Calculate average order value
avg_order = AGGREGATE "orders" AVERAGE amount WHERE created_at > "2025-01-01"
TALK "Average order value: $" + FORMAT(avg_order, "#,##0.00")
```
### Find Minimum and Maximum
```basic
' Find price range
products = FIND "products" WHERE category = "electronics"
min_price = AGGREGATE products MIN price
max_price = AGGREGATE products MAX price
TALK "Prices range from $" + min_price + " to $" + max_price
```
### Multiple Aggregations
```basic
' Calculate multiple statistics
orders = FIND "orders" WHERE customer_id = user.id
total_spent = AGGREGATE orders SUM amount
order_count = AGGREGATE orders COUNT
avg_order = AGGREGATE orders AVERAGE amount
largest_order = AGGREGATE orders MAX amount
TALK "Your order summary:"
TALK "- Total orders: " + order_count
TALK "- Total spent: $" + FORMAT(total_spent, "#,##0.00")
TALK "- Average order: $" + FORMAT(avg_order, "#,##0.00")
TALK "- Largest order: $" + FORMAT(largest_order, "#,##0.00")
```
---
## Common Use Cases
### Sales Dashboard
```basic
' Calculate sales metrics
today = FORMAT(NOW(), "YYYY-MM-DD")
this_month = FORMAT(NOW(), "YYYY-MM") + "-01"
today_sales = AGGREGATE "orders" SUM amount WHERE DATE(created_at) = today
month_sales = AGGREGATE "orders" SUM amount WHERE created_at >= this_month
today_count = AGGREGATE "orders" COUNT WHERE DATE(created_at) = today
month_count = AGGREGATE "orders" COUNT WHERE created_at >= this_month
TALK "📊 Sales Dashboard"
TALK "Today: $" + FORMAT(today_sales, "#,##0.00") + " (" + today_count + " orders)"
TALK "This month: $" + FORMAT(month_sales, "#,##0.00") + " (" + month_count + " orders)"
```
### Inventory Summary
```basic
' Calculate inventory metrics
total_items = AGGREGATE "products" COUNT
total_value = AGGREGATE "products" SUM (price * stock)
low_stock = AGGREGATE "products" COUNT WHERE stock < 10
out_of_stock = AGGREGATE "products" COUNT WHERE stock = 0
TALK "Inventory Summary:"
TALK "- Total products: " + total_items
TALK "- Total value: $" + FORMAT(total_value, "#,##0.00")
TALK "- Low stock items: " + low_stock
TALK "- Out of stock: " + out_of_stock
```
### Customer Metrics
```basic
' Calculate customer statistics
total_customers = AGGREGATE "customers" COUNT
new_this_month = AGGREGATE "customers" COUNT WHERE created_at >= this_month
avg_lifetime_value = AGGREGATE "customers" AVERAGE lifetime_value
TALK "Customer Metrics:"
TALK "- Total customers: " + total_customers
TALK "- New this month: " + new_this_month
TALK "- Avg lifetime value: $" + FORMAT(avg_lifetime_value, "#,##0.00")
```
### Rating Analysis
```basic
' Analyze product ratings
reviews = FIND "reviews" WHERE product_id = product.id
avg_rating = AGGREGATE reviews AVERAGE rating
review_count = AGGREGATE reviews COUNT
five_stars = AGGREGATE reviews COUNT WHERE rating = 5
TALK "Product rating: " + FORMAT(avg_rating, "#.#") + " stars"
TALK "Based on " + review_count + " reviews"
TALK five_stars + " customers gave 5 stars"
```
---
## Aggregate from Array
```basic
' Aggregate in-memory data
prices = [29.99, 49.99, 19.99, 99.99, 39.99]
total = AGGREGATE prices SUM
count = AGGREGATE prices COUNT
average = AGGREGATE prices AVERAGE
minimum = AGGREGATE prices MIN
maximum = AGGREGATE prices MAX
TALK "Sum: $" + FORMAT(total, "#,##0.00")
TALK "Count: " + count
TALK "Average: $" + FORMAT(average, "#,##0.00")
TALK "Range: $" + minimum + " - $" + maximum
```
---
## Aggregate with Expressions
```basic
' Calculate computed values
total_revenue = AGGREGATE "order_items" SUM (quantity * unit_price)
total_discount = AGGREGATE "order_items" SUM (quantity * unit_price * discount_percent / 100)
net_revenue = total_revenue - total_discount
TALK "Gross revenue: $" + FORMAT(total_revenue, "#,##0.00")
TALK "Discounts: $" + FORMAT(total_discount, "#,##0.00")
TALK "Net revenue: $" + FORMAT(net_revenue, "#,##0.00")
```
---
## Conditional Aggregation
```basic
' Aggregate with different conditions
pending_total = AGGREGATE "orders" SUM amount WHERE status = "pending"
shipped_total = AGGREGATE "orders" SUM amount WHERE status = "shipped"
delivered_total = AGGREGATE "orders" SUM amount WHERE status = "delivered"
TALK "Order totals by status:"
TALK "- Pending: $" + FORMAT(pending_total, "#,##0.00")
TALK "- Shipped: $" + FORMAT(shipped_total, "#,##0.00")
TALK "- Delivered: $" + FORMAT(delivered_total, "#,##0.00")
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
total = AGGREGATE "orders" SUM amount WHERE customer_id = user.id
IF ERROR THEN
PRINT "Aggregation failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't calculate your totals."
ELSE IF total = 0 THEN
TALK "You haven't placed any orders yet."
ELSE
TALK "Your total purchases: $" + FORMAT(total, "#,##0.00")
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `INVALID_FIELD` | Field doesn't exist | Check field name spelling |
| `TYPE_ERROR` | Non-numeric field for SUM/AVG | Use numeric fields only |
| `EMPTY_COLLECTION` | No data to aggregate | Handle zero/null results |
| `TABLE_NOT_FOUND` | Table doesn't exist | Verify table name |
---
## Null Handling
```basic
' AGGREGATE ignores NULL values by default
avg_rating = AGGREGATE "products" AVERAGE rating
' NULL ratings are not included in the average
' Count non-null values
rated_count = AGGREGATE "products" COUNT WHERE rating IS NOT NULL
total_count = AGGREGATE "products" COUNT
TALK rated_count + " of " + total_count + " products have ratings"
```
---
## Performance Tips
1. **Use WHERE clauses** — Filter before aggregating for better performance
2. **Index aggregate fields** — Ensure database indexes on frequently aggregated columns
3. **Limit data scope** — Aggregate only the date range or subset needed
4. **Cache results** — Store aggregated values for expensive calculations
```basic
' Efficient: Filter first
total = AGGREGATE "orders" SUM amount WHERE date > "2025-01-01"
' Less efficient: Aggregate all, then filter
' all_orders = FIND "orders"
' recent = FILTER all_orders WHERE date > "2025-01-01"
' total = AGGREGATE recent SUM amount
```
---
## Configuration
Database connection is configured in `config.csv`:
```csv
name,value
database-provider,postgres
database-pool-size,10
database-timeout,30
```
Database credentials are stored in Vault, not in config files.
---
## Implementation Notes
- Implemented in Rust under `src/database/aggregate.rs`
- Uses SQL aggregate functions when querying tables
- Handles NULL values according to SQL standards
- Supports expressions in aggregate calculations
- Returns 0 for COUNT on empty sets, NULL for SUM/AVG/MIN/MAX
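The NULL and empty-set rules above can be modeled in a few lines. A sketch of the documented semantics (not the actual Rust implementation; rows are stood in for by dictionaries):

```python
def aggregate(rows, op, field=None):
    """Mimic AGGREGATE's documented semantics: NULLs (None) are skipped,
    COUNT over an empty set is 0, other ops return None (NULL)."""
    if op == "COUNT":
        # COUNT without a field counts rows; with a field, non-NULL values
        if field is None:
            return len(rows)
        return sum(1 for r in rows if r.get(field) is not None)
    values = [r[field] for r in rows if r.get(field) is not None]
    if not values:
        return None  # SUM/AVERAGE/MIN/MAX over an empty set -> NULL
    if op == "SUM":
        return sum(values)
    if op == "AVERAGE":
        return sum(values) / len(values)
    if op == "MIN":
        return min(values)
    if op == "MAX":
        return max(values)
    raise ValueError(f"unknown operation: {op}")

products = [{"rating": 5}, {"rating": 4}, {"rating": None}]
# The NULL rating is skipped: AVERAGE is (5 + 4) / 2 = 4.5
```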
---
## Related Keywords
- [FIND](keyword-find.md) — Query data before aggregating
- [GROUP BY](keyword-group-by.md) — Group data before aggregating
- [FILTER](keyword-filter.md) — Filter in-memory collections
- [MAP](keyword-map.md) — Transform data before aggregating
---
## Summary
`AGGREGATE` calculates sums, counts, averages, and min/max values from data collections. Use it for dashboards, reports, and any situation where you need to summarize data. It works with both database tables (using SQL) and in-memory arrays. Always handle empty results and use WHERE clauses to improve performance on large datasets.

View file

# COMPRESS
The `COMPRESS` keyword creates ZIP archives from files and directories in the bot's storage, enabling bots to bundle multiple files for download or transfer.
---
## Syntax
```basic
COMPRESS files TO "archive.zip"
result = COMPRESS files TO "archive.zip"
COMPRESS "folder/" TO "archive.zip"
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `files` | Array/String | List of file paths or a single folder path |
| `TO` | Clause | Destination path for the archive |
---
## Description
`COMPRESS` creates a ZIP archive containing the specified files or directory contents. The archive is stored in the bot's drive storage and can be downloaded, emailed, or transferred.
Use cases include:
- Bundling multiple documents for download
- Creating backups
- Packaging exports for users
- Archiving old files
- Preparing files for email attachments
---
## Examples
### Compress Multiple Files
```basic
' Create archive from list of files
files = ["report.pdf", "data.csv", "images/logo.png"]
COMPRESS files TO "package.zip"
TALK "Files compressed into package.zip"
```
### Compress a Folder
```basic
' Compress entire folder contents
COMPRESS "documents/project/" TO "project-backup.zip"
TALK "Project folder compressed"
```
### Compress with Result
```basic
' Get compression result details
result = COMPRESS files TO "exports/archive.zip"
TALK "Archive created: " + result.filename
TALK "Size: " + FORMAT(result.size / 1024, "#,##0") + " KB"
TALK "Files included: " + result.file_count
```
### Compress for Download
```basic
' Create archive and send to user
files = LIST "reports/" FILTER "*.pdf"
file_paths = []
FOR EACH file IN files
file_paths = APPEND(file_paths, "reports/" + file.name)
NEXT
result = COMPRESS file_paths TO "all-reports.zip"
DOWNLOAD "all-reports.zip" AS "Your Reports.zip"
TALK "Here are all your reports in a single download!"
```
### Compress with Timestamp
```basic
' Create dated archive
timestamp = FORMAT(NOW(), "YYYYMMDD-HHmmss")
archive_name = "backup-" + timestamp + ".zip"
COMPRESS "data/" TO "backups/" + archive_name
TALK "Backup created: " + archive_name
```
---
## Common Use Cases
### Create Document Package
```basic
' Bundle documents for a customer
customer_files = [
"contracts/" + customer_id + "/agreement.pdf",
"contracts/" + customer_id + "/terms.pdf",
"invoices/" + customer_id + "/latest.pdf"
]
result = COMPRESS customer_files TO "temp/customer-package.zip"
DOWNLOAD "temp/customer-package.zip" AS "Your Documents.zip"
TALK "Here's your complete document package!"
```
### Archive Old Data
```basic
' Archive and remove old files
old_files = LIST "logs/" FILTER "*" WHERE modified < DATEADD(NOW(), -90, "day")
file_paths = []
FOR EACH file IN old_files
file_paths = APPEND(file_paths, "logs/" + file.name)
NEXT
IF LEN(file_paths) > 0 THEN
archive_name = "logs-archive-" + FORMAT(NOW(), "YYYYMM") + ".zip"
COMPRESS file_paths TO "archives/" + archive_name
' Remove original files
FOR EACH path IN file_paths
DELETE path
NEXT
TALK "Archived " + LEN(file_paths) + " old log files"
END IF
```
### Export User Data
```basic
' GDPR data export
user_folder = "users/" + user.id + "/"
COMPRESS user_folder TO "exports/user-data-" + user.id + ".zip"
link = DOWNLOAD "exports/user-data-" + user.id + ".zip" AS LINK
TALK "Your data export is ready: " + link
TALK "This link expires in 24 hours."
```
### Email Attachment Bundle
```basic
' Create attachment for email
attachments = [
"reports/summary.pdf",
"reports/details.xlsx",
"reports/charts.png"
]
COMPRESS attachments TO "temp/report-bundle.zip"
SEND MAIL recipient_email, "Monthly Report Bundle",
"Please find attached the complete monthly report package.",
"temp/report-bundle.zip"
TALK "Report bundle sent to " + recipient_email
```
---
## Return Value
Returns an object with archive details:
| Property | Description |
|----------|-------------|
| `result.path` | Full path to the archive |
| `result.filename` | Archive filename |
| `result.size` | Archive size in bytes |
| `result.file_count` | Number of files in archive |
| `result.created_at` | Creation timestamp |
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = COMPRESS files TO "archive.zip"
IF ERROR THEN
PRINT "Compression failed: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "not found") > 0 THEN
TALK "One or more files could not be found."
ELSE IF INSTR(ERROR_MESSAGE, "storage") > 0 THEN
TALK "Not enough storage space for the archive."
ELSE
TALK "Sorry, I couldn't create the archive. Please try again."
END IF
ELSE
TALK "Archive created successfully!"
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `FILE_NOT_FOUND` | Source file doesn't exist | Verify file paths |
| `STORAGE_FULL` | Insufficient space | Clean up storage |
| `EMPTY_ARCHIVE` | No files to compress | Check file list |
| `PERMISSION_DENIED` | Access blocked | Check permissions |
---
## Compression Options
The default compression uses standard ZIP format with deflate compression. This balances file size reduction with compatibility.
---
## Size Limits
| Limit | Default | Notes |
|-------|---------|-------|
| Max archive size | 500 MB | Configurable |
| Max files per archive | 10,000 | Practical limit |
| Max single file | 100 MB | Per file in archive |
---
## Configuration
No specific configuration required. Uses bot's standard drive settings from `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
```
---
## Implementation Notes
- Implemented in Rust under `src/file/archive.rs`
- Uses standard ZIP format for compatibility
- Preserves directory structure in archive
- Supports recursive folder compression
- Progress tracking for large archives
- Atomic operation (creates temp file, then moves)
---
## Related Keywords
- [EXTRACT](keyword-extract.md) — Extract archive contents
- [LIST](keyword-list.md) — List files to compress
- [DOWNLOAD](keyword-download.md) — Send archive to user
- [DELETE](keyword-delete.md) — Remove files after archiving
- [COPY](keyword-copy.md) — Copy files before archiving
---
## Summary
`COMPRESS` creates ZIP archives from files and folders. Use it to bundle documents for download, create backups, package exports, and prepare email attachments. The archive preserves directory structure and can be immediately downloaded or processed. Combine with `LIST` to dynamically select files and `DOWNLOAD` to deliver archives to users.
# COPY
The `COPY` keyword duplicates files within the bot's drive storage, creating copies in the same or different directories.
---
## Syntax
```basic
COPY "source" TO "destination"
result = COPY "source" TO "destination"
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `source` | String | Path to the file to copy |
| `destination` | String | Path for the new copy |
---
## Description
`COPY` creates a duplicate of a file in the bot's storage. The original file remains unchanged. If the destination directory doesn't exist, it's created automatically.
Use cases include:
- Creating backups before modifications
- Duplicating templates for new users
- Archiving files while keeping originals accessible
- Organizing files into multiple locations
---
## Examples
### Basic File Copy
```basic
' Copy a file to a new location
COPY "templates/report.docx" TO "user-reports/report-copy.docx"
TALK "File copied successfully!"
```
### Copy with Same Name
```basic
' Copy to different directory, keeping the same filename
COPY "documents/contract.pdf" TO "archive/contract.pdf"
```
### Copy Before Editing
```basic
' Create backup before modifying
COPY "config/settings.json" TO "config/settings.json.backup"
' Now safe to modify original
content = READ "config/settings.json"
modified = REPLACE(content, "old_value", "new_value")
WRITE modified TO "config/settings.json"
TALK "Settings updated. Backup saved."
```
### Copy Template for User
```basic
' Create user-specific copy of template
user_folder = "users/" + user.id
COPY "templates/welcome-kit.pdf" TO user_folder + "/welcome-kit.pdf"
TALK "Your welcome kit is ready!"
```
### Copy with Timestamp
```basic
' Create timestamped copy
timestamp = FORMAT(NOW(), "YYYYMMDD-HHmmss")
COPY "reports/daily.csv" TO "archive/daily-" + timestamp + ".csv"
TALK "Report archived with timestamp"
```
### Batch Copy
```basic
' Copy multiple files
files_to_copy = ["doc1.pdf", "doc2.pdf", "doc3.pdf"]
FOR EACH file IN files_to_copy
COPY "source/" + file TO "destination/" + file
NEXT
TALK "Copied " + LEN(files_to_copy) + " files"
```
---
## Return Value
Returns an object with copy details:
| Property | Description |
|----------|-------------|
| `result.source` | Original file path |
| `result.destination` | New file path |
| `result.size` | File size in bytes |
| `result.copied_at` | Timestamp of copy operation |
---
## Error Handling
```basic
ON ERROR RESUME NEXT
COPY "documents/important.pdf" TO "backup/important.pdf"
IF ERROR THEN
PRINT "Copy failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't copy that file."
ELSE
TALK "File copied successfully!"
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `FILE_NOT_FOUND` | Source doesn't exist | Verify source path |
| `PERMISSION_DENIED` | Access blocked | Check permissions |
| `DESTINATION_EXISTS` | File already exists | Use different name or delete first |
| `STORAGE_FULL` | No space available | Clean up storage |
---
## Behavior Notes
- **Overwrites by default**: If destination exists, it's replaced
- **Creates directories**: Parent folders created automatically
- **Preserves metadata**: File type and creation date preserved
- **Atomic operation**: Copy completes fully or not at all
---
## Configuration
No specific configuration required. Uses bot's standard drive settings from `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
```
---
## Related Keywords
- [MOVE](keyword-move.md) — Move or rename files
- [DELETE](keyword-delete.md) — Remove files
- [READ](keyword-read.md) — Read file contents
- [WRITE](keyword-write.md) — Write file contents
- [LIST](keyword-list.md) — List directory contents
---
## Summary
`COPY` creates duplicates of files in storage. Use it for backups, templates, archiving, and organizing files. The original file is preserved, and destination directories are created automatically.
# DELETE FILE
> **Deprecated:** The `DELETE FILE` keyword has been unified into the [`DELETE`](keyword-delete.md) keyword. Use `DELETE` instead.
---
## Unified DELETE Keyword
The `DELETE` keyword now automatically detects file paths and handles file deletion:
```basic
' Delete a file - just use DELETE
DELETE "path/to/file.txt"

' DELETE auto-detects:
' - URLs → HTTP DELETE
' - table, filter → Database DELETE
' - path → File DELETE
```
---
## Migration
### Old Syntax (Deprecated)
```basic
' Old way - no longer needed
DELETE FILE "temp/report.pdf"
```
### New Syntax (Recommended)
```basic
' New way - unified DELETE
DELETE "temp/report.pdf"
```
---
## Examples
```basic
' Delete a temporary file
DELETE "temp/processed.csv"

' Delete uploaded file
DELETE "uploads/" + filename

' Delete with error handling
ON ERROR RESUME NEXT
DELETE "temp/large-file.pdf"
IF ERROR THEN
    TALK "Could not delete file: " + ERROR MESSAGE
END IF
ON ERROR GOTO 0
```
---
## See Also
- [DELETE](keyword-delete.md) — Unified delete keyword (HTTP, Database, File)
- [READ](keyword-read.md) — Read file contents
- [WRITE](keyword-write.md) — Write file contents
- [COPY](keyword-copy.md) — Copy files
- [MOVE](keyword-move.md) — Move/rename files
# DELETE HTTP
> **Deprecated:** The `DELETE HTTP` syntax is kept for backwards compatibility. Use the unified `DELETE` keyword instead, which auto-detects HTTP URLs.
---
## Redirect to DELETE
The `DELETE` keyword now automatically handles HTTP DELETE requests when given a URL:
```basic
' Preferred - unified DELETE
DELETE "https://api.example.com/resource/123"
' Also works (backwards compatibility)
DELETE HTTP "https://api.example.com/resource/123"
```
---
## See Also
- **[DELETE](keyword-delete.md)** — Unified delete keyword (recommended)
The unified `DELETE` keyword automatically detects:
- HTTP URLs → HTTP DELETE request
- Table + filter → Database delete
- File path → File delete
---
## Quick Example
```basic
' Set authentication header
SET HEADER "Authorization", "Bearer " + api_token
' Delete resource via API
DELETE "https://api.example.com/users/456"
' Clear headers
CLEAR HEADERS
TALK "User deleted"
```
---
## Migration
Replace this:
```basic
DELETE HTTP "https://api.example.com/resource/123"
```
With this:
```basic
DELETE "https://api.example.com/resource/123"
```
Both work, but the unified `DELETE` is cleaner and more intuitive.
# DELETE
The `DELETE` keyword is a unified command that automatically detects context and handles HTTP requests, database operations, and file deletions through a single interface.
---
## Syntax
```basic
' HTTP DELETE - auto-detected by URL
DELETE "https://api.example.com/resource/123"
' Database DELETE - table with filter
DELETE "table_name", "filter_condition"
' File DELETE - path without URL
DELETE "path/to/file.txt"
```
---
## Parameters
| Context | Parameter 1 | Parameter 2 | Description |
|---------|-------------|-------------|-------------|
| HTTP | URL (string) | - | DELETE request to the URL |
| Database | Table name | Filter condition | Delete matching records |
| File | File path | - | Delete the file |
---
## Description
`DELETE` is a smart, unified keyword that detects what you want to delete based on the arguments:
1. **HTTP DELETE**: If the first argument starts with `http://` or `https://`, sends an HTTP DELETE request
2. **Database DELETE**: If two arguments are provided (table, filter), performs SQL DELETE
3. **File DELETE**: Otherwise, treats the argument as a file path
This eliminates the need for separate `DELETE HTTP`, `DELETE FILE` commands - just use `DELETE`.
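The detection order described above is easy to model. A sketch of the dispatch rule (illustrative, not the engine's actual code):

```python
def detect_delete_context(arg1, arg2=None):
    """Mirror DELETE's documented dispatch: URL -> HTTP,
    two arguments -> database (table, filter), anything else -> file path."""
    if isinstance(arg1, str) and arg1.startswith(("http://", "https://")):
        return "http"
    if arg2 is not None:
        return "database"
    return "file"

# DELETE "https://api.example.com/users/123"  -> http
# DELETE "customers", "id = 123"              -> database
# DELETE "temp/report.pdf"                    -> file
```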
---
## Examples
### HTTP DELETE
```basic
' Delete a resource via REST API
DELETE "https://api.example.com/users/123"
TALK "User deleted from API"
```
```basic
' Delete with authentication (set headers first)
SET HEADER "Authorization", "Bearer " + api_token
DELETE "https://api.example.com/posts/" + post_id
CLEAR HEADERS
TALK "Post deleted"
```
### Database DELETE
```basic
' Delete by ID
DELETE "customers", "id = 123"
TALK "Customer deleted"
```
```basic
' Delete with variable
DELETE "orders", "id = " + order_id + " AND user_id = " + user.id
TALK "Order cancelled"
```
```basic
' Delete with multiple conditions
DELETE "sessions", "user_id = " + user.id + " AND status = 'expired'"
TALK "Expired sessions cleared"
```
```basic
' Delete old records
DELETE "logs", "created_at < '2024-01-01'"
TALK "Old logs purged"
```
### File DELETE
```basic
' Delete a file
DELETE "temp/report.pdf"
TALK "File deleted"
```
```basic
' Delete uploaded file
DELETE "uploads/" + filename
TALK "Upload removed"
```
---
## Common Use Cases
### REST API Resource Deletion
```basic
' Delete item from external service
TALK "Removing item from inventory system..."
SET HEADER "Authorization", "Bearer " + inventory_api_key
SET HEADER "Content-Type", "application/json"
result = DELETE "https://inventory.example.com/api/items/" + item_id
CLEAR HEADERS
IF result THEN
TALK "Item removed from inventory"
ELSE
TALK "Failed to remove item"
END IF
```
### User Account Deletion
```basic
' Complete account deletion flow
TALK "Are you sure you want to delete your account? Type 'DELETE' to confirm."
HEAR confirmation
IF confirmation = "DELETE" THEN
' Delete related records first
DELETE "orders", "customer_id = " + user.id
DELETE "addresses", "customer_id = " + user.id
DELETE "preferences", "user_id = " + user.id
' Delete the user
DELETE "users", "id = " + user.id
TALK "Your account has been deleted."
ELSE
TALK "Account deletion cancelled."
END IF
```
### Cleanup Temporary Files
```basic
' Clean up temp files after processing
temp_files = ["temp/doc1.pdf", "temp/doc2.pdf", "temp/merged.pdf"]
FOR EACH f IN temp_files
DELETE f
NEXT
TALK "Temporary files cleaned up"
```
### Cancel Order via API
```basic
' Cancel order in external system
order_api_url = "https://orders.example.com/api/orders/" + order_id
SET HEADER "Authorization", "Bearer " + api_key
DELETE order_api_url
CLEAR HEADERS
' Also remove from local database
DELETE "local_orders", "external_id = '" + order_id + "'"
TALK "Order cancelled"
```
### Remove Expired Data
```basic
' Scheduled cleanup task
' Delete expired tokens
DELETE "tokens", "expires_at < NOW()"
' Delete old notifications
DELETE "notifications", "read = true AND created_at < DATEADD(NOW(), -90, 'day')"
' Delete abandoned carts
DELETE "carts", "updated_at < DATEADD(NOW(), -7, 'day') AND checkout_completed = false"
TALK "Cleanup complete"
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
DELETE "orders", "id = " + order_id
IF ERROR THEN
error_msg = ERROR MESSAGE
IF INSTR(error_msg, "foreign key") > 0 THEN
TALK "Cannot delete: this record is referenced by other data."
ELSE IF INSTR(error_msg, "not found") > 0 THEN
TALK "Record not found."
ELSE IF INSTR(error_msg, "permission") > 0 THEN
TALK "You don't have permission to delete this."
ELSE
TALK "Delete failed: " + error_msg
END IF
ELSE
TALK "Deleted successfully!"
END IF
ON ERROR GOTO 0
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `FOREIGN_KEY_VIOLATION` | Database record referenced elsewhere | Delete child records first |
| `FILE_NOT_FOUND` | File doesn't exist | Check file path |
| `HTTP 404` | API resource not found | Verify URL and resource ID |
| `HTTP 401/403` | Authentication failed | Check API credentials |
| `PERMISSION_DENIED` | Insufficient privileges | Check permissions |
---
## Context Detection
The `DELETE` keyword automatically detects context:
| Argument Pattern | Detected As |
|------------------|-------------|
| `"https://..."` or `"http://..."` | HTTP DELETE |
| Two arguments: `"table", "filter"` | Database DELETE |
| Single argument without URL prefix | File DELETE |
```basic
' HTTP - starts with http/https
DELETE "https://api.example.com/resource/1"
' Database - two arguments
DELETE "users", "id = 123"
' File - single argument, no URL prefix
DELETE "temp/file.txt"
```
---
## Safety Considerations
### Always Use Filters for Database
```basic
' DANGEROUS - would delete all records!
' DELETE "users", ""
' SAFE - specific condition
DELETE "users", "id = " + user_id
```
### Verify Before Deleting
```basic
' Check record exists and belongs to user
record = FIND "documents", "id = " + doc_id + " AND owner_id = " + user.id
IF record THEN
DELETE "documents", "id = " + doc_id
TALK "Document deleted"
ELSE
TALK "Document not found or access denied"
END IF
```
### Confirm Destructive Actions
```basic
TALK "Delete " + item_name + "? This cannot be undone. Type 'yes' to confirm."
HEAR confirmation
IF LOWER(confirmation) = "yes" THEN
DELETE "items", "id = " + item_id
TALK "Deleted"
ELSE
TALK "Cancelled"
END IF
```
### Consider Soft Delete
```basic
' Instead of permanent delete, mark as deleted
UPDATE "records", #{ "deleted": true, "deleted_at": NOW() }, "id = " + record_id
TALK "Record archived (can be restored)"
```
---
## Return Values
| Context | Returns |
|---------|---------|
| HTTP | Response body as string |
| Database | Number of deleted rows |
| File | `true` on success, error message on failure |
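The return value can be captured like any expression. A minimal sketch, assuming the documented row-count return for database deletes:
```basic
' Capture how many rows a database DELETE removed
deleted_count = DELETE "sessions", "status = 'expired'"
TALK "Removed " + deleted_count + " expired sessions"
```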
---
## Configuration
No specific configuration required. Uses:
- HTTP: Standard HTTP client
- Database: Connection from `config.csv`
- Files: Bot's `.gbdrive` storage
---
## Implementation Notes
- Implemented in `data_operations.rs`
- Auto-detects URL vs table vs file
- HTTP DELETE supports custom headers via `SET HEADER`
- Database DELETE uses parameterized queries (SQL injection safe)
- File DELETE works within bot's storage sandbox
---
## Related Keywords
- [INSERT](keyword-insert.md) — Add new records
- [UPDATE](keyword-update.md) — Modify existing records
- [FIND](keyword-find.md) — Query records
- [POST](keyword-post.md) — HTTP POST requests
- [PUT](keyword-put.md) — HTTP PUT requests
- [READ](keyword-read.md) — Read file contents
- [WRITE](keyword-write.md) — Write file contents
---
## Summary
`DELETE` is a unified keyword that intelligently handles HTTP API deletions, database record removal, and file deletion through a single interface. It auto-detects context based on arguments: URLs trigger HTTP DELETE, table+filter triggers database DELETE, and paths trigger file DELETE. Always use filters for database operations, verify ownership before deleting user data, and confirm destructive actions. For recoverable deletions, consider soft delete instead.

# DOWNLOAD
The `DOWNLOAD` keyword retrieves files from the bot's storage and sends them to users or saves them to external locations, enabling bots to share documents, export data, and deliver files through chat channels.
---
## Syntax
```basic
DOWNLOAD "filename"
DOWNLOAD "filename" TO user
DOWNLOAD "filename" AS "display_name"
url = DOWNLOAD "filename" AS LINK
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `filename` | String | Path to the file in the bot's storage |
| `TO user` | Flag | Send file to specific user (default: current user) |
| `AS "name"` | String | Custom display name for the file |
| `AS LINK` | Flag | Return a download URL instead of sending file |
---
## Description
`DOWNLOAD` retrieves a file from the bot's configured storage (drive bucket) and delivers it to the user through their chat channel. It supports:
- Sending files directly in chat (WhatsApp, Telegram, web, etc.)
- Custom display names for downloaded files
- Generating shareable download links
- Sending files to specific users
- Automatic MIME type detection
The file path is relative to the bot's storage root. Use forward slashes for subdirectories.
---
## Examples
### Basic File Download
```basic
' Send a file to the current user
DOWNLOAD "documents/user-guide.pdf"
TALK "Here's the user guide you requested!"
```
### Download with Custom Name
```basic
' Send file with a friendly display name
DOWNLOAD "reports/rpt-2025-01.pdf" AS "January 2025 Report.pdf"
```
### Generate Download Link
```basic
' Get a shareable URL instead of sending directly
link = DOWNLOAD "exports/data.xlsx" AS LINK
TALK "Download your data here: " + link
' Link expires after 24 hours by default
```
### Send to Specific User
```basic
' Send file to a different user
DOWNLOAD "contracts/agreement.pdf" TO manager_email
TALK "I've sent the contract to your manager for review."
```
### Download After Processing
```basic
' Generate a report and send it
report_content = "# Sales Report\n\n" + sales_data
WRITE report_content TO "temp/report.md"
' Convert to PDF (if configured) — GENERATE PDF takes template, data, output
GENERATE PDF "temp/report.md", #{}, "temp/report.pdf"
DOWNLOAD "temp/report.pdf" AS "Sales Report.pdf"
TALK "Here's your sales report!"
```
---
## Common Use Cases
### Send Invoice
```basic
' Lookup and send customer invoice
invoice_path = "invoices/" + customer_id + "/" + invoice_id + ".pdf"
DOWNLOAD invoice_path AS "Invoice-" + invoice_id + ".pdf"
TALK "Here's your invoice. Let me know if you have any questions!"
```
### Export Data
```basic
' Export user's data to file and send
user_data = FIND "orders" WHERE customer_id = user.id
WRITE user_data TO "exports/user-" + user.id + "-orders.csv" AS TABLE
DOWNLOAD "exports/user-" + user.id + "-orders.csv" AS "My Orders.csv"
TALK "Here's a complete export of your order history."
```
### Share Meeting Notes
```basic
' Send meeting notes from earlier session
meeting_date = FORMAT(NOW(), "YYYY-MM-DD")
notes_file = "meetings/" + meeting_date + "-notes.md"
IF FILE_EXISTS(notes_file) THEN
DOWNLOAD notes_file AS "Meeting Notes - " + meeting_date + ".md"
TALK "Here are the notes from today's meeting!"
ELSE
TALK "I don't have any meeting notes for today."
END IF
```
### Provide Template
```basic
' Send a template file for user to fill out
TALK "I'll send you the application form. Please fill it out and send it back."
DOWNLOAD "templates/application-form.docx" AS "Application Form.docx"
```
### Generate and Share Report
```basic
' Create report on demand
TALK "Generating your monthly report..."
' Build report content
report = "# Monthly Summary\n\n"
report = report + "**Period:** " + month_name + " " + year + "\n\n"
report = report + "## Key Metrics\n\n"
report = report + "- Revenue: $" + FORMAT(revenue, "#,##0.00") + "\n"
report = report + "- Orders: " + order_count + "\n"
report = report + "- New Customers: " + new_customers + "\n"
' Save and send
filename = "reports/monthly-" + FORMAT(NOW(), "YYYYMM") + ".md"
WRITE report TO filename
DOWNLOAD filename AS "Monthly Report - " + month_name + ".md"
```
### Send Multiple Files
```basic
' Send several related files
files = ["contract.pdf", "terms.pdf", "schedule.pdf"]
TALK "I'm sending you the complete documentation package:"
FOR EACH file IN files
DOWNLOAD "documents/" + file
WAIT 1 ' Brief pause between files
NEXT
TALK "All documents sent! Please review and let me know if you have questions."
```
---
## Return Values
### Direct Download (default)
Returns a confirmation object:
| Property | Description |
|----------|-------------|
| `result.sent` | Boolean indicating success |
| `result.filename` | Name of file sent |
| `result.size` | File size in bytes |
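A sketch of inspecting the confirmation object, using the property names listed above:
```basic
' Confirm delivery before following up
result = DOWNLOAD "docs/manual.pdf"
IF result.sent THEN
TALK "Sent " + result.filename + " (" + result.size + " bytes)"
END IF
```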
### Download as Link
Returns a URL string:
```basic
link = DOWNLOAD "file.pdf" AS LINK
' Returns: "https://storage.example.com/download/abc123?expires=..."
```
---
## Channel-Specific Behavior
| Channel | Behavior |
|---------|----------|
| **WhatsApp** | Sends as document attachment |
| **Telegram** | Sends as document or media based on type |
| **Web Chat** | Triggers browser download |
| **Email** | Attaches to email message |
| **SMS** | Sends download link (files not supported) |
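Because SMS cannot carry attachments, a bot can fall back to a link on that channel. A sketch (the `channel` variable is illustrative, not a built-in):
```basic
' Fall back to a download link on channels without file support
IF channel = "sms" THEN
link = DOWNLOAD "docs/guide.pdf" AS LINK
TALK "Download the guide here: " + link
ELSE
DOWNLOAD "docs/guide.pdf"
END IF
```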
---
## File Type Handling
| File Type | Display |
|-----------|---------|
| PDF | Document with preview |
| Images | Inline image display |
| Audio | Audio player |
| Video | Video player |
| Other | Generic document icon |
```basic
' Images display inline in most channels
DOWNLOAD "photos/product.jpg"
' PDFs show with document preview
DOWNLOAD "docs/manual.pdf"
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
DOWNLOAD "reports/missing-file.pdf"
IF ERROR THEN
PRINT "Download failed: " + ERROR MESSAGE
TALK "Sorry, I couldn't find that file. It may have been moved or deleted."
END IF
ON ERROR GOTO 0
```
### Check File Exists First
```basic
files = LIST "invoices/" + customer_id + "/"
found = false
FOR EACH file IN files
IF file.name = invoice_id + ".pdf" THEN
found = true
EXIT FOR
END IF
NEXT
IF found THEN
DOWNLOAD "invoices/" + customer_id + "/" + invoice_id + ".pdf"
ELSE
TALK "Invoice not found. Please check the invoice number."
END IF
```
---
## Link Options
When using `AS LINK`, you can configure link behavior:
```basic
' Default link (expires in 24 hours)
link = DOWNLOAD "file.pdf" AS LINK
' Custom expiration (in config.csv)
' download-link-expiry,3600 (1 hour)
```
---
## Size Limits
| Limit | Default | Notes |
|-------|---------|-------|
| WhatsApp | 100 MB | Documents, 16 MB for media |
| Telegram | 50 MB | Standard, 2 GB for premium |
| Web Chat | No limit | Browser handles download |
| Email | 25 MB | Typical email limit |
```basic
' For large files, use link instead
file_info = LIST "exports/large-file.zip"
IF file_info[0].size > 50000000 THEN
link = DOWNLOAD "exports/large-file.zip" AS LINK
TALK "This file is large. Download it here: " + link
ELSE
DOWNLOAD "exports/large-file.zip"
END IF
```
---
## Configuration
Configure download settings in `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
download-link-expiry,86400
download-link-base-url,https://files.mybot.com
download-max-size,104857600
```
---
## Implementation Notes
- Implemented in Rust under `src/file/mod.rs`
- Uses streaming for large file transfers
- Automatic MIME type detection
- Supports range requests for resumable downloads
- Files are served through secure signed URLs
- Access logging for audit trails
---
## Related Keywords
- [UPLOAD](keyword-upload.md) — Upload files to storage
- [READ](keyword-read.md) — Read file contents
- [WRITE](keyword-write.md) — Write content to files
- [LIST](keyword-list.md) — List files in storage
- [GENERATE PDF](keyword-generate-pdf.md) — Create PDF documents
---
## Summary
`DOWNLOAD` is essential for delivering files to users through chat. Use it to send invoices, share reports, provide templates, and export data. Combined with `AS LINK` for large files and custom display names, it provides flexible file delivery for any bot workflow.

# EXTRACT
The `EXTRACT` keyword unpacks ZIP archives to a specified destination in the bot's storage, enabling bots to process uploaded archives and access their contents.
---
## Syntax
```basic
EXTRACT "archive.zip" TO "destination/"
result = EXTRACT "archive.zip" TO "destination/"
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `archive` | String | Path to the ZIP archive to extract |
| `TO` | Clause | Destination folder for extracted contents |
---
## Description
`EXTRACT` unpacks a ZIP archive and places its contents in the specified destination folder. The folder is created automatically if it doesn't exist. Directory structure within the archive is preserved.
Use cases include:
- Processing user-uploaded archives
- Unpacking data imports
- Restoring backups
- Accessing bundled resources
- Handling bulk file uploads
---
## Examples
### Basic Extraction
```basic
' Extract archive to a folder
EXTRACT "uploads/documents.zip" TO "extracted/"
TALK "Archive extracted successfully"
```
### Extract with Result
```basic
' Get extraction details
result = EXTRACT "backup.zip" TO "restored/"
TALK "Extracted " + result.file_count + " files"
TALK "Total size: " + FORMAT(result.total_size / 1024, "#,##0") + " KB"
```
### Extract User Upload
```basic
' Handle uploaded archive from user
TALK "Please upload a ZIP file with your documents."
HEAR uploaded_file
IF uploaded_file.type = "application/zip" THEN
upload_result = UPLOAD uploaded_file TO "temp"
' Extract to user's folder
user_folder = "users/" + user.id + "/imports/" + FORMAT(NOW(), "YYYYMMDD") + "/"
result = EXTRACT upload_result.path TO user_folder
TALK "Extracted " + result.file_count + " files from your archive!"
' List extracted files
files = LIST user_folder
FOR EACH file IN files
TALK "- " + file.name
NEXT
ELSE
TALK "Please upload a ZIP file."
END IF
```
### Extract and Process
```basic
' Extract data files and process them
result = EXTRACT "imports/data-batch.zip" TO "temp/batch/"
csv_files = LIST "temp/batch/" FILTER "*.csv"
FOR EACH csv_file IN csv_files
data = READ "temp/batch/" + csv_file.name AS TABLE
' Process each row
FOR EACH row IN data
INSERT INTO "imports" WITH
source_file = csv_file.name,
data = row,
imported_at = NOW()
NEXT
TALK "Processed: " + csv_file.name
NEXT
' Clean up temp files
DELETE "temp/batch/"
TALK "Import complete: processed " + LEN(csv_files) + " files"
```
### Restore Backup
```basic
' Restore from backup archive
TALK "Enter the backup filename to restore (e.g., backup-20250115.zip)"
HEAR backup_name
backup_path = "backups/" + backup_name
files = LIST "backups/"
found = false
FOR EACH file IN files
IF file.name = backup_name THEN
found = true
EXIT FOR
END IF
NEXT
IF found THEN
result = EXTRACT backup_path TO "restored/"
TALK "Backup restored: " + result.file_count + " files"
ELSE
TALK "Backup file not found. Available backups:"
FOR EACH file IN files
TALK "- " + file.name
NEXT
END IF
```
---
## Common Use Cases
### Bulk Document Upload
```basic
' Handle bulk document upload
TALK "Upload a ZIP file containing your documents."
HEAR archive
upload = UPLOAD archive TO "temp"
result = EXTRACT upload.path TO "documents/bulk-" + FORMAT(NOW(), "YYYYMMDDHHmmss") + "/"
TALK "Successfully uploaded " + result.file_count + " documents!"
' Clean up temp file
DELETE upload.path
```
### Process Image Pack
```basic
' Extract and catalog images
result = EXTRACT "uploads/images.zip" TO "temp/images/"
images = LIST "temp/images/" FILTER "*.jpg,*.png,*.gif"
FOR EACH image IN images
' Move to permanent storage with organized path
MOVE "temp/images/" + image.name TO "media/images/" + image.name
' Record in database
INSERT INTO "media" WITH
filename = image.name,
path = "media/images/" + image.name,
size = image.size,
uploaded_at = NOW()
NEXT
TALK "Cataloged " + LEN(images) + " images"
```
### Template Installation
```basic
' Install a template pack
result = EXTRACT "templates/new-theme.zip" TO "themes/custom/"
TALK "Template installed with " + result.file_count + " files"
' Verify required files
required = ["style.css", "config.json", "templates/"]
missing = []
FOR EACH req IN required
files = LIST "themes/custom/" FILTER req
IF LEN(files) = 0 THEN
missing = APPEND(missing, req)
END IF
NEXT
IF LEN(missing) > 0 THEN
TALK "Warning: Missing required files: " + JOIN(missing, ", ")
ELSE
TALK "Template is complete and ready to use!"
END IF
```
---
## Return Value
Returns an object with extraction details:
| Property | Description |
|----------|-------------|
| `result.destination` | Destination folder path |
| `result.file_count` | Number of files extracted |
| `result.folder_count` | Number of folders created |
| `result.total_size` | Total size of extracted files |
| `result.files` | Array of extracted file paths |
| `result.extracted_at` | Extraction timestamp |
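For example, the `files` array can be used to report each extracted path, a sketch using the properties above:
```basic
' Walk the extracted file list
result = EXTRACT "uploads/pack.zip" TO "unpacked/"
TALK "Extracted " + result.file_count + " files:"
FOR EACH path IN result.files
TALK "- " + path
NEXT
```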
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = EXTRACT "uploads/data.zip" TO "extracted/"
IF ERROR THEN
PRINT "Extraction failed: " + ERROR MESSAGE
IF INSTR(ERROR MESSAGE, "corrupt") > 0 THEN
TALK "The archive appears to be corrupted. Please upload again."
ELSE IF INSTR(ERROR MESSAGE, "not found") > 0 THEN
TALK "Archive file not found."
ELSE IF INSTR(ERROR MESSAGE, "storage") > 0 THEN
TALK "Not enough storage space to extract the archive."
ELSE
TALK "Sorry, I couldn't extract the archive. Please try again."
END IF
ELSE
TALK "Extraction complete!"
END IF
ON ERROR GOTO 0
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `FILE_NOT_FOUND` | Archive doesn't exist | Verify archive path |
| `INVALID_ARCHIVE` | Not a valid ZIP file | Check file format |
| `CORRUPT_ARCHIVE` | Archive is damaged | Request new upload |
| `STORAGE_FULL` | Insufficient space | Clean up storage |
| `PERMISSION_DENIED` | Access blocked | Check permissions |
---
## Security Considerations
- **Path validation**: Extracted paths are validated to prevent directory traversal attacks
- **Size limits**: Maximum extracted size is enforced to prevent storage exhaustion
- **File type filtering**: Executable files can be blocked if configured
- **Malware scanning**: Uploaded archives can be scanned before extraction
---
## Size Limits
| Limit | Default | Notes |
|-------|---------|-------|
| Max archive size | 100 MB | For uploaded archives |
| Max extracted size | 500 MB | Total after extraction |
| Max files | 10,000 | Files in archive |
| Max path depth | 10 | Nested folder depth |
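Archives can be size-checked before extraction to fail fast with a friendly message. A sketch against the default 100 MB archive limit (the `LIST`-based size lookup follows the pattern used elsewhere on this page):
```basic
' Pre-check the archive against the 100 MB limit
files = LIST "uploads/" FILTER "big-batch.zip"
IF files[0].size > 104857600 THEN
TALK "That archive exceeds the 100 MB limit. Please split it and try again."
ELSE
EXTRACT "uploads/big-batch.zip" TO "extracted/"
END IF
```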
---
## Configuration
No specific configuration required. Uses bot's standard drive settings from `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
```
---
## Implementation Notes
- Implemented in Rust under `src/file/archive.rs`
- Supports standard ZIP format
- Preserves directory structure
- Handles nested folders
- Progress tracking for large archives
- Atomic extraction (temp folder, then move)
- Cleans up on failure
---
## Related Keywords
- [COMPRESS](keyword-compress.md) — Create ZIP archives
- [UPLOAD](keyword-upload.md) — Upload archives from users
- [LIST](keyword-list.md) — List extracted files
- [MOVE](keyword-move.md) — Organize extracted files
- [DELETE](keyword-delete.md) — Clean up after extraction
---
## Summary
`EXTRACT` unpacks ZIP archives to a destination folder. Use it to process uploaded archives, restore backups, handle bulk imports, and access bundled resources. The archive's directory structure is preserved, and the destination folder is created automatically. Combine with `UPLOAD` to accept user archives and `LIST` to discover extracted contents.

# GENERATE PDF
The `GENERATE PDF` keyword creates PDF documents from HTML templates or Markdown content, enabling bots to produce professional reports, invoices, certificates, and other documents.
> **Note:** This keyword uses spaces, not underscores. Write `GENERATE PDF` not `GENERATE_PDF`.
---
## Syntax
```basic
result = GENERATE PDF template, data, "output.pdf"
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `template` | String | Path to HTML template or Markdown file |
| `data` | Object | Template variables to substitute |
| `output` | String | Output path for the generated PDF |
---
## Description
`GENERATE PDF` renders an HTML or Markdown template into a PDF document, substituting placeholders with provided values. The generated PDF is stored in the bot's drive storage and can be downloaded, emailed, or processed further.
Use cases include:
- Generating invoices and receipts
- Creating reports and summaries
- Producing certificates and credentials
- Building contracts and agreements
- Creating personalized documents
---
## Examples
### Basic PDF Generation
```basic
' Generate PDF from template with data
data = #{
"title": "Invoice",
"date": FORMAT(NOW(), "MMMM DD, YYYY")
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/inv-001.pdf"
TALK "Invoice generated!"
```
### With Template Variables
```basic
' Generate PDF with data substitution
data = #{
"customer_name": customer.name,
"customer_email": customer.email,
"invoice_number": invoice_id,
"date": FORMAT(NOW(), "MMMM DD, YYYY"),
"items": order_items,
"subtotal": order_subtotal,
"tax": order_tax,
"total": order_total
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/inv-" + invoice_id + ".pdf"
TALK "Invoice #" + invoice_id + " generated!"
```
### Generate and Download
```basic
' Create PDF and send to user
data = #{
"title": "Monthly Report",
"period": FORMAT(NOW(), "MMMM YYYY"),
"data": report_data
}
result = GENERATE PDF "templates/report.html", data, "temp/report.pdf"
DOWNLOAD result.url AS "Monthly Report.pdf"
TALK "Here's your report!"
```
### Generate and Email
```basic
' Create PDF and email it
data = #{
"party_a": company_name,
"party_b": customer_name,
"effective_date": FORMAT(NOW(), "MMMM DD, YYYY"),
"terms": contract_terms
}
result = GENERATE PDF "templates/contract.html", data, "contracts/" + contract_id + ".pdf"
SEND MAIL customer_email, "Your Contract",
"Please find attached your contract for review.",
[result.localName]
TALK "Contract sent to " + customer_email
```
---
## Template Format
### HTML Template
```html
<!DOCTYPE html>
<html>
<head>
<style>
body { font-family: Arial, sans-serif; }
.header { text-align: center; margin-bottom: 20px; }
.invoice-number { color: #666; }
table { width: 100%; border-collapse: collapse; }
th, td { border: 1px solid #ddd; padding: 8px; }
.total { font-weight: bold; font-size: 1.2em; }
</style>
</head>
<body>
<div class="header">
<h1>INVOICE</h1>
<p class="invoice-number">{{invoice_number}}</p>
</div>
<p><strong>Date:</strong> {{date}}</p>
<p><strong>Customer:</strong> {{customer_name}}</p>
<table>
<tr>
<th>Item</th>
<th>Quantity</th>
<th>Price</th>
</tr>
{{#each items}}
<tr>
<td>{{this.name}}</td>
<td>{{this.quantity}}</td>
<td>${{this.price}}</td>
</tr>
{{/each}}
</table>
<p class="total">Total: ${{total}}</p>
</body>
</html>
```
### Markdown Template
```markdown
# {{title}}
**Date:** {{date}}
**Prepared for:** {{customer_name}}
## Summary
{{summary}}
## Details
{{#each items}}
- **{{this.name}}:** {{this.description}}
{{/each}}
---
Generated by General Bots
```
---
## Template Placeholders
| Syntax | Description |
|--------|-------------|
| `{{variable}}` | Simple variable substitution |
| `{{#each items}}...{{/each}}` | Loop over array |
| `{{#if condition}}...{{/if}}` | Conditional rendering |
| `{{#unless condition}}...{{/unless}}` | Negative conditional |
| `{{this.property}}` | Access property in loop |
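On the script side, the data object simply needs fields matching the placeholders. A sketch driving the loop and conditional helpers (field names are illustrative):
```basic
' Data for a template using {{#each items}} and {{#if discount}}
data = #{
"items": [ #{ "name": "Widget", "quantity": 2, "price": "9.99" } ],
"discount": false
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/demo.pdf"
```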
---
## Common Use Cases
### Invoice Generation
```basic
' Generate a complete invoice
items = FIND "order_items" WHERE order_id = order.id
data = #{
"invoice_number": "INV-" + FORMAT(order.id, "00000"),
"date": FORMAT(NOW(), "MMMM DD, YYYY"),
"due_date": FORMAT(DATEADD(NOW(), 30, "day"), "MMMM DD, YYYY"),
"customer_name": customer.name,
"customer_address": customer.address,
"items": items,
"subtotal": FORMAT(order.subtotal, "#,##0.00"),
"tax": FORMAT(order.tax, "#,##0.00"),
"total": FORMAT(order.total, "#,##0.00")
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/" + order.id + ".pdf"
TALK "Invoice generated: " + result.localName
```
### Certificate Generation
```basic
' Generate completion certificate
data = #{
"recipient_name": user.name,
"course_name": course.title,
"completion_date": FORMAT(NOW(), "MMMM DD, YYYY"),
"certificate_id": GUID(),
"instructor_name": course.instructor
}
result = GENERATE PDF "templates/certificate.html", data, "certificates/" + user.id + "-" + course.id + ".pdf"
DOWNLOAD result.url AS "Certificate - " + course.title + ".pdf"
TALK "Congratulations! Here's your certificate!"
```
### Report Generation
```basic
' Generate monthly sales report
sales_data = FIND "sales" WHERE
date >= DATEADD(NOW(), -30, "day")
summary = AGGREGATE sales_data SUM amount
count = AGGREGATE sales_data COUNT
data = #{
"title": "Monthly Sales Report",
"period": FORMAT(NOW(), "MMMM YYYY"),
"total_sales": FORMAT(summary, "$#,##0.00"),
"transaction_count": count,
"sales_data": sales_data,
"generated_at": FORMAT(NOW(), "YYYY-MM-DD HH:mm")
}
result = GENERATE PDF "templates/sales-report.html", data, "reports/sales-" + FORMAT(NOW(), "YYYYMM") + ".pdf"
TALK "Sales report generated!"
```
### Contract Generation
```basic
' Generate service agreement
data = #{
"contract_number": contract_id,
"client_name": client.name,
"client_company": client.company,
"service_description": selected_service.description,
"monthly_fee": FORMAT(selected_service.price, "$#,##0.00"),
"start_date": FORMAT(start_date, "MMMM DD, YYYY"),
"term_months": contract_term,
"end_date": FORMAT(DATEADD(start_date, contract_term, "month"), "MMMM DD, YYYY")
}
result = GENERATE PDF "templates/service-agreement.html", data, "contracts/sa-" + contract_id + ".pdf"
TALK "Service agreement ready for signature!"
```
---
## Return Value
Returns an object with generation details:
| Property | Description |
|----------|-------------|
| `result.url` | Full URL to the generated PDF (S3/MinIO path) |
| `result.localName` | Local filename of the generated PDF |
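Both properties are useful together: `url` for sharing a link and `localName` for attachments. A sketch:
```basic
' Share the PDF two ways from one result
data = #{ "total": "42.00" }
result = GENERATE PDF "templates/receipt.html", data, "receipts/r-001.pdf"
TALK "View your receipt online: " + result.url
SEND MAIL customer_email, "Your receipt", "Receipt attached.", [result.localName]
```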
---
## Error Handling
```basic
ON ERROR RESUME NEXT
data = #{
"customer_name": customer_name,
"total": order_total
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/test.pdf"
IF ERROR THEN
TALK "PDF generation failed: " + ERROR MESSAGE
IF INSTR(ERROR MESSAGE, "template") > 0 THEN
TALK "Template file not found."
ELSE IF INSTR(ERROR MESSAGE, "storage") > 0 THEN
TALK "Not enough storage space."
ELSE
TALK "Sorry, I couldn't generate the document. Please try again."
END IF
ELSE
TALK "PDF generated successfully!"
END IF
ON ERROR GOTO 0
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `TEMPLATE_NOT_FOUND` | Template file doesn't exist | Verify template path |
| `INVALID_TEMPLATE` | Template has syntax errors | Check template format |
| `MISSING_VARIABLE` | Required placeholder not provided | Include all variables |
| `STORAGE_FULL` | Insufficient space | Clean up storage |
| `RENDER_ERROR` | HTML/CSS rendering issue | Simplify template |
---
## Styling Tips
### Supported CSS
- Basic typography (fonts, sizes, colors)
- Box model (margins, padding, borders)
- Tables and layouts
- Page breaks (`page-break-before`, `page-break-after`)
- Print media queries (`@media print`)
### Page Setup
```html
<style>
@page {
size: A4;
margin: 2cm;
}
.page-break {
page-break-after: always;
}
@media print {
.no-print { display: none; }
}
</style>
```
---
## Configuration
No specific configuration required. Uses bot's standard drive settings from `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
```
---
## Implementation Notes
- Implemented in Rust under `src/file/pdf.rs`
- Uses headless browser rendering for HTML
- Supports embedded images (base64 or relative paths)
- Handles Unicode and special characters
- Maximum PDF size: 50 MB
- Template caching for performance
---
## Related Keywords
- [MERGE PDF](keyword-merge-pdf.md) — Combine multiple PDFs
- [FILL](keyword-fill.md) — Fill templates with data (alternative approach)
- [READ](keyword-read.md) — Read template content
- [DOWNLOAD](keyword-download.md) — Send PDF to user
- [SEND MAIL](keyword-send-mail.md) — Email PDF as attachment
- [WRITE](keyword-write.md) — Create template dynamically
---
## Summary
`GENERATE PDF` creates professional PDF documents from HTML or Markdown templates with variable substitution. Use it for invoices, reports, certificates, contracts, and any document that needs a polished format. Templates support loops, conditionals, and styling for flexible document generation. Combine with `DOWNLOAD` to deliver PDFs to users or `SEND MAIL` to email them as attachments.
> **Syntax reminder:** Always use `GENERATE PDF` (with space), not `GENERATE_PDF`.

# GRAPHQL
The `GRAPHQL` keyword executes GraphQL queries and mutations against external APIs, enabling bots to interact with modern GraphQL-based services.
---
## Syntax
```basic
result = GRAPHQL url, query
result = GRAPHQL url, query WITH variables
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `url` | String | The GraphQL endpoint URL |
| `query` | String | The GraphQL query or mutation |
| `WITH` | Clause | Optional variables for the query |
---
## Description
`GRAPHQL` sends queries and mutations to GraphQL APIs. GraphQL allows you to request exactly the data you need in a single request, making it efficient for complex data fetching. The keyword handles query formatting, variable substitution, and response parsing.
Use cases include:
- Fetching specific fields from APIs
- Creating, updating, or deleting data via mutations
- Querying nested relationships in one request
- Interacting with modern API platforms
---
## Examples
### Basic Query
```basic
' Simple query without variables
query = '
query {
users {
id
name
email
}
}
'
result = GRAPHQL "https://api.example.com/graphql", query
FOR EACH user IN result.data.users
TALK user.name + ": " + user.email
NEXT
```
### Query with Variables
```basic
' Query with variables
query = '
query GetUser($id: ID!) {
user(id: $id) {
id
name
email
orders {
id
total
status
}
}
}
'
result = GRAPHQL "https://api.example.com/graphql", query WITH id = user_id
TALK "User: " + result.data.user.name
TALK "Orders: " + LEN(result.data.user.orders)
```
### Mutation
```basic
' Create a new record
mutation = '
mutation CreateUser($name: String!, $email: String!) {
createUser(input: {name: $name, email: $email}) {
id
name
email
createdAt
}
}
'
result = GRAPHQL "https://api.example.com/graphql", mutation WITH
name = user_name,
email = user_email
TALK "User created with ID: " + result.data.createUser.id
```
### With Authentication
```basic
' Set authorization header for GraphQL
SET HEADER "Authorization", "Bearer " + api_token
query = '
query {
me {
id
name
role
}
}
'
result = GRAPHQL "https://api.example.com/graphql", query
CLEAR HEADERS
TALK "Logged in as: " + result.data.me.name
```
---
## Common Use Cases
### Fetch User Profile
```basic
' Get detailed user profile
query = '
query GetProfile($userId: ID!) {
user(id: $userId) {
id
name
email
avatar
settings {
theme
language
notifications
}
recentActivity {
action
timestamp
}
}
}
'
result = GRAPHQL api_url, query WITH userId = user.id
profile = result.data.user
TALK "Welcome back, " + profile.name + "!"
TALK "Theme: " + profile.settings.theme
```
### Search Products
```basic
' Search with filters
query = '
query SearchProducts($term: String!, $category: String, $limit: Int) {
products(search: $term, category: $category, first: $limit) {
edges {
node {
id
name
price
inStock
}
}
totalCount
}
}
'
result = GRAPHQL "https://api.store.com/graphql", query WITH
term = search_term,
category = selected_category,
limit = 10
products = result.data.products.edges
TALK "Found " + result.data.products.totalCount + " products:"
FOR EACH edge IN products
product = edge.node
TALK "- " + product.name + ": $" + product.price
NEXT
```
### Create Order
```basic
' Create order mutation
mutation = '
mutation CreateOrder($input: OrderInput!) {
createOrder(input: $input) {
id
orderNumber
total
status
estimatedDelivery
}
}
'
result = GRAPHQL "https://api.store.com/graphql", mutation WITH
input = '{"customerId": "' + customer_id + '", "items": ' + cart_items + '}'
order = result.data.createOrder
TALK "Order #" + order.orderNumber + " placed!"
TALK "Total: $" + order.total
TALK "Estimated delivery: " + order.estimatedDelivery
```
### Update Record
```basic
' Update mutation
mutation = '
mutation UpdateUser($id: ID!, $input: UserUpdateInput!) {
updateUser(id: $id, input: $input) {
id
name
email
updatedAt
}
}
'
result = GRAPHQL api_url, mutation WITH
id = user.id,
input = '{"name": "' + new_name + '", "email": "' + new_email + '"}'
TALK "Profile updated!"
```
### Delete Record
```basic
' Delete mutation
mutation = '
mutation DeleteItem($id: ID!) {
deleteItem(id: $id) {
success
message
}
}
'
result = GRAPHQL api_url, mutation WITH id = item_id
IF result.data.deleteItem.success THEN
TALK "Item deleted successfully"
ELSE
TALK "Delete failed: " + result.data.deleteItem.message
END IF
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = GRAPHQL api_url, query WITH id = resource_id
IF ERROR THEN
PRINT "GraphQL request failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't fetch that data. Please try again."
ELSE IF result.errors THEN
' GraphQL returned errors
FOR EACH err IN result.errors
PRINT "GraphQL error: " + err.message
NEXT
TALK "The request encountered an error: " + result.errors[0].message
ELSE
' Success
TALK "Data retrieved successfully!"
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `VALIDATION_ERROR` | Invalid query syntax | Check query format |
| `NOT_FOUND` | Resource doesn't exist | Verify ID/parameters |
| `UNAUTHORIZED` | Missing/invalid auth | Check authentication |
| `FORBIDDEN` | Insufficient permissions | Verify access rights |
| `VARIABLE_REQUIRED` | Missing required variable | Provide all variables |
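Many GraphQL servers expose these codes in each error's `extensions.code` field. This is a common convention, not something every API guarantees, so treat the sketch below as an assumption to verify against your server:

```basic
' Branch on the server's error code (sketch; assumes the API
' populates extensions.code, which is a convention, not a guarantee)
result = GRAPHQL api_url, query WITH id = resource_id
IF result.errors THEN
    code = result.errors[0].extensions.code
    IF code = "UNAUTHORIZED" THEN
        TALK "Your session expired. Please sign in again."
    ELSE IF code = "NOT_FOUND" THEN
        TALK "I couldn't find that record."
    ELSE
        TALK "Request failed: " + result.errors[0].message
    END IF
END IF
```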
---
## GraphQL vs REST
| Aspect | GraphQL | REST |
|--------|---------|------|
| **Data fetching** | Request exact fields | Fixed response structure |
| **Multiple resources** | Single request | Multiple requests |
| **Versioning** | Evolving schema | API versions (v1, v2) |
| **Use case** | Complex nested data | Simple CRUD operations |
```basic
' GraphQL - One request for nested data
query = '
query {
user(id: "123") {
name
orders {
items {
product { name }
}
}
}
}
'
result = GRAPHQL url, query
' REST equivalent would need multiple calls:
' GET /users/123
' GET /users/123/orders
' GET /orders/{id}/items for each order
' GET /products/{id} for each item
```
---
## Query Building Tips
### Request Only What You Need
```basic
' Good - request specific fields
query = '
query {
user(id: "123") {
name
email
}
}
'
' Avoid - requesting everything
' query {
' user(id: "123") {
' id name email phone address avatar settings ...
' }
' }
```
### Use Fragments for Reusable Fields
```basic
query = '
fragment UserFields on User {
id
name
email
}
query {
user(id: "123") {
...UserFields
}
users {
...UserFields
}
}
'
```
---
## Configuration
Configure HTTP settings in `config.csv`:
```csv
name,value
http-timeout,30
http-retry-count,3
```
API keys are stored in Vault:
```bash
vault kv put gbo/graphql/example api_key="your-api-key"
```
---
## Implementation Notes
- Implemented in Rust under `src/web_automation/graphql.rs`
- Sends POST requests with `application/json` content type
- Automatically formats query and variables
- Parses JSON response into accessible objects
- Supports custom headers via SET HEADER
- Handles both queries and mutations
---
## Related Keywords
- [POST](keyword-post.md) — REST POST requests
- [GET](keyword-get.md) — REST GET requests
- [SET HEADER](keyword-set-header.md) — Set authentication headers
- [SOAP](keyword-soap.md) — SOAP/XML web services
---
## Summary
`GRAPHQL` executes queries and mutations against GraphQL APIs. Use it when you need precise control over the data you fetch, especially for nested relationships. GraphQL is more efficient than REST for complex data needs, requiring fewer round trips. Always handle both network errors and GraphQL-specific errors in the response.

# INSERT
The `INSERT` keyword adds new records to database tables, enabling bots to store data collected from conversations and integrations.
---
## Syntax
```basic
INSERT INTO "table_name" WITH field1 = value1, field2 = value2
result = INSERT INTO "table_name" WITH field1 = value1, field2 = value2
INSERT INTO "table_name" ON connection WITH field1 = value1
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `table_name` | String | Name of the target database table |
| `WITH` | Clause | Field-value pairs for the new record |
| `ON connection` | String | Optional named database connection |
---
## Description
`INSERT` creates a new record in a database table. The `WITH` clause specifies the field names and values for the new row. The keyword returns the newly created record, including any auto-generated fields like `id`.
Use cases include:
- Storing user information collected during conversations
- Logging interactions and events
- Creating orders, tickets, or other business records
- Saving form submissions
---
## Examples
### Basic Insert
```basic
' Insert a new customer record
INSERT INTO "customers" WITH
name = "John Doe",
email = "john@example.com",
phone = "+1-555-0100"
TALK "Customer record created!"
```
### Insert with Return Value
```basic
' Insert and capture the new record
result = INSERT INTO "customers" WITH
name = customer_name,
email = customer_email,
created_at = NOW()
TALK "Customer created with ID: " + result.id
```
### Insert from Conversation
```basic
' Collect data from user and insert
TALK "What is your name?"
HEAR user_name
TALK "What is your email?"
HEAR user_email
TALK "What is your phone number?"
HEAR user_phone
result = INSERT INTO "contacts" WITH
name = user_name,
email = user_email,
phone = user_phone,
source = "chatbot",
created_at = NOW()
TALK "Thanks " + user_name + "! Your contact ID is " + result.id
```
### Insert Order
```basic
' Create a new order
result = INSERT INTO "orders" WITH
customer_id = user.id,
product_id = selected_product.id,
quantity = order_quantity,
total = selected_product.price * order_quantity,
status = "pending",
created_at = NOW()
TALK "Order #" + result.id + " created for $" + result.total
```
### Insert with Foreign Key
```basic
' Insert related records
customer = INSERT INTO "customers" WITH
name = customer_name,
email = customer_email
address = INSERT INTO "addresses" WITH
customer_id = customer.id,
street = street_address,
city = city_name,
postal_code = zip_code,
country = "US"
TALK "Customer and address saved!"
```
### Insert to Named Connection
```basic
' Insert to a specific database
INSERT INTO "audit_log" ON "analytics_db" WITH
event = "user_signup",
user_id = user.id,
timestamp = NOW(),
ip_address = session.ip
```
---
## Batch Insert
```basic
' Insert multiple records from a data source
new_contacts = READ "imports/contacts.csv" AS TABLE
inserted_count = 0
FOR EACH contact IN new_contacts
INSERT INTO "contacts" WITH
name = contact.name,
email = contact.email,
phone = contact.phone,
imported_at = NOW()
inserted_count = inserted_count + 1
NEXT
TALK "Imported " + inserted_count + " contacts"
```
---
## Common Use Cases
### Log User Interaction
```basic
' Log every conversation for analytics
INSERT INTO "conversation_logs" WITH
user_id = user.id,
session_id = session.id,
message = user_message,
response = bot_response,
timestamp = NOW()
```
### Create Support Ticket
```basic
' Create a support ticket from conversation
result = INSERT INTO "tickets" WITH
customer_id = user.id,
subject = ticket_subject,
description = ticket_description,
priority = "medium",
status = "open",
created_at = NOW()
TALK "Ticket #" + result.id + " created. Our team will respond within 24 hours."
```
### Save Form Submission
```basic
' Save a lead form submission
result = INSERT INTO "leads" WITH
first_name = form.first_name,
last_name = form.last_name,
email = form.email,
company = form.company,
interest = form.product_interest,
source = "website_chatbot",
created_at = NOW()
' Notify sales team
SEND MAIL "sales@company.com", "New Lead: " + form.first_name, "A new lead has been captured via chatbot."
```
### Record Event
```basic
' Record a business event
INSERT INTO "events" WITH
event_type = "purchase",
user_id = user.id,
data = '{"product_id": "' + product_id + '", "amount": ' + amount + '}',
occurred_at = NOW()
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = INSERT INTO "customers" WITH
name = customer_name,
email = customer_email
IF ERROR THEN
PRINT "Insert failed: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "duplicate") > 0 THEN
TALK "This email is already registered."
ELSE IF INSTR(ERROR_MESSAGE, "constraint") > 0 THEN
TALK "Please provide all required information."
ELSE
TALK "Sorry, I couldn't save your information. Please try again."
END IF
ELSE
TALK "Information saved successfully!"
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `DUPLICATE_KEY` | Unique constraint violated | Check for existing record first |
| `NOT_NULL_VIOLATION` | Required field missing | Include all required fields |
| `FOREIGN_KEY_VIOLATION` | Referenced record doesn't exist | Verify foreign key values |
| `CHECK_VIOLATION` | Value fails check constraint | Validate data before insert |
| `TABLE_NOT_FOUND` | Table doesn't exist | Verify table name |
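For the `DUPLICATE_KEY` case, one approach is to look the record up before inserting. Below is a sketch using `FIND`; the filter string format is an assumption, so check the FIND documentation for the exact syntax:

```basic
' Check for an existing record before inserting (sketch)
existing = FIND "customers", "email=" + customer_email
IF existing THEN
    TALK "You're already registered with that email."
ELSE
    INSERT INTO "customers" WITH
        name = customer_name,
        email = customer_email,
        created_at = NOW()
    TALK "Registration saved!"
END IF
```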
---
## Validation Before Insert
```basic
' Validate data before inserting
IF LEN(email) < 5 OR INSTR(email, "@") = 0 THEN
TALK "Please provide a valid email address."
ELSE IF LEN(name) < 2 THEN
TALK "Please provide your full name."
ELSE
result = INSERT INTO "contacts" WITH
name = name,
email = email,
created_at = NOW()
TALK "Contact saved!"
END IF
```
---
## INSERT vs MERGE
| Keyword | Purpose | Use When |
|---------|---------|----------|
| `INSERT` | Create new record | Adding new data |
| `MERGE` | Insert or update | Record may already exist |
```basic
' INSERT - Always creates new record (may fail if duplicate)
INSERT INTO "users" WITH email = "john@example.com", name = "John"
' MERGE - Creates or updates based on key
MERGE INTO "users" ON email = "john@example.com" WITH
email = "john@example.com",
name = "John Updated"
```
---
## Configuration
Database connection is configured in `config.csv`:
```csv
name,value
database-provider,postgres
database-pool-size,10
database-timeout,30
```
Database credentials are stored in Vault, not in config files.
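For example, credentials might be stored under a bot-specific path. The path and key names below are hypothetical; adjust them to your Vault layout:

```bash
vault kv put gbo/database/main username="bot_user" password="change-me"
```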
---
## Implementation Notes
- Implemented in Rust under `src/database/operations.rs`
- Uses parameterized queries to prevent SQL injection
- Auto-generates `id` if not specified (serial/UUID)
- Timestamps can be set with `NOW()` function
- Returns the complete inserted record including defaults
---
## Related Keywords
- [UPDATE](keyword-update.md) — Modify existing records
- [DELETE](keyword-delete.md) — Remove records
- [MERGE](keyword-merge.md) — Insert or update (upsert)
- [FIND](keyword-find.md) — Query records
- [TABLE](keyword-table.md) — Create tables
---
## Summary
`INSERT` creates new records in database tables. Use it to store user data, log events, create orders, and save form submissions. Always validate data before inserting and handle potential errors like duplicates and constraint violations. For cases where a record may already exist, consider using `MERGE` instead.

# KB COLLECTION STATS
The `KB COLLECTION STATS` keyword retrieves detailed statistics for a specific knowledge base collection, allowing granular monitoring of individual collections within the bot's KB.
---
## Syntax
```basic
stats = KB COLLECTION STATS "collection_name"
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `collection_name` | String | Name of the collection to query |
---
## Description
`KB COLLECTION STATS` queries Qdrant for detailed metrics about a specific collection. This is useful when you need information about a particular knowledge domain rather than the entire KB.
Returns a JSON object containing:
- Collection name
- Vector and point counts
- Storage metrics (disk and RAM)
- Segment information
- Index status
- Collection health status
---
## Return Value
Returns a JSON string with the following structure:
| Property | Type | Description |
|----------|------|-------------|
| `name` | String | Collection name |
| `vectors_count` | Number | Total vectors in collection |
| `points_count` | Number | Total points (documents) |
| `segments_count` | Number | Number of storage segments |
| `disk_data_size` | Number | Disk usage in bytes |
| `ram_data_size` | Number | RAM usage in bytes |
| `indexed_vectors_count` | Number | Vectors that are indexed |
| `status` | String | Collection status (green/yellow/red) |
---
## Examples
### Basic Collection Stats
```basic
' Get stats for a specific collection
stats_json = KB COLLECTION STATS "kb_products"
stats = PARSE_JSON(stats_json)
TALK "Products collection has " + stats.points_count + " documents"
TALK "Storage: " + FORMAT(stats.disk_data_size / 1024 / 1024, "#,##0.00") + " MB"
```
### Compare Multiple Collections
```basic
' Compare stats across collections
collections = ["kb_products", "kb_faqs", "kb_policies"]
TALK "Collection Statistics:"
FOR EACH coll_name IN collections
stats_json = KB COLLECTION STATS coll_name
stats = PARSE_JSON(stats_json)
disk_mb = stats.disk_data_size / 1024 / 1024
TALK " " + coll_name + ": " + stats.points_count + " docs, " + FORMAT(disk_mb, "#,##0.00") + " MB"
END FOR
```
### Collection Health Monitoring
```basic
' Check if collection is healthy
stats_json = KB COLLECTION STATS collection_name
stats = PARSE_JSON(stats_json)
IF stats.status = "green" THEN
TALK "Collection " + collection_name + " is healthy"
ELSE IF stats.status = "yellow" THEN
TALK "Warning: Collection " + collection_name + " needs optimization"
ELSE
TALK "Error: Collection " + collection_name + " has issues - status: " + stats.status
END IF
```
### Index Coverage Check
```basic
' Verify all vectors are indexed
stats_json = KB COLLECTION STATS "kb_main"
stats = PARSE_JSON(stats_json)
IF stats.vectors_count = 0 THEN
TALK "Collection is empty - nothing to index yet"
ELSE
index_coverage = (stats.indexed_vectors_count / stats.vectors_count) * 100
IF index_coverage < 100 THEN
TALK "Warning: Only " + FORMAT(index_coverage, "#0.0") + "% of vectors are indexed"
TALK "Search performance may be degraded"
ELSE
TALK "All vectors are fully indexed"
END IF
END IF
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
stats_json = KB COLLECTION STATS "kb_" + collection_name
IF ERROR THEN
IF INSTR(ERROR_MESSAGE, "not found") > 0 THEN
TALK "Collection '" + collection_name + "' does not exist"
ELSE
TALK "Error retrieving collection stats: " + ERROR_MESSAGE
END IF
ELSE
stats = PARSE_JSON(stats_json)
TALK "Collection has " + stats.points_count + " documents"
END IF
```
---
## Related Keywords
- [KB STATISTICS](keyword-kb-statistics.md) — Get overall KB statistics
- [KB LIST COLLECTIONS](keyword-kb-list-collections.md) — List all collections
- [KB DOCUMENTS COUNT](keyword-kb-documents-count.md) — Get total document count
- [KB STORAGE SIZE](keyword-kb-storage-size.md) — Get storage usage
---
## Implementation Notes
- Implemented in Rust under `src/basic/keywords/kb_statistics.rs`
- Queries Qdrant REST API at `/collections/{name}`
- Collection name should match exactly (case-sensitive)
- Returns empty if collection doesn't exist
---
## Summary
`KB COLLECTION STATS` provides detailed metrics for a specific knowledge base collection. Use it for granular monitoring, comparing collections, or checking health of individual knowledge domains. For overall KB statistics, use `KB STATISTICS` instead.

# KB DOCUMENTS ADDED SINCE
The `KB DOCUMENTS ADDED SINCE` keyword returns the count of documents added to the knowledge base within a specified number of days, useful for tracking ingestion activity and monitoring growth.
---
## Syntax
```basic
count = KB DOCUMENTS ADDED SINCE days
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `days` | Number | Number of days to look back |
---
## Description
`KB DOCUMENTS ADDED SINCE` queries the database to count how many documents were added to the bot's knowledge base within the specified time window. This is useful for tracking ingestion rates, monitoring content growth, and generating activity reports.
Use cases include:
- Tracking daily/weekly document ingestion
- Monitoring automated content pipelines
- Activity reports and dashboards
- Alert systems for low/high activity
- Growth trend analysis
---
## Return Value
Returns an integer representing the number of documents added within the specified period.
---
## Examples
### Basic Usage
```basic
' Count documents added in last 7 days
weekly_count = KB DOCUMENTS ADDED SINCE 7
TALK "Documents added this week: " + weekly_count
```
### Daily Activity Check
```basic
' Check today's ingestion
today_count = KB DOCUMENTS ADDED SINCE 1
IF today_count = 0 THEN
TALK "No new documents added today"
ELSE
TALK today_count + " documents added today"
END IF
```
### Growth Comparison
```basic
' Compare recent activity periods
last_week = KB DOCUMENTS ADDED SINCE 7
last_month = KB DOCUMENTS ADDED SINCE 30
weekly_average = last_month / 4
IF last_week > weekly_average * 1.5 THEN
TALK "Document ingestion is above average this week!"
ELSE IF last_week < weekly_average * 0.5 THEN
TALK "Document ingestion is below average this week"
ELSE
TALK "Document ingestion is on track"
END IF
```
### Activity Alert System
```basic
' Alert if no documents added recently
recent_docs = KB DOCUMENTS ADDED SINCE 3
IF recent_docs = 0 THEN
SEND MAIL admin_email,
"KB Activity Alert",
"No documents have been added to the knowledge base in the last 3 days. Please check content pipelines.",
[]
TALK "Alert sent - no recent KB activity"
END IF
```
### Scheduled Activity Report
```basic
' Weekly ingestion report (run via SET SCHEDULE)
day_1 = KB DOCUMENTS ADDED SINCE 1
day_7 = KB DOCUMENTS ADDED SINCE 7
day_30 = KB DOCUMENTS ADDED SINCE 30
report = "KB Ingestion Report\n\n"
report = report + "Last 24 hours: " + day_1 + " documents\n"
report = report + "Last 7 days: " + day_7 + " documents\n"
report = report + "Last 30 days: " + day_30 + " documents\n"
report = report + "\nDaily average (30 days): " + FORMAT(day_30 / 30, "#,##0.0") + "\n"
report = report + "Weekly average (30 days): " + FORMAT(day_30 / 4, "#,##0.0")
SEND MAIL admin_email, "Weekly KB Ingestion Report", report, []
```
### Pipeline Monitoring
```basic
' Monitor automated document pipeline
expected_daily = 50 ' Expected documents per day
tolerance = 0.2 ' 20% tolerance
yesterday_count = KB DOCUMENTS ADDED SINCE 1
min_expected = expected_daily * (1 - tolerance)
max_expected = expected_daily * (1 + tolerance)
IF yesterday_count < min_expected THEN
TALK "Warning: Only " + yesterday_count + " documents ingested yesterday (expected ~" + expected_daily + ")"
LOG WARN "Low document ingestion: " + yesterday_count
ELSE IF yesterday_count > max_expected THEN
TALK "Note: High ingestion yesterday - " + yesterday_count + " documents"
LOG INFO "High document ingestion: " + yesterday_count
ELSE
TALK "Document pipeline operating normally: " + yesterday_count + " documents yesterday"
END IF
```
---
## Use with Other KB Keywords
```basic
' Comprehensive KB activity check
total_docs = KB DOCUMENTS COUNT
recent_docs = KB DOCUMENTS ADDED SINCE 7
storage_mb = KB STORAGE SIZE
TALK "Knowledge Base Status:"
TALK " Total documents: " + FORMAT(total_docs, "#,##0")
TALK " Added this week: " + recent_docs
TALK " Storage used: " + FORMAT(storage_mb, "#,##0.00") + " MB"
IF recent_docs > 0 THEN
pct_new = (recent_docs / total_docs) * 100
TALK " " + FORMAT(pct_new, "#,##0.0") + "% of KB is from this week"
END IF
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
count = KB DOCUMENTS ADDED SINCE 7
IF ERROR THEN
PRINT "Failed to get document count: " + ERROR_MESSAGE
count = 0
END IF
TALK "Documents added recently: " + count
```
---
## Related Keywords
- [KB STATISTICS](keyword-kb-statistics.md) — Comprehensive KB statistics
- [KB DOCUMENTS COUNT](keyword-kb-documents-count.md) — Total document count
- [KB COLLECTION STATS](keyword-kb-collection-stats.md) — Per-collection statistics
- [KB STORAGE SIZE](keyword-kb-storage-size.md) — Storage usage
- [KB LIST COLLECTIONS](keyword-kb-list-collections.md) — List collections
---
## Implementation Notes
- Implemented in Rust under `src/basic/keywords/kb_statistics.rs`
- Queries PostgreSQL `kb_documents` table by `created_at` timestamp
- Filters by current bot ID
- Returns 0 if no documents found or on error
- Days parameter is converted to interval for SQL query
---
## Summary
`KB DOCUMENTS ADDED SINCE` provides a simple way to track recent document ingestion activity. Use it for monitoring content pipelines, generating activity reports, and creating alerts for unusual activity levels. Combine with other KB keywords for comprehensive knowledge base monitoring.

# KB DOCUMENTS COUNT
The `KB DOCUMENTS COUNT` keyword returns the total number of documents stored in the bot's knowledge base.
---
## Syntax
```basic
count = KB DOCUMENTS COUNT
```
---
## Parameters
None. Returns the count for the current bot's knowledge base.
---
## Description
`KB DOCUMENTS COUNT` queries the database to return the total number of documents indexed in the bot's knowledge base. It is a lighter-weight alternative to `KB STATISTICS` when you only need the document count.
Use cases include:
- Checking if knowledge base has content
- Displaying document counts in conversations
- Conditional logic based on KB size
- Simple monitoring and logging
---
## Return Value
Returns an integer representing the total document count. Returns `0` if no documents exist or if an error occurs.
---
## Examples
### Basic Count Check
```basic
' Check how many documents are in KB
doc_count = KB DOCUMENTS COUNT
TALK "The knowledge base contains " + doc_count + " documents."
```
### Conditional KB Usage
```basic
' Only use KB if it has content
doc_count = KB DOCUMENTS COUNT
IF doc_count > 0 THEN
USE KB
answer = SEARCH user_question
TALK answer
ELSE
TALK "The knowledge base is empty. Please add some documents first."
END IF
```
### Admin Status Report
```basic
' Quick status check for administrators
doc_count = KB DOCUMENTS COUNT
IF doc_count = 0 THEN
status = "⚠️ Empty - No documents indexed"
ELSE IF doc_count < 10 THEN
status = "📄 Minimal - " + doc_count + " documents"
ELSE IF doc_count < 100 THEN
status = "📚 Growing - " + doc_count + " documents"
ELSE
status = "✅ Robust - " + doc_count + " documents"
END IF
TALK "Knowledge Base Status: " + status
```
### Monitoring Growth
```basic
' Log document count for tracking
doc_count = KB DOCUMENTS COUNT
timestamp = FORMAT(NOW(), "YYYY-MM-DD HH:mm")
PRINT "[" + timestamp + "] KB document count: " + doc_count
' Store for trending
INSERT INTO "kb_count_log" WITH
timestamp = NOW(),
count = doc_count
```
### Before/After Import Check
```basic
' Check count before and after document import
before_count = KB DOCUMENTS COUNT
' Import new documents
IMPORT "new-documents.zip"
after_count = KB DOCUMENTS COUNT
added = after_count - before_count
TALK "Import complete! Added " + added + " new documents."
TALK "Total documents now: " + after_count
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
count = KB DOCUMENTS COUNT
IF ERROR THEN
PRINT "Error getting document count: " + ERROR_MESSAGE
count = 0
END IF
IF count > 0 THEN
TALK "Found " + count + " documents in the knowledge base."
ELSE
TALK "No documents found or unable to query knowledge base."
END IF
```
---
## Related Keywords
- [KB STATISTICS](keyword-kb-statistics.md) — Get comprehensive KB statistics
- [KB DOCUMENTS ADDED SINCE](keyword-kb-documents-added-since.md) — Count recently added documents
- [KB STORAGE SIZE](keyword-kb-storage-size.md) — Get storage usage
- [KB LIST COLLECTIONS](keyword-kb-list-collections.md) — List all collections
- [CLEAR KB](keyword-clear-kb.md) — Clear knowledge base
- [USE KB](keyword-use-kb.md) — Enable KB for queries
---
## Implementation Notes
- Implemented in Rust under `src/basic/keywords/kb_statistics.rs`
- Queries PostgreSQL `kb_documents` table
- Filters by current bot ID
- Returns 0 on error (does not throw)
- Very fast operation (single COUNT query)
---
## Summary
`KB DOCUMENTS COUNT` provides a quick way to get the total number of documents in the knowledge base. Use it for simple checks, conditional logic, and lightweight monitoring. For more detailed statistics, use `KB STATISTICS` instead.

# KB LIST COLLECTIONS
The `KB LIST COLLECTIONS` keyword returns a list of all knowledge base collection names associated with the current bot.
---
## Syntax
```basic
collections = KB LIST COLLECTIONS
```
---
## Parameters
None. Returns collections for the current bot.
---
## Description
`KB LIST COLLECTIONS` queries Qdrant to retrieve all collection names that belong to the current bot. Collections are filtered by the bot ID prefix (`kb_{bot_id}`), returning only collections owned by the calling bot.
Use cases include:
- Discovering available knowledge domains
- Building dynamic collection selection interfaces
- Admin dashboards and monitoring
- Iterating over collections for batch operations
- Validating collection existence before operations
---
## Return Value
Returns an array of collection name strings. Returns an empty array if no collections exist.
Example return value:
```basic
["kb_products", "kb_faqs", "kb_policies", "kb_support"]
```
---
## Examples
### Basic Collection Listing
```basic
' List all KB collections
collections = KB LIST COLLECTIONS
TALK "Available knowledge bases:"
FOR EACH coll IN collections
TALK " - " + coll
END FOR
```
### Check Collection Existence
```basic
' Verify a collection exists before using it
collections = KB LIST COLLECTIONS
target_collection = "kb_products"
found = false
FOR EACH coll IN collections
IF coll = target_collection THEN
found = true
EXIT FOR
END IF
END FOR
IF found THEN
TALK "Products knowledge base is available"
USE KB target_collection
ELSE
TALK "Products knowledge base not found"
END IF
```
### Admin Collection Overview
```basic
' Generate overview of all collections
collections = KB LIST COLLECTIONS
IF LEN(collections) = 0 THEN
TALK "No knowledge base collections found."
ELSE
TALK "Found " + LEN(collections) + " collections:"
FOR EACH coll IN collections
stats_json = KB COLLECTION STATS coll
stats = PARSE_JSON(stats_json)
disk_mb = stats.disk_data_size / 1024 / 1024
TALK " " + coll + ": " + stats.points_count + " docs (" + FORMAT(disk_mb, "#,##0.00") + " MB)"
END FOR
END IF
```
### Dynamic Collection Selection
```basic
' Let user choose a knowledge base
collections = KB LIST COLLECTIONS
TALK "Which knowledge base would you like to search?"
TALK "Available options:"
idx = 1
FOR EACH coll IN collections
' Remove kb_ prefix for display
display_name = REPLACE(coll, "kb_", "")
TALK idx + ". " + display_name
idx = idx + 1
END FOR
HEAR choice AS NUMBER
IF choice > 0 AND choice <= LEN(collections) THEN
selected = collections[choice - 1]
USE KB selected
TALK "Now searching in: " + selected
ELSE
TALK "Invalid selection"
END IF
```
### Batch Operations on All Collections
```basic
' Get stats for all collections
collections = KB LIST COLLECTIONS
total_docs = 0
total_size = 0
FOR EACH coll IN collections
stats_json = KB COLLECTION STATS coll
stats = PARSE_JSON(stats_json)
total_docs = total_docs + stats.points_count
total_size = total_size + stats.disk_data_size
END FOR
TALK "Across " + LEN(collections) + " collections:"
TALK " Total documents: " + FORMAT(total_docs, "#,##0")
TALK " Total size: " + FORMAT(total_size / 1024 / 1024, "#,##0.00") + " MB"
```
### Collection Health Check
```basic
' Check health of all collections
collections = KB LIST COLLECTIONS
issues = []
FOR EACH coll IN collections
stats_json = KB COLLECTION STATS coll
stats = PARSE_JSON(stats_json)
IF stats.status <> "green" THEN
issues = issues + [coll + " (" + stats.status + ")"]
END IF
END FOR
IF LEN(issues) > 0 THEN
TALK "Collections with issues:"
FOR EACH issue IN issues
TALK " ⚠️ " + issue
END FOR
ELSE
TALK "✅ All " + LEN(collections) + " collections are healthy"
END IF
```
### Collection-Based Routing
```basic
' Route query to appropriate collection based on topic
collections = KB LIST COLLECTIONS
' Determine best collection for user's question
topic = LLM "Classify this question into one category: products, support, policies, or general. Question: " + user_question
topic = TRIM(LOWER(topic))
target = "kb_" + topic
' Check if collection exists
collection_found = false
FOR EACH coll IN collections
IF coll = target THEN
collection_found = true
EXIT FOR
END IF
END FOR
IF collection_found THEN
USE KB target
answer = SEARCH user_question
ELSE
' Fall back to searching all collections
USE KB
answer = SEARCH user_question
END IF
TALK answer
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
collections = KB LIST COLLECTIONS
IF ERROR THEN
PRINT "Failed to list collections: " + ERROR_MESSAGE
collections = []
END IF
IF LEN(collections) = 0 THEN
TALK "No knowledge base collections available"
ELSE
TALK "Found " + LEN(collections) + " knowledge base collections"
END IF
```
---
## Related Keywords
- [KB STATISTICS](keyword-kb-statistics.md) — Comprehensive KB statistics
- [KB COLLECTION STATS](keyword-kb-collection-stats.md) — Stats for specific collection
- [KB DOCUMENTS COUNT](keyword-kb-documents-count.md) — Total document count
- [KB STORAGE SIZE](keyword-kb-storage-size.md) — Storage usage in MB
- [USE KB](keyword-use-kb.md) — Enable KB for queries
- [CLEAR KB](keyword-clear-kb.md) — Clear knowledge base content
---
## Implementation Notes
- Implemented in Rust under `src/basic/keywords/kb_statistics.rs`
- Queries Qdrant REST API at `/collections`
- Filters results by bot ID prefix (`kb_{bot_id}`)
- Returns array of Dynamic strings for easy iteration
- Empty array returned if no collections or on error
- Collection names include the full prefix (e.g., `kb_products`)
---
## Summary
`KB LIST COLLECTIONS` provides a way to discover all knowledge base collections belonging to the current bot. Use it for dynamic collection selection, admin dashboards, batch operations, or validating collection existence before performing operations. Combine with `KB COLLECTION STATS` to get detailed information about each collection.

# KB STATISTICS
The `KB STATISTICS` keyword retrieves comprehensive statistics about the bot's knowledge base, including document counts, vector counts, storage usage, and collection information from the Qdrant vector database.
---
## Syntax
```basic
stats = KB STATISTICS
```
---
## Parameters
None. Returns statistics for the current bot's knowledge base.
---
## Description
`KB STATISTICS` queries the Qdrant vector database to gather comprehensive metrics about the bot's knowledge base. This is useful for monitoring KB health, planning capacity, generating admin reports, and tracking document ingestion over time.
The keyword returns a JSON object containing:
- Total collections count
- Total documents across all collections
- Total vectors stored
- Disk and RAM usage
- Documents added in the last week/month
- Per-collection statistics
Use cases include:
- Admin dashboards and monitoring
- Capacity planning and alerts
- Usage reporting and analytics
- Knowledge base health checks
- Cost tracking for vector storage
---
## Return Value
Returns a JSON string with the following structure:
| Property | Type | Description |
|----------|------|-------------|
| `total_collections` | Number | Number of KB collections for this bot |
| `total_documents` | Number | Total document count across collections |
| `total_vectors` | Number | Total vectors stored in Qdrant |
| `total_disk_size_mb` | Number | Disk storage usage in MB |
| `total_ram_size_mb` | Number | RAM usage in MB |
| `documents_added_last_week` | Number | Documents added in past 7 days |
| `documents_added_last_month` | Number | Documents added in past 30 days |
| `collections` | Array | Detailed stats per collection |
### Collection Stats Object
Each collection in the `collections` array contains:
| Property | Type | Description |
|----------|------|-------------|
| `name` | String | Collection name |
| `vectors_count` | Number | Vectors in this collection |
| `points_count` | Number | Points (documents) count |
| `segments_count` | Number | Storage segments |
| `disk_data_size` | Number | Disk size in bytes |
| `ram_data_size` | Number | RAM size in bytes |
| `indexed_vectors_count` | Number | Indexed vectors |
| `status` | String | Collection status (green/yellow/red) |
---
## Examples
### Basic Statistics Retrieval
```basic
' Get KB statistics
stats_json = KB STATISTICS
' Parse the JSON response
stats = PARSE_JSON(stats_json)
TALK "Your knowledge base has:"
TALK " - " + stats.total_documents + " documents"
TALK " - " + stats.total_vectors + " vectors"
TALK " - " + FORMAT(stats.total_disk_size_mb, "#,##0.00") + " MB on disk"
```
### Admin Dashboard Report
```basic
' Generate KB health report for administrators
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
report = "## Knowledge Base Report\n\n"
report = report + "**Generated:** " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "\n\n"
report = report + "### Summary\n"
report = report + "- Collections: " + stats.total_collections + "\n"
report = report + "- Total Documents: " + FORMAT(stats.total_documents, "#,##0") + "\n"
report = report + "- Total Vectors: " + FORMAT(stats.total_vectors, "#,##0") + "\n"
report = report + "- Disk Usage: " + FORMAT(stats.total_disk_size_mb, "#,##0.00") + " MB\n"
report = report + "- RAM Usage: " + FORMAT(stats.total_ram_size_mb, "#,##0.00") + " MB\n\n"
report = report + "### Recent Activity\n"
report = report + "- Added this week: " + stats.documents_added_last_week + "\n"
report = report + "- Added this month: " + stats.documents_added_last_month + "\n"
TALK report
```
### Storage Alert System
```basic
' Check KB storage and alert if threshold exceeded
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
storage_threshold_mb = 1000 ' 1 GB warning threshold
critical_threshold_mb = 5000 ' 5 GB critical threshold
IF stats.total_disk_size_mb > critical_threshold_mb THEN
SEND MAIL admin_email,
"CRITICAL: KB Storage Alert",
"Knowledge base storage is at " + FORMAT(stats.total_disk_size_mb, "#,##0") + " MB. Immediate action required.",
[]
TALK "Critical storage alert sent to administrator"
ELSE IF stats.total_disk_size_mb > storage_threshold_mb THEN
SEND MAIL admin_email,
"Warning: KB Storage Growing",
"Knowledge base storage is at " + FORMAT(stats.total_disk_size_mb, "#,##0") + " MB. Consider cleanup.",
[]
TALK "Storage warning sent to administrator"
ELSE
TALK "Storage levels are healthy: " + FORMAT(stats.total_disk_size_mb, "#,##0") + " MB"
END IF
```
### Collection Health Check
```basic
' Check health of each collection
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
unhealthy_collections = []
FOR EACH collection IN stats.collections
IF collection.status <> "green" THEN
unhealthy_collections = unhealthy_collections + [collection.name]
PRINT "Warning: Collection " + collection.name + " status is " + collection.status
END IF
END FOR
IF LEN(unhealthy_collections) > 0 THEN
TALK "Found " + LEN(unhealthy_collections) + " collections needing attention"
ELSE
TALK "All " + stats.total_collections + " collections are healthy"
END IF
```
### Scheduled Statistics Report
```basic
' Weekly KB statistics email (run via SET SCHEDULE)
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
' Calculate week-over-week growth
weekly_growth = stats.documents_added_last_week
monthly_growth = stats.documents_added_last_month
avg_weekly = monthly_growth / 4
body = "Weekly Knowledge Base Statistics\n\n"
body = body + "Total Documents: " + FORMAT(stats.total_documents, "#,##0") + "\n"
body = body + "Documents Added This Week: " + weekly_growth + "\n"
body = body + "4-Week Average: " + FORMAT(avg_weekly, "#,##0.0") + "\n"
body = body + "Storage Used: " + FORMAT(stats.total_disk_size_mb, "#,##0.00") + " MB\n"
body = body + "\nCollections:\n"
FOR EACH coll IN stats.collections
body = body + " - " + coll.name + ": " + FORMAT(coll.points_count, "#,##0") + " docs\n"
END FOR
SEND MAIL admin_email, "Weekly KB Report - " + FORMAT(NOW(), "YYYY-MM-DD"), body, []
```
### Usage Analytics Integration
```basic
' Log KB stats to analytics system
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
' Store metrics for trending
metrics = #{
"timestamp": FORMAT(NOW(), "YYYY-MM-DDTHH:mm:ss"),
"bot_id": bot_id,
"total_docs": stats.total_documents,
"total_vectors": stats.total_vectors,
"disk_mb": stats.total_disk_size_mb,
"ram_mb": stats.total_ram_size_mb,
"collections": stats.total_collections
}
INSERT "kb_metrics", metrics
PRINT "KB metrics logged at " + metrics.timestamp
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
stats_json = KB STATISTICS
IF ERROR THEN
PRINT "Failed to get KB statistics: " + ERROR_MESSAGE
TALK "Sorry, I couldn't retrieve knowledge base statistics right now."
ELSE
IF stats_json = "" THEN
TALK "No knowledge base data available yet."
ELSE
stats = PARSE_JSON(stats_json)
TALK "KB contains " + stats.total_documents + " documents"
END IF
END IF
```
---
## Related Keywords
- [KB COLLECTION STATS](keyword-kb-collection-stats.md) — Get stats for a specific collection
- [KB DOCUMENTS COUNT](keyword-kb-documents-count.md) — Get total document count
- [KB DOCUMENTS ADDED SINCE](keyword-kb-documents-added-since.md) — Count recently added documents
- [KB LIST COLLECTIONS](keyword-kb-list-collections.md) — List all KB collections
- [KB STORAGE SIZE](keyword-kb-storage-size.md) — Get storage usage in MB
- [CLEAR KB](keyword-clear-kb.md) — Clear knowledge base content
- [USE KB](keyword-use-kb.md) — Enable knowledge base for queries
---
## Configuration
No specific configuration required. The keyword uses the Qdrant connection configured at the system level.
Ensure Qdrant is running and accessible:
```csv
name,value
qdrant-url,http://localhost:6333
```
---
## Implementation Notes
- Implemented in Rust under `src/basic/keywords/kb_statistics.rs`
- Queries Qdrant REST API for collection statistics
- Filters collections by bot ID prefix (`kb_{bot_id}`)
- Document counts from both Qdrant and PostgreSQL
- Returns JSON string for flexible parsing
- May take 1-2 seconds for large knowledge bases
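
Because the call may take 1-2 seconds on large knowledge bases, results can be cached between turns; a minimal sketch using bot memory (the cache key name is an assumption):

```basic
' Sketch: cache KB statistics to avoid repeated Qdrant round-trips
cached = GET BOT MEMORY "kb_stats_cache"
IF cached = "" THEN
cached = KB STATISTICS
SET BOT MEMORY "kb_stats_cache", cached
END IF
stats = PARSE_JSON(cached)
TALK "KB contains " + stats.total_documents + " documents"
```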
---
## Summary
`KB STATISTICS` provides comprehensive metrics about the bot's knowledge base, enabling administrators to monitor health, track growth, and plan capacity. Use it for dashboards, alerts, and reporting. For simpler queries, use the specialized keywords like `KB DOCUMENTS COUNT` or `KB STORAGE SIZE`.

# KB STORAGE SIZE
The `KB STORAGE SIZE` keyword returns the total disk storage used by the bot's knowledge base in megabytes.
---
## Syntax
```basic
size_mb = KB STORAGE SIZE
```
---
## Parameters
None. Returns the storage size for the current bot's knowledge base.
---
## Description
`KB STORAGE SIZE` queries the Qdrant vector database to calculate the total disk storage consumed by all of the bot's knowledge base collections. This is useful for monitoring storage usage, capacity planning, and cost management.
Use cases include:
- Storage monitoring and alerts
- Capacity planning
- Cost tracking for vector storage
- Admin dashboards
- Cleanup decisions
---
## Return Value
Returns a floating-point number representing storage size in megabytes (MB).
---
## Examples
### Basic Storage Check
```basic
' Get current KB storage usage
storage_mb = KB STORAGE SIZE
TALK "Knowledge base is using " + FORMAT(storage_mb, "#,##0.00") + " MB of storage"
```
### Storage Threshold Alert
```basic
' Alert if storage exceeds threshold
storage_mb = KB STORAGE SIZE
max_storage_mb = 1000 ' 1 GB limit
IF storage_mb > max_storage_mb THEN
SEND MAIL admin_email,
"KB Storage Alert",
"Knowledge base storage (" + FORMAT(storage_mb, "#,##0") + " MB) has exceeded the " + max_storage_mb + " MB threshold.",
[]
TALK "Storage alert sent to administrator"
ELSE
remaining = max_storage_mb - storage_mb
TALK "Storage OK: " + FORMAT(storage_mb, "#,##0") + " MB used, " + FORMAT(remaining, "#,##0") + " MB remaining"
END IF
```
### Storage Tiers Display
```basic
' Display storage status with tier indicators
storage_mb = KB STORAGE SIZE
IF storage_mb < 100 THEN
tier = "🟢 Light"
ELSE IF storage_mb < 500 THEN
tier = "🟡 Moderate"
ELSE IF storage_mb < 1000 THEN
tier = "🟠 Heavy"
ELSE
tier = "🔴 Critical"
END IF
TALK "Storage Status: " + tier
TALK "Current usage: " + FORMAT(storage_mb, "#,##0.00") + " MB"
```
### Cost Estimation
```basic
' Estimate storage costs (example pricing)
storage_mb = KB STORAGE SIZE
storage_gb = storage_mb / 1024
cost_per_gb = 0.25 ' Example: $0.25 per GB per month
monthly_cost = storage_gb * cost_per_gb
TALK "Current storage: " + FORMAT(storage_gb, "#,##0.00") + " GB"
TALK "Estimated monthly cost: $" + FORMAT(monthly_cost, "#,##0.00")
```
### Storage Growth Tracking
```basic
' Log storage for trend analysis
storage_mb = KB STORAGE SIZE
doc_count = KB DOCUMENTS COUNT
' Calculate average size per document (default to 0 so the INSERT below always has a value)
avg_size_kb = 0
IF doc_count > 0 THEN
avg_size_kb = (storage_mb * 1024) / doc_count
TALK "Average document size: " + FORMAT(avg_size_kb, "#,##0.00") + " KB"
END IF
' Store for trending
INSERT "storage_metrics", #{
"timestamp": NOW(),
"storage_mb": storage_mb,
"doc_count": doc_count,
"avg_size_kb": avg_size_kb
}
```
### Comprehensive Storage Report
```basic
' Generate storage report
storage_mb = KB STORAGE SIZE
doc_count = KB DOCUMENTS COUNT
recent_docs = KB DOCUMENTS ADDED SINCE 30
' Calculate metrics
storage_gb = storage_mb / 1024
avg_doc_kb = 0
IF doc_count > 0 THEN
avg_doc_kb = (storage_mb * 1024) / doc_count
END IF
report = "## KB Storage Report\n\n"
report = report + "**Date:** " + FORMAT(NOW(), "YYYY-MM-DD") + "\n\n"
report = report + "### Storage Metrics\n"
report = report + "- Total Storage: " + FORMAT(storage_mb, "#,##0.00") + " MB"
report = report + " (" + FORMAT(storage_gb, "#,##0.00") + " GB)\n"
report = report + "- Total Documents: " + FORMAT(doc_count, "#,##0") + "\n"
report = report + "- Avg Size per Doc: " + FORMAT(avg_doc_kb, "#,##0.00") + " KB\n"
report = report + "- Docs Added (30 days): " + recent_docs + "\n"
TALK report
```
### Cleanup Decision Helper
```basic
' Help decide if cleanup is needed
storage_mb = KB STORAGE SIZE
max_storage = 2000 ' 2 GB limit
usage_pct = (storage_mb / max_storage) * 100
IF usage_pct > 80 THEN
TALK "⚠️ Storage at " + FORMAT(usage_pct, "#0.0") + "% capacity"
TALK "Consider cleaning up old or unused documents"
TALK "Use CLEAR KB to remove content if needed"
ELSE IF usage_pct > 60 THEN
TALK "📊 Storage at " + FORMAT(usage_pct, "#0.0") + "% capacity"
TALK "Storage is healthy but monitor growth"
ELSE
TALK "✅ Storage at " + FORMAT(usage_pct, "#0.0") + "% capacity"
TALK "Plenty of room for more documents"
END IF
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
storage_mb = KB STORAGE SIZE
IF ERROR THEN
PRINT "Error getting storage size: " + ERROR_MESSAGE
storage_mb = 0.0
END IF
IF storage_mb > 0 THEN
TALK "Storage usage: " + FORMAT(storage_mb, "#,##0.00") + " MB"
ELSE
TALK "Unable to determine storage usage"
END IF
```
---
## Related Keywords
- [KB STATISTICS](keyword-kb-statistics.md) — Comprehensive KB statistics including storage
- [KB DOCUMENTS COUNT](keyword-kb-documents-count.md) — Total document count
- [KB DOCUMENTS ADDED SINCE](keyword-kb-documents-added-since.md) — Recently added documents
- [KB COLLECTION STATS](keyword-kb-collection-stats.md) — Per-collection statistics
- [KB LIST COLLECTIONS](keyword-kb-list-collections.md) — List all collections
- [CLEAR KB](keyword-clear-kb.md) — Clear knowledge base content
---
## Configuration
No specific configuration required. Uses the Qdrant connection configured at the system level.
---
## Implementation Notes
- Implemented in Rust under `src/basic/keywords/kb_statistics.rs`
- Queries Qdrant REST API for collection sizes
- Aggregates disk usage across all bot collections
- Returns value in megabytes (MB) as float
- Returns 0.0 on error (does not throw)
- May take 1-2 seconds for large knowledge bases
---
## Summary
`KB STORAGE SIZE` provides a quick way to check how much disk storage the knowledge base is consuming. Use it for monitoring, capacity planning, cost estimation, and cleanup decisions. For more detailed storage breakdown by collection, use `KB STATISTICS` instead.

# LIST
The `LIST` keyword retrieves a directory listing from the bot's drive storage, returning information about files and subdirectories.
---
## Syntax
```basic
files = LIST "path/"
files = LIST "path/" FILTER "*.pdf"
files = LIST "path/" RECURSIVE
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `path` | String | Directory path to list (must end with `/`) |
| `FILTER` | String | Optional glob pattern to filter results |
| `RECURSIVE` | Flag | Include files in subdirectories |
---
## Description
`LIST` returns an array of file and directory information from the specified path in the bot's storage. Each item in the result includes metadata such as name, size, type, and modification date.
Use cases include:
- Browsing user uploads
- Finding files matching patterns
- Checking if files exist
- Building file inventories
- Processing batches of files
---
## Examples
### Basic Directory Listing
```basic
' List all files in a directory
files = LIST "documents/"
FOR EACH file IN files
TALK file.name + " (" + file.size + " bytes)"
NEXT
```
### Filter by Extension
```basic
' List only PDF files
pdfs = LIST "documents/" FILTER "*.pdf"
TALK "Found " + LEN(pdfs) + " PDF files"
FOR EACH pdf IN pdfs
TALK "- " + pdf.name
NEXT
```
### Recursive Listing
```basic
' List all files including subdirectories
all_files = LIST "uploads/" RECURSIVE
TALK "Total files: " + LEN(all_files)
```
### Check File Exists
```basic
' Check if a specific file exists
files = LIST "reports/"
found = false
FOR EACH file IN files
IF file.name = "monthly-report.pdf" THEN
found = true
EXIT FOR
END IF
NEXT
IF found THEN
TALK "Report found!"
ELSE
TALK "Report not found. Would you like me to generate one?"
END IF
```
### Find Recent Files
```basic
' List files modified in last 24 hours
files = LIST "inbox/"
yesterday = DATEADD(NOW(), -1, "day")
recent = FILTER files WHERE modified > yesterday
TALK "You have " + LEN(recent) + " new files since yesterday"
```
### Calculate Folder Size
```basic
' Sum up total size of files in folder
files = LIST "backups/" RECURSIVE
total_size = 0
FOR EACH file IN files
total_size = total_size + file.size
NEXT
size_mb = total_size / 1048576
TALK "Backup folder size: " + FORMAT(size_mb, "#,##0.00") + " MB"
```
### Process All Files of Type
```basic
' Process all CSV files in a folder
csv_files = LIST "imports/" FILTER "*.csv"
FOR EACH csv_file IN csv_files
data = READ "imports/" + csv_file.name AS TABLE
' Process each file...
MOVE "imports/" + csv_file.name TO "processed/" + csv_file.name
NEXT
TALK "Processed " + LEN(csv_files) + " CSV files"
```
---
## Return Value
Returns an array of file objects. Each object contains:
| Property | Type | Description |
|----------|------|-------------|
| `name` | String | File or directory name |
| `path` | String | Full path relative to storage root |
| `size` | Number | File size in bytes (0 for directories) |
| `type` | String | `file` or `directory` |
| `mime_type` | String | MIME type (e.g., `application/pdf`) |
| `modified` | DateTime | Last modification timestamp |
| `created` | DateTime | Creation timestamp |
### Example Result
```basic
files = LIST "documents/"
' files[0] might be:
' {
' name: "report.pdf",
' path: "documents/report.pdf",
' size: 245678,
' type: "file",
' mime_type: "application/pdf",
' modified: "2025-01-15T10:30:00Z",
' created: "2025-01-10T09:00:00Z"
' }
```
---
## Filter Patterns
| Pattern | Matches |
|---------|---------|
| `*` | All files |
| `*.pdf` | All PDF files |
| `*.csv` | All CSV files |
| `report*` | Files starting with "report" |
| `*2025*` | Files containing "2025" |
| `*.jpg,*.png` | Multiple extensions |
```basic
' Multiple extensions
images = LIST "photos/" FILTER "*.jpg,*.png,*.gif"
' Wildcard in name
reports = LIST "exports/" FILTER "sales-*"
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
files = LIST "nonexistent-folder/"
IF ERROR THEN
PRINT "List failed: " + ERROR_MESSAGE
TALK "That folder doesn't exist."
ELSE IF LEN(files) = 0 THEN
TALK "The folder is empty."
ELSE
TALK "Found " + LEN(files) + " items"
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `PATH_NOT_FOUND` | Directory doesn't exist | Check path spelling |
| `NOT_A_DIRECTORY` | Path is a file, not folder | Add trailing `/` |
| `PERMISSION_DENIED` | Access blocked | Check permissions |
---
## Behavior Notes
- **Trailing slash required**: Paths must end with `/` to indicate directory
- **Excludes hidden files**: Files starting with `.` are excluded by default
- **Sorted alphabetically**: Results are sorted by name
- **Non-recursive by default**: Only lists immediate contents unless `RECURSIVE` specified
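
These notes can be condensed into a short sketch (paths are illustrative):

```basic
' Trailing slash marks a directory; "documents" without it raises NOT_A_DIRECTORY
files = LIST "documents/"
' Hidden files (names starting with ".") are excluded and results are sorted by name
FOR EACH f IN files
PRINT f.name
NEXT
```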
---
## Configuration
No specific configuration required. Uses bot's standard drive settings from `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
```
---
## Related Keywords
- [READ](keyword-read.md) — Read file contents
- [WRITE](keyword-write.md) — Write file contents
- [COPY](keyword-copy.md) — Copy files
- [MOVE](keyword-move.md) — Move or rename files
- [DELETE](keyword-delete.md) — Remove files
- [UPLOAD](keyword-upload.md) — Upload files to storage
---
## Summary
`LIST` retrieves directory contents from storage, returning detailed metadata about each file and subdirectory. Use it to browse files, find matching documents, check existence, calculate sizes, and process batches of files. Filter patterns and recursive options help narrow results to exactly what you need.

# MERGE PDF
The `MERGE PDF` keyword combines multiple PDF files into a single document, enabling bots to consolidate reports, compile documents, and create comprehensive file packages.
---
## Syntax
```basic
result = MERGE PDF files, "output.pdf"
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `files` | Array/String | Array of PDF file paths or single path |
| `output` | String | Output filename for the merged PDF |
---
## Description
`MERGE PDF` takes multiple PDF files and combines them into a single document in the order specified. This is useful for creating comprehensive reports, combining related documents, or building document packages for clients.
Use cases include:
- Combining invoice and receipt PDFs
- Merging report sections into complete reports
- Creating document packages for clients
- Consolidating scanned documents
- Building compliance document bundles
---
## Examples
### Basic PDF Merge
```basic
' Merge two PDF files
files = ["report-part1.pdf", "report-part2.pdf"]
result = MERGE PDF files, "complete-report.pdf"
TALK "Report merged: " + result.localName
```
### Merge Multiple Documents
```basic
' Merge multiple documents into one package
documents = [
"contracts/agreement.pdf",
"documents/terms.pdf",
"documents/privacy-policy.pdf",
"documents/appendix-a.pdf"
]
result = MERGE PDF documents, "client-package-" + client_id + ".pdf"
TALK "Document package created!"
DOWNLOAD result.url AS "Complete Package.pdf"
```
### Dynamic Document Collection
```basic
' Find and merge all invoices for a month
invoice_files = []
invoices = FIND "invoices" WHERE month = current_month
FOR EACH inv IN invoices
invoice_files = invoice_files + ["invoices/" + inv.filename]
END FOR
result = MERGE PDF invoice_files, "monthly-invoices-" + FORMAT(NOW(), "YYYYMM") + ".pdf"
TALK "Merged " + LEN(invoice_files) + " invoices into one document"
```
### Merge with Generated PDFs
```basic
' Generate PDFs first, then merge them
cover = GENERATE PDF "templates/cover.html", cover_data, "temp/cover.pdf"
body = GENERATE PDF "templates/report.html", report_data, "temp/body.pdf"
appendix = GENERATE PDF "templates/appendix.html", appendix_data, "temp/appendix.pdf"
files = [cover.localName, body.localName, appendix.localName]
result = MERGE PDF files, "reports/full-report-" + report_id + ".pdf"
TALK "Complete report generated with " + LEN(files) + " sections"
```
### Merge and Email
```basic
' Create document package and email to client
documents = [
"proposals/proposal-" + deal_id + ".pdf",
"documents/service-agreement.pdf",
"documents/pricing-schedule.pdf"
]
result = MERGE PDF documents, "packages/" + client_name + "-proposal.pdf"
SEND MAIL client_email,
"Your Proposal Package",
"Please find attached your complete proposal package.",
[result.localName]
TALK "Proposal package sent to " + client_email
```
---
## Return Value
Returns an object with merge details:
| Property | Description |
|----------|-------------|
| `result.url` | Full URL to the merged PDF (S3/MinIO path) |
| `result.localName` | Local filename of the merged PDF |
---
## Common Use Cases
### Monthly Report Compilation
```basic
' Compile all weekly reports into monthly report
weekly_reports = [
"reports/week1.pdf",
"reports/week2.pdf",
"reports/week3.pdf",
"reports/week4.pdf"
]
' Generate cover page
cover = GENERATE PDF "templates/monthly-cover.html", #{
"month": FORMAT(NOW(), "MMMM YYYY"),
"generated": FORMAT(NOW(), "YYYY-MM-DD")
}, "temp/cover.pdf"
' Merge cover with weekly reports
all_files = [cover.localName] + weekly_reports
result = MERGE PDF all_files, "reports/monthly-" + FORMAT(NOW(), "YYYYMM") + ".pdf"
TALK "Monthly report compiled!"
```
### Client Onboarding Package
```basic
' Create onboarding document package for new client
package_files = [
"templates/welcome-letter.pdf",
"contracts/service-agreement-" + contract_id + ".pdf",
"documents/user-guide.pdf",
"documents/faq.pdf",
"documents/support-contacts.pdf"
]
result = MERGE PDF package_files, "onboarding/" + client_id + "-welcome-package.pdf"
SEND MAIL client_email,
"Welcome to Our Service!",
"Please find your complete onboarding package attached.",
[result.localName]
TALK "Onboarding package sent to " + client_name
```
### Compliance Document Bundle
```basic
' Bundle all compliance documents for audit
compliance_docs = FIND "compliance_documents" WHERE year = audit_year
file_list = []
FOR EACH doc IN compliance_docs
file_list = file_list + [doc.file_path]
END FOR
' Add table of contents
toc = GENERATE PDF "templates/compliance-toc.html", #{
"documents": compliance_docs,
"audit_year": audit_year
}, "temp/toc.pdf"
all_files = [toc.localName] + file_list
result = MERGE PDF all_files, "audits/compliance-bundle-" + audit_year + ".pdf"
TALK "Compliance bundle ready with " + LEN(compliance_docs) + " documents"
```
### Invoice Bundle for Accounting
```basic
' Create quarterly invoice bundle
quarter_start = DATEADD(NOW(), -3, "month")
invoices = FIND "generated_invoices" WHERE created_at >= quarter_start
invoice_files = []
FOR EACH inv IN invoices
invoice_files = invoice_files + ["invoices/" + inv.pdf_filename]
END FOR
IF LEN(invoice_files) > 0 THEN
result = MERGE PDF invoice_files, "accounting/Q" + quarter + "-invoices.pdf"
TALK "Bundled " + LEN(invoice_files) + " invoices for Q" + quarter
ELSE
TALK "No invoices found for this quarter"
END IF
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
files = ["doc1.pdf", "doc2.pdf", "doc3.pdf"]
result = MERGE PDF files, "merged.pdf"
IF ERROR THEN
error_msg = ERROR_MESSAGE
IF INSTR(error_msg, "not found") > 0 THEN
TALK "One or more PDF files could not be found."
ELSE IF INSTR(error_msg, "invalid") > 0 THEN
TALK "One of the files is not a valid PDF."
ELSE IF INSTR(error_msg, "storage") > 0 THEN
TALK "Not enough storage space for the merged file."
ELSE
TALK "Merge failed: " + error_msg
END IF
ELSE
TALK "PDFs merged successfully!"
END IF
```
### Validating Files Before Merge
```basic
' Check files exist before attempting merge
' (LIST takes a directory path, so list the folder once and match names)
files_to_merge = ["report1.pdf", "report2.pdf", "report3.pdf"]
existing = LIST "reports/"
valid_files = []
FOR EACH f IN files_to_merge
found = false
FOR EACH item IN existing
IF item.name = f THEN
found = true
EXIT FOR
END IF
END FOR
IF found THEN
valid_files = valid_files + ["reports/" + f]
ELSE
PRINT "Warning: " + f + " not found, skipping"
END IF
END FOR
IF LEN(valid_files) > 0 THEN
result = MERGE PDF valid_files, "merged-output.pdf"
TALK "Merged " + LEN(valid_files) + " of " + LEN(files_to_merge) + " files"
ELSE
TALK "No valid PDF files found to merge"
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `FILE_NOT_FOUND` | Source PDF doesn't exist | Verify file paths |
| `INVALID_PDF` | File is not a valid PDF | Check file format |
| `EMPTY_INPUT` | No files provided | Ensure array has files |
| `STORAGE_FULL` | Insufficient disk space | Clean up storage |
| `PERMISSION_DENIED` | Cannot read source file | Check file permissions |
---
## Best Practices
### File Organization
```basic
' Organize files in logical order before merge
sections = [
"01-cover.pdf",
"02-executive-summary.pdf",
"03-introduction.pdf",
"04-analysis.pdf",
"05-recommendations.pdf",
"06-appendices.pdf"
]
result = MERGE PDF sections, "final-report.pdf"
```
### Temporary File Cleanup
```basic
' Clean up temporary files after merge
temp_files = []
' Generate temporary PDFs
FOR i = 1 TO 5
temp_file = "temp/section-" + i + ".pdf"
GENERATE PDF "templates/section.html", section_data[i], temp_file
temp_files = temp_files + [temp_file]
END FOR
' Merge all sections
result = MERGE PDF temp_files, "final-document.pdf"
' Clean up temp files
FOR EACH tf IN temp_files
DELETE tf
END FOR
TALK "Document created and temp files cleaned up"
```
### Large Document Sets
```basic
' For very large document sets, batch if needed
all_files = get_all_pdf_files() ' Assume this returns many files
IF LEN(all_files) > 100 THEN
' Process in batches
batch_size = 50
batch_outputs = []
FOR batch_num = 0 TO (LEN(all_files) - 1) / batch_size
start_idx = batch_num * batch_size
batch_files = SLICE(all_files, start_idx, start_idx + batch_size)
batch_output = "temp/batch-" + batch_num + ".pdf"
MERGE PDF batch_files, batch_output
batch_outputs = batch_outputs + [batch_output]
END FOR
' Final merge of batches
result = MERGE PDF batch_outputs, "complete-archive.pdf"
ELSE
result = MERGE PDF all_files, "complete-archive.pdf"
END IF
```
---
## Configuration
No specific configuration required. Uses the bot's standard drive storage settings from `config.csv`.
Output files are stored in the bot's `.gbdrive` storage location.
---
## Implementation Notes
- Implemented in Rust under `src/basic/keywords/file_operations.rs`
- Maintains PDF metadata and bookmarks where possible
- Preserves page sizes and orientations
- Handles password-protected PDFs (if password provided)
- Maximum combined size: 500 MB
- Processing timeout: 120 seconds
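
Given the 500 MB combined-size limit above, input size can be checked before merging; a hedged sketch (assuming the source PDFs sit in a single `reports/` folder):

```basic
' Sketch: verify combined input size stays under the 500 MB merge limit
files = LIST "reports/" FILTER "*.pdf"
total_bytes = 0
FOR EACH f IN files
total_bytes = total_bytes + f.size
NEXT
total_mb = total_bytes / 1048576
IF total_mb > 500 THEN
TALK "Combined size is " + FORMAT(total_mb, "#,##0") + " MB, over the merge limit"
ELSE
TALK "Size check passed: " + FORMAT(total_mb, "#,##0") + " MB"
END IF
```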
---
## Related Keywords
- [GENERATE PDF](keyword-generate-pdf.md) — Create PDFs from templates
- [READ](keyword-read.md) — Read file contents
- [DOWNLOAD](keyword-download.md) — Send files to users
- [COPY](keyword-copy.md) — Copy files
- [DELETE](keyword-delete.md) — Remove files
- [LIST](keyword-list.md) — List files in directory
---
## Summary
`MERGE PDF` combines multiple PDF files into a single document, making it easy to create comprehensive document packages, compile reports, and bundle related files. Use it with `GENERATE PDF` to create multi-section reports or with existing files to build client packages. The keyword handles the complexity of PDF merging while providing a simple array-based interface.

# MOVE
The `MOVE` keyword relocates or renames files within the bot's drive storage.
---
## Syntax
```basic
MOVE "source" TO "destination"
result = MOVE "source" TO "destination"
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `source` | String | Current path of the file |
| `destination` | String | New path for the file |
---
## Description
`MOVE` transfers a file from one location to another within the bot's storage. The original file is removed after the move completes. This keyword can also be used to rename files by moving them to a new name in the same directory.
Use cases include:
- Organizing files into folders
- Renaming files
- Archiving processed files
- Moving uploads to permanent storage
---
## Examples
### Basic File Move
```basic
' Move a file to a different folder
MOVE "inbox/document.pdf" TO "processed/document.pdf"
TALK "File moved to processed folder"
```
### Rename a File
```basic
' Rename by moving to same directory with new name
MOVE "reports/report.pdf" TO "reports/sales-report-2025.pdf"
TALK "File renamed successfully"
```
### Move After Processing
```basic
' Process file then move to archive
content = READ "incoming/data.csv"
' ... process the data ...
MOVE "incoming/data.csv" TO "archive/data-" + FORMAT(NOW(), "YYYYMMDD") + ".csv"
TALK "Data processed and archived"
```
### Organize User Uploads
```basic
' Move uploaded file to user's folder
HEAR uploaded_file
temp_path = UPLOAD uploaded_file TO "temp"
permanent_path = "users/" + user.id + "/documents/" + uploaded_file.name
MOVE temp_path.path TO permanent_path
TALK "File saved to your documents"
```
### Move with Category
```basic
' Organize files by type
file_type = GET_FILE_TYPE(filename)
SWITCH file_type
CASE "pdf"
MOVE "uploads/" + filename TO "documents/" + filename
CASE "jpg", "png"
MOVE "uploads/" + filename TO "images/" + filename
CASE "csv", "xlsx"
MOVE "uploads/" + filename TO "data/" + filename
CASE ELSE
MOVE "uploads/" + filename TO "other/" + filename
END SWITCH
TALK "File organized into " + file_type + " folder"
```
### Batch Move
```basic
' Move all files from one folder to another
files = LIST "temp/"
FOR EACH file IN files
MOVE "temp/" + file.name TO "permanent/" + file.name
NEXT
TALK "Moved " + LEN(files) + " files"
```
---
## Return Value
Returns an object with move details:
| Property | Description |
|----------|-------------|
| `result.source` | Original file path |
| `result.destination` | New file path |
| `result.size` | File size in bytes |
| `result.moved_at` | Timestamp of move operation |
---
## Error Handling
```basic
ON ERROR RESUME NEXT
MOVE "documents/report.pdf" TO "archive/report.pdf"
IF ERROR THEN
PRINT "Move failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't move that file."
ELSE
TALK "File moved successfully!"
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `FILE_NOT_FOUND` | Source doesn't exist | Verify source path |
| `PERMISSION_DENIED` | Access blocked | Check permissions |
| `DESTINATION_EXISTS` | Target file exists | Delete target first or use different name |
| `SAME_PATH` | Source equals destination | Use different destination |
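For `DESTINATION_EXISTS`, one recovery pattern is to remove the stale target before retrying. A sketch using the unified `DELETE` keyword, which auto-detects file paths:

```basic
ON ERROR RESUME NEXT
DELETE "archive/report.pdf"    ' Remove stale copy if present
CLEAR ERROR
MOVE "reports/report.pdf" TO "archive/report.pdf"
IF ERROR THEN
    TALK "Move still failed: " + ERROR_MESSAGE
END IF
```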
---
## Move vs Copy
| Operation | Source After | Use When |
|-----------|--------------|----------|
| `MOVE` | Deleted | Relocating or renaming |
| `COPY` | Preserved | Creating duplicates |
```basic
' MOVE: Original is gone
MOVE "a/file.txt" TO "b/file.txt"
' Only exists at b/file.txt now
' COPY: Original remains
COPY "a/file.txt" TO "b/file.txt"
' Exists at both locations
```
---
## Behavior Notes
- **Atomic operation**: Move completes fully or not at all
- **Creates directories**: Parent folders created automatically
- **Overwrites by default**: Destination replaced if it exists (deployments that disallow overwriting raise `DESTINATION_EXISTS` instead)
- **Cross-folder**: Can move between any directories in storage
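Because the destination is replaced by default, a cautious script can check for an existing file before moving. A sketch using `LIST`:

```basic
' Guard against silently replacing an existing archive copy
files = LIST "archive/"
exists = false
FOR EACH file IN files
    IF file.name = "report.pdf" THEN
        exists = true
        EXIT FOR
    END IF
NEXT
IF exists THEN
    TALK "A file named report.pdf is already in the archive."
ELSE
    MOVE "reports/report.pdf" TO "archive/report.pdf"
END IF
```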
---
## Configuration
No specific configuration required. Uses bot's standard drive settings from `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
```
---
## Related Keywords
- [COPY](keyword-copy.md) — Duplicate files
- [DELETE](keyword-delete.md) — Remove files (unified)
- [READ](keyword-read.md) — Read file contents
- [WRITE](keyword-write.md) — Write file contents
- [LIST](keyword-list.md) — List directory contents
- [UPLOAD](keyword-upload.md) — Upload files to storage
---
## Summary
`MOVE` relocates or renames files within storage. The original file is removed after the move. Use it to organize files, rename documents, archive processed data, and manage user uploads. Destination directories are created automatically.

# PATCH
The `PATCH` keyword sends HTTP PATCH requests to external APIs, used for partial updates to existing resources.
---
## Syntax
```basic
result = PATCH url, data
PATCH url WITH field1 = value1, field2 = value2
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `url` | String | The target URL endpoint |
| `data` | String | JSON string for request body |
| `WITH` | Clause | Field-value pairs for the request body |
---
## Description
`PATCH` sends partial data to a specified URL using the HTTP PATCH method. In REST APIs, PATCH is used to:
- Update specific fields without affecting others
- Make incremental changes to resources
- Modify only what has changed
Unlike `PUT` which replaces the entire resource, `PATCH` only updates the fields you specify.
---
## Examples
### Basic PATCH Request
```basic
' Update only the user's email
result = PATCH "https://api.example.com/users/123" WITH
email = "new.email@example.com"
IF result.success THEN
TALK "Email updated successfully!"
ELSE
TALK "Update failed: " + result.error
END IF
```
### Update Status Only
```basic
' Change order status without modifying other fields
PATCH "https://api.orders.com/orders/" + order_id WITH
status = "shipped"
TALK "Order status updated to shipped"
```
### Update Multiple Fields
```basic
' Update several fields at once
result = PATCH "https://api.example.com/products/SKU-001" WITH
price = 39.99,
stock = 150,
on_sale = true
TALK "Product updated: price, stock, and sale status"
```
### With Authentication
```basic
' Set authorization header first
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "Content-Type", "application/json"
' Make authenticated PATCH request
result = PATCH "https://api.service.com/resources/456" WITH
title = "Updated Title"
' Clear headers after request
SET HEADER "Authorization", ""
```
### Using JSON String
```basic
' PATCH with JSON string body
json_body = '{"status": "archived", "archived_at": "2025-01-15T10:00:00Z"}'
result = PATCH "https://api.example.com/documents/789", json_body
TALK "Document archived!"
```
---
## PATCH vs PUT
| Aspect | PATCH | PUT |
|--------|-------|-----|
| **Purpose** | Update specific fields | Replace entire resource |
| **Body Contains** | Only changed fields | All resource fields |
| **Missing Fields** | Unchanged | May be set to null |
| **Use When** | Changing 1-2 fields | Replacing whole object |
```basic
' PATCH - Only update what changed
result = PATCH "https://api.example.com/users/123" WITH
phone = "+1-555-0200"
' Only phone is updated, name/email/etc unchanged
' PUT - Must include all fields
result = PUT "https://api.example.com/users/123" WITH
name = "John Doe",
email = "john@example.com",
phone = "+1-555-0200",
status = "active"
' All fields required, replaces entire user
```
---
## Common Use Cases
### Toggle Feature Flag
```basic
' Enable a single feature
PATCH "https://api.example.com/users/" + user.id + "/settings" WITH
dark_mode = true
TALK "Dark mode enabled!"
```
### Update Profile Field
```basic
' User wants to change their display name
TALK "What would you like your new display name to be?"
HEAR new_name
result = PATCH "https://api.example.com/users/" + user.id WITH
display_name = new_name
TALK "Your display name is now: " + new_name
```
### Mark as Read
```basic
' Mark notification as read
PATCH "https://api.example.com/notifications/" + notification_id WITH
read = true,
read_at = FORMAT(NOW(), "ISO8601")
TALK "Notification marked as read"
```
### Update Progress
```basic
' Update task completion percentage
PATCH "https://api.tasks.com/tasks/" + task_id WITH
progress = 75,
last_updated = FORMAT(NOW(), "ISO8601")
TALK "Task progress updated to 75%"
```
### Increment Counter
```basic
' Update view count (if API supports increment)
result = PATCH "https://api.content.com/articles/" + article_id WITH
views = current_views + 1
' Or if API has increment syntax
PATCH "https://api.content.com/articles/" + article_id WITH
increment_views = 1
```
### Soft Delete
```basic
' Mark record as deleted without removing it
PATCH "https://api.example.com/records/" + record_id WITH
deleted = true,
deleted_at = FORMAT(NOW(), "ISO8601"),
deleted_by = user.id
TALK "Record archived (can be restored if needed)"
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = PATCH "https://api.example.com/resource/123" WITH
status = "updated"
IF ERROR THEN
PRINT "PATCH request failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't update that information."
ELSE IF result.error THEN
TALK "Update failed: " + result.error.message
ELSE
TALK "Update successful!"
END IF
```
### Common HTTP Status Codes
| Status | Meaning | Action |
|--------|---------|--------|
| 200 | Success, updated resource returned | Process response |
| 204 | Success, no content returned | Update complete |
| 400 | Bad request | Check field names/values |
| 401 | Unauthorized | Check authentication |
| 404 | Resource not found | Verify URL/ID |
| 409 | Conflict | Resource was modified by another client |
| 422 | Validation error | Check field constraints |
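Assuming the response object exposes a `status` property, these codes can be branched on directly:

```basic
ON ERROR RESUME NEXT
result = PATCH "https://api.example.com/resource/123" WITH
    status = "archived"
IF result.status = 404 THEN
    TALK "That resource no longer exists."
ELSE IF result.status = 422 THEN
    TALK "The update was rejected: check the field values."
ELSE IF result.status = 409 THEN
    TALK "Someone else modified this resource. Please retry."
ELSE
    TALK "Update applied."
END IF
```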
---
## Best Practices
1. **Update only changed fields** — Don't include unchanged data
2. **Check response** — Verify the update was applied correctly
3. **Handle conflicts** — Be prepared for concurrent modification errors
4. **Use optimistic locking** — Include version/etag if API supports it
```basic
' With version checking (if API supports it)
SET HEADER "If-Match", current_etag
result = PATCH "https://api.example.com/resource/123" WITH
field = new_value
IF result.status = 409 THEN
TALK "Someone else modified this. Please refresh and try again."
END IF
```
---
## Configuration
Configure HTTP settings in `config.csv`:
```csv
name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000
```
---
## Implementation Notes
- Implemented in Rust under `src/web_automation/http.rs`
- Automatically serializes WITH clause to JSON
- Supports custom headers via SET HEADER
- Returns parsed JSON response
- Content-Type defaults to `application/json`
---
## Related Keywords
- [GET](keyword-get.md) — Retrieve data from URLs
- [POST](keyword-post.md) — Create new resources
- [PUT](keyword-put.md) — Replace entire resources
- [DELETE](keyword-delete.md) — Remove resources (unified)
- [SET HEADER](keyword-set-header.md) — Set request headers
---
## Summary
`PATCH` updates specific fields of a resource via HTTP PATCH requests. Use it when you only need to change one or a few fields without affecting the rest of the resource. This is more efficient than PUT and reduces the risk of accidentally overwriting data. Always specify only the fields that need to change.

# POST
The `POST` keyword sends HTTP POST requests to external APIs and web services, enabling bots to create resources, submit data, and integrate with third-party systems.
---
## Syntax
```basic
result = POST url, data
result = POST url, data, content_type
POST url, param1, param2, param3, ...
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `url` | String | The target URL endpoint |
| `data` | String/Object | Request body (JSON string or object) |
| `content_type` | String | Optional content type (default: `application/json`) |
| `param1, param2, ...` | Any | Positional parameters for form-style requests |
---
## Description
`POST` sends data to a specified URL using the HTTP POST method. This is the primary keyword for:
- Creating new resources in REST APIs
- Submitting form data
- Triggering webhooks
- Sending notifications to external services
- Integrating with third-party platforms
The response is returned as a parsed JSON object when possible, or as a string for other content types.
---
## Examples
### Basic JSON POST
```basic
' Create a new user via API
data = '{"name": "John Doe", "email": "john@example.com"}'
result = POST "https://api.example.com/users", data
TALK "User created with ID: " + result.id
```
### Using WITH Syntax
```basic
' Create order using WITH keyword
result = POST "https://api.store.com/orders" WITH
customer_id = "cust-123",
items = ["item-1", "item-2"],
total = 99.99
TALK "Order " + result.order_id + " placed successfully!"
```
### Form-Style Parameters
```basic
' Submit with positional parameters
POST "https://warehouse.internal/api/orders", order_id, items, shipping_address, "express"
```
### With Custom Headers
```basic
' Set authorization header first
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "X-Request-ID", request_id
result = POST "https://api.service.com/data", payload
' Clear headers after request
SET HEADER "Authorization", ""
```
### Webhook Integration
```basic
' Send Slack notification
POST "https://hooks.slack.com/services/xxx/yyy/zzz" WITH
channel = "#alerts",
text = "New order received: " + order_id,
username = "Order Bot"
```
### Creating Records
```basic
' Create a support ticket
result = POST "https://helpdesk.example.com/api/tickets" WITH
title = "Customer inquiry",
description = user_message,
priority = "medium",
customer_email = customer.email
IF result.id THEN
TALK "Ticket #" + result.id + " created. Our team will respond within 24 hours."
ELSE
TALK "Sorry, I couldn't create the ticket. Please try again."
END IF
```
---
## Handling Responses
### Check Response Status
```basic
result = POST "https://api.example.com/resource", data
IF result.error THEN
TALK "Error: " + result.error.message
ELSE IF result.id THEN
TALK "Success! Created resource: " + result.id
END IF
```
### Parse Nested Response
```basic
result = POST "https://api.payment.com/charge", payment_data
IF result.status = "succeeded" THEN
TALK "Payment of $" + result.amount + " processed!"
TALK "Transaction ID: " + result.transaction_id
ELSE
TALK "Payment failed: " + result.failure_reason
END IF
```
---
## Common Use Cases
### Send Email via API
```basic
POST "https://api.mailservice.com/send" WITH
to = customer_email,
subject = "Order Confirmation",
body = "Thank you for your order #" + order_id
```
### Create Calendar Event
```basic
result = POST "https://calendar.api.com/events" WITH
title = "Meeting with " + contact_name,
start = meeting_time,
duration = 60,
attendees = [contact_email]
TALK "Meeting scheduled! Calendar invite sent."
```
### Log Analytics Event
```basic
' Track user action
POST "https://analytics.example.com/track" WITH
event = "purchase_completed",
user_id = user.id,
order_value = total,
items_count = LEN(cart)
```
### CRM Integration
```basic
' Create lead in CRM
result = POST "https://crm.example.com/api/leads" WITH
first_name = first_name,
last_name = last_name,
email = email,
phone = phone,
source = "chatbot",
notes = "Initial inquiry: " + user_query
SET USER MEMORY "crm_lead_id", result.id
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = POST "https://api.example.com/resource", data
IF ERROR THEN
PRINT "POST failed: " + ERROR_MESSAGE
' Try backup endpoint
result = POST "https://backup-api.example.com/resource", data
END IF
IF result.error THEN
TALK "The service returned an error. Please try again later."
ELSE
TALK "Request successful!"
END IF
```
---
## Content Types
| Content Type | Use Case |
|--------------|----------|
| `application/json` | Default, most REST APIs |
| `application/x-www-form-urlencoded` | HTML form submissions |
| `multipart/form-data` | File uploads (use UPLOAD instead) |
| `text/xml` | SOAP services (use SOAP instead) |
```basic
' Explicit content type
result = POST "https://legacy.api.com/submit", form_data, "application/x-www-form-urlencoded"
```
---
## Configuration
### Timeouts
Configure request timeout in `config.csv`:
```csv
name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000
```
### Base URL
Set a base URL for all HTTP requests:
```csv
name,value
http-base-url,https://api.mycompany.com
```
Then use relative paths:
```basic
result = POST "/users", user_data ' Resolves to https://api.mycompany.com/users
```
---
## Implementation Notes
- Implemented in Rust under `src/web_automation/http.rs`
- Uses `reqwest` library with async runtime
- Automatically serializes objects to JSON
- Handles redirects (up to 10 hops)
- Validates SSL certificates by default
- Supports gzip/deflate response compression
---
## Related Keywords
- [GET](keyword-get.md) — Retrieve data from URLs
- [PUT](keyword-put.md) — Update existing resources
- [PATCH](keyword-patch.md) — Partial resource updates
- [DELETE](keyword-delete.md) — Remove resources (unified)
- [SET HEADER](keyword-set-header.md) — Set request headers
- [GRAPHQL](keyword-graphql.md) — GraphQL queries and mutations
---
## Summary
`POST` is essential for integrating bots with external services. Use it to create resources, submit data, trigger webhooks, and connect to any REST API. Combined with `SET HEADER` for authentication, it enables powerful integrations with CRMs, payment systems, notification services, and more.

# PUT
The `PUT` keyword sends HTTP PUT requests to external APIs, used for replacing or updating entire resources.
---
## Syntax
```basic
result = PUT url, data
PUT url WITH field1 = value1, field2 = value2
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `url` | String | The target URL endpoint |
| `data` | String | JSON string for request body |
| `WITH` | Clause | Field-value pairs for the request body |
---
## Description
`PUT` sends data to a specified URL using the HTTP PUT method. In REST APIs, PUT is used to:
- Replace an entire resource with new data
- Create a resource at a specific URL if it doesn't exist
- Update all fields of an existing resource
Unlike `PATCH` which updates partial data, `PUT` typically replaces the entire resource.
---
## Examples
### Basic PUT Request
```basic
' Update entire user profile
result = PUT "https://api.example.com/users/123" WITH
name = "John Doe",
email = "john.doe@example.com",
phone = "+1-555-0100",
status = "active"
IF result.success THEN
TALK "Profile updated successfully!"
ELSE
TALK "Update failed: " + result.error
END IF
```
### Replace Configuration
```basic
' Replace entire configuration object
result = PUT "https://api.example.com/config/bot-settings" WITH
theme = "dark",
language = "en",
notifications = true,
auto_reply = false
TALK "Configuration saved"
```
### Update Product
```basic
' Replace product details
result = PUT "https://api.store.com/products/SKU-001" WITH
name = "Premium Widget",
price = 49.99,
stock = 100,
category = "electronics",
description = "High-quality widget with premium features"
TALK "Product updated: " + result.name
```
### With Authentication
```basic
' Set authorization header first
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "Content-Type", "application/json"
' Make authenticated PUT request
result = PUT "https://api.service.com/resources/456" WITH
title = "Updated Title",
content = new_content,
updated_by = user.id
' Clear headers after request
SET HEADER "Authorization", ""
```
### Using JSON String
```basic
' PUT with JSON string body
json_body = '{"name": "Updated Name", "status": "published"}'
result = PUT "https://api.example.com/articles/789", json_body
TALK "Article updated!"
```
---
## PUT vs PATCH vs POST
| Method | Purpose | Body Contains |
|--------|---------|---------------|
| `POST` | Create new resource | New resource data |
| `PUT` | Replace entire resource | Complete resource data |
| `PATCH` | Update partial resource | Only changed fields |
```basic
' POST - Create new
result = POST "https://api.example.com/users" WITH
name = "New User",
email = "new@example.com"
' Creates user, returns new ID
' PUT - Replace entire resource
result = PUT "https://api.example.com/users/123" WITH
name = "Updated Name",
email = "updated@example.com",
phone = "+1-555-0100"
' All fields required, replaces entire user
' PATCH - Update specific fields
result = PATCH "https://api.example.com/users/123" WITH
phone = "+1-555-0200"
' Only phone is updated, other fields unchanged
```
---
## Common Use Cases
### Update User Settings
```basic
' Save all user preferences
result = PUT "https://api.example.com/users/" + user.id + "/settings" WITH
email_notifications = true,
sms_notifications = false,
timezone = "America/New_York",
language = "en"
TALK "Your settings have been saved!"
```
### Replace Document
```basic
' Upload new version of document (replaces existing)
document_content = READ "templates/contract.md"
result = PUT "https://api.docs.com/documents/" + doc_id WITH
title = "Service Agreement v2.0",
content = document_content,
version = "2.0",
last_modified = FORMAT(NOW(), "ISO8601")
TALK "Document replaced with new version"
```
### Update Order Status
```basic
' Replace order with updated status
result = PUT "https://api.orders.com/orders/" + order_id WITH
customer_id = order.customer_id,
items = order.items,
total = order.total,
status = "shipped",
tracking_number = tracking_id,
shipped_at = FORMAT(NOW(), "ISO8601")
TALK "Order marked as shipped!"
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = PUT "https://api.example.com/resource/123" WITH
field1 = value1,
field2 = value2
IF ERROR THEN
PRINT "PUT request failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't update that information."
ELSE IF result.error THEN
TALK "Update failed: " + result.error.message
ELSE
TALK "Update successful!"
END IF
```
### Common HTTP Status Codes
| Status | Meaning | Action |
|--------|---------|--------|
| 200 | Success, resource updated | Process response |
| 201 | Created (resource didn't exist) | New resource created |
| 204 | Success, no content returned | Update complete |
| 400 | Bad request | Check request data |
| 401 | Unauthorized | Check authentication |
| 404 | Resource not found | Verify URL/ID |
| 409 | Conflict | Resource was modified |
| 422 | Validation error | Check field values |
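Because PUT can create a resource at a known URL (201) or update an existing one (200/204), a sketch distinguishing the two, assuming a `status` property on the response:

```basic
result = PUT "https://api.example.com/widgets/widget-42" WITH
    name = "Widget 42",
    stock = 10
IF result.status = 201 THEN
    TALK "Widget created."
ELSE IF result.status = 200 OR result.status = 204 THEN
    TALK "Widget updated."
ELSE
    TALK "Unexpected response: " + result.status
END IF
```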
---
## Configuration
Configure HTTP settings in `config.csv`:
```csv
name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000
```
---
## Implementation Notes
- Implemented in Rust under `src/web_automation/http.rs`
- Automatically serializes WITH clause to JSON
- Supports custom headers via SET HEADER
- Returns parsed JSON response
- Handles redirects (up to 10 hops)
---
## Related Keywords
- [GET](keyword-get.md) — Retrieve data from URLs
- [POST](keyword-post.md) — Create new resources
- [PATCH](keyword-patch.md) — Partial resource updates
- [DELETE](keyword-delete.md) — Remove resources (unified)
- [SET HEADER](keyword-set-header.md) — Set request headers
---
## Summary
`PUT` replaces entire resources via HTTP PUT requests. Use it when you need to update all fields of a resource or create a resource at a specific URL. For partial updates where you only change specific fields, use `PATCH` instead. Always include all required fields when using PUT, as missing fields may be set to null or cause errors.

# READ
The `READ` keyword loads content from files stored in the bot's drive storage, enabling bots to access documents, data files, and other stored resources.
---
## Syntax
```basic
content = READ "filename"
content = READ "path/to/filename"
data = READ "filename.csv" AS TABLE
lines = READ "filename.txt" AS LINES
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `filename` | String | Path to the file in the bot's storage |
| `AS TABLE` | Flag | Parse CSV/Excel files as structured data |
| `AS LINES` | Flag | Return content as array of lines |
---
## Description
`READ` retrieves file content from the bot's configured storage (drive bucket). It supports:
- Text files (`.txt`, `.md`, `.json`, `.xml`, `.csv`)
- Documents (`.pdf`, `.docx`) — automatically extracts text
- Spreadsheets (`.xlsx`, `.csv`) — can parse as structured data
- Binary files — returned as base64 encoded string
The file path is relative to the bot's storage root. Use forward slashes for subdirectories.
---
## Examples
### Basic File Read
```basic
' Read a text file
content = READ "welcome-message.txt"
TALK content
```
### Read from Subdirectory
```basic
' Read file from nested folder
template = READ "templates/email/welcome.html"
```
### Read JSON Data
```basic
' Read and parse JSON configuration
config_text = READ "config.json"
config = JSON_PARSE(config_text)
TALK "Current theme: " + config.theme
```
### Read CSV as Table
```basic
' Load CSV data as structured table
products = READ "inventory/products.csv" AS TABLE
FOR EACH product IN products
TALK product.name + ": $" + product.price
NEXT
```
### Read as Lines
```basic
' Read file as array of lines
faq_lines = READ "faq.txt" AS LINES
TALK "We have " + LEN(faq_lines) + " FAQ entries"
FOR EACH line IN faq_lines
IF INSTR(line, user_question) > 0 THEN
TALK "Found relevant FAQ: " + line
END IF
NEXT
```
### Read PDF Document
```basic
' Extract text from PDF
contract_text = READ "documents/contract.pdf"
TALK "Contract length: " + LEN(contract_text) + " characters"
' Use LLM to analyze
summary = LLM "Summarize the key points of this contract:\n\n" + contract_text
TALK summary
```
### Read Excel Spreadsheet
```basic
' Load Excel data
sales_data = READ "reports/sales-q1.xlsx" AS TABLE
total = 0
FOR EACH row IN sales_data
total = total + row.amount
NEXT
TALK "Total Q1 sales: $" + FORMAT(total, "#,##0.00")
```
---
## Working with Different File Types
### Text Files
```basic
' Plain text - returned as string
notes = READ "notes.txt"
readme = READ "README.md"
```
### JSON Files
```basic
' JSON - returned as string, use JSON_PARSE for object
json_text = READ "data.json"
data = JSON_PARSE(json_text)
```
### CSV Files
```basic
' CSV as string
csv_raw = READ "data.csv"
' CSV as structured table (recommended)
csv_data = READ "data.csv" AS TABLE
first_row = csv_data[0]
```
### Documents
```basic
' PDF - text extracted automatically
pdf_content = READ "report.pdf"
' Word documents - text extracted automatically
doc_content = READ "proposal.docx"
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
content = READ "optional-file.txt"
IF ERROR THEN
PRINT "File not found, using default"
content = "Default content"
END IF
```
### Check File Exists
```basic
' List directory to check if file exists
files = LIST "documents/"
found = false
FOR EACH file IN files
IF file.name = "report.pdf" THEN
found = true
EXIT FOR
END IF
NEXT
IF found THEN
content = READ "documents/report.pdf"
ELSE
TALK "Report not found. Would you like me to generate one?"
END IF
```
---
## Common Use Cases
### Load Email Template
```basic
' Read HTML template and fill variables
template = READ "templates/order-confirmation.html"
' Replace placeholders
email_body = REPLACE(template, "{{customer_name}}", customer.name)
email_body = REPLACE(email_body, "{{order_id}}", order.id)
email_body = REPLACE(email_body, "{{total}}", FORMAT(order.total, "$#,##0.00"))
SEND MAIL customer.email, "Order Confirmation", email_body
```
### Process Data File
```basic
' Read customer list and send personalized messages
customers = READ "campaigns/target-customers.csv" AS TABLE
FOR EACH customer IN customers
IF customer.opted_in = "yes" THEN
message = "Hi " + customer.first_name + ", check out our new products!"
SEND SMS customer.phone, message
END IF
NEXT
TALK "Campaign sent to " + LEN(customers) + " customers"
```
### Load Bot Configuration
```basic
' Read bot settings from file
settings_text = READ "bot-settings.json"
settings = JSON_PARSE(settings_text)
' Apply settings
SET BOT MEMORY "greeting", settings.greeting
SET BOT MEMORY "language", settings.language
SET BOT MEMORY "max_retries", settings.max_retries
```
### Knowledge Base Lookup
```basic
' Read FAQ document for quick lookups
faq_content = READ "knowledge/faq.md"
' Search for relevant section
IF INSTR(user_question, "return") > 0 THEN
' Extract return policy section
start_pos = INSTR(faq_content, "## Return Policy")
end_pos = INSTR(faq_content, "##", start_pos + 1)
policy = MID(faq_content, start_pos, end_pos - start_pos)
TALK policy
END IF
```
---
## File Path Rules
| Path | Description |
|------|-------------|
| `file.txt` | Root of bot's storage |
| `folder/file.txt` | Subdirectory |
| `folder/sub/file.txt` | Nested subdirectory |
| `../file.txt` | **Not allowed** — no parent traversal |
| `/absolute/path` | **Not allowed** — paths are always relative |
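Disallowed paths raise an error instead of escaping the storage root. A minimal sketch of handling that case:

```basic
ON ERROR RESUME NEXT
content = READ "../outside.txt"    ' Parent traversal is rejected
IF ERROR THEN
    PRINT "Invalid path: " + ERROR_MESSAGE
    content = ""
END IF
```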
---
## Configuration
Configure storage settings in `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
drive-read-timeout,30
```
---
## Implementation Notes
- Implemented in Rust under `src/file/mod.rs`
- Automatically detects file encoding (UTF-8, UTF-16, etc.)
- PDF extraction uses `pdf-extract` crate
- DOCX extraction parses XML content
- Maximum file size: 50MB (configurable)
- Files are cached in memory for repeated reads
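The 50MB limit is noted as configurable; the exact key name may differ by deployment, so treat `drive-max-file-size` below as an assumed name to verify against your installation:

```csv
name,value
drive-max-file-size,50
```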
---
## Related Keywords
- [WRITE](keyword-write.md) — Save content to files
- [LIST](keyword-list.md) — List files in a directory
- [DOWNLOAD](keyword-download.md) — Download files from URLs
- [UPLOAD](keyword-upload.md) — Upload files to storage
- [DELETE](keyword-delete.md) — Remove files (unified)
- [GET](keyword-get.md) — Read from URLs or files
---
## Summary
`READ` is the primary keyword for accessing stored files. It handles text extraction from various document formats, supports structured data parsing for CSV/Excel files, and integrates seamlessly with the bot's storage system. Use it to load templates, process data files, access configuration, and work with uploaded documents.

# SET HEADER
The `SET HEADER` keyword configures HTTP request headers for subsequent API calls, enabling authentication, content type specification, and custom headers.
---
## Syntax
```basic
SET HEADER "header-name", "value"
SET HEADER "header-name", ""
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `header-name` | String | The HTTP header name (e.g., "Authorization") |
| `value` | String | The header value (empty string to clear) |
---
## Description
`SET HEADER` configures headers that will be sent with subsequent HTTP requests (GET, POST, PUT, PATCH, DELETE HTTP). Headers persist until explicitly cleared or the script ends.
Common uses include:
- Setting authentication tokens
- Specifying content types
- Adding API keys
- Setting custom request identifiers
- Configuring accept headers
---
## Examples
### Basic Authentication Header
```basic
' Set Bearer token for API authentication
SET HEADER "Authorization", "Bearer " + api_token
' Make authenticated request
result = GET "https://api.example.com/protected/resource"
' Clear header when done
SET HEADER "Authorization", ""
```
### API Key Header
```basic
' Set API key in custom header
SET HEADER "X-API-Key", api_key
result = POST "https://api.service.com/data" WITH
query = user_query
SET HEADER "X-API-Key", ""
```
### Multiple Headers
```basic
' Set multiple headers for a request
SET HEADER "Authorization", "Bearer " + token
SET HEADER "Content-Type", "application/json"
SET HEADER "Accept", "application/json"
SET HEADER "X-Request-ID", request_id
result = POST "https://api.example.com/orders" WITH
product_id = "SKU-001",
quantity = 5
' Clear all headers
SET HEADER "Authorization", ""
SET HEADER "Content-Type", ""
SET HEADER "Accept", ""
SET HEADER "X-Request-ID", ""
```
### Content Type for Form Data
```basic
' Set content type for form submission
SET HEADER "Content-Type", "application/x-www-form-urlencoded"
result = POST "https://api.legacy.com/submit", form_data
SET HEADER "Content-Type", ""
```
---
## Common Headers
| Header | Purpose | Example Value |
|--------|---------|---------------|
| `Authorization` | Authentication | `Bearer token123` |
| `Content-Type` | Request body format | `application/json` |
| `Accept` | Expected response format | `application/json` |
| `X-API-Key` | API key authentication | `key_abc123` |
| `X-Request-ID` | Request tracking/correlation | `req-uuid-here` |
| `User-Agent` | Client identification | `MyBot/1.0` |
| `Accept-Language` | Preferred language | `en-US` |
| `If-Match` | Conditional update (ETag) | `"abc123"` |
| `If-None-Match` | Conditional fetch | `"abc123"` |
---
## Authentication Patterns
### Bearer Token (OAuth2/JWT)
```basic
' Most common for modern APIs
SET HEADER "Authorization", "Bearer " + access_token
result = GET "https://api.service.com/user/profile"
SET HEADER "Authorization", ""
```
### Basic Authentication
```basic
' Encode credentials as Base64
credentials = BASE64_ENCODE(username + ":" + password)
SET HEADER "Authorization", "Basic " + credentials
result = GET "https://api.legacy.com/data"
SET HEADER "Authorization", ""
```
### API Key in Header
```basic
' API key as custom header
SET HEADER "X-API-Key", api_key
' Or in Authorization header
SET HEADER "Authorization", "Api-Key " + api_key
result = POST "https://api.provider.com/query" WITH
question = user_input
```
### Custom Token
```basic
' Some APIs use custom authentication schemes
SET HEADER "X-Auth-Token", auth_token
SET HEADER "X-Client-ID", client_id
result = GET "https://api.custom.com/resources"
```
---
## Common Use Cases
### Authenticated API Call
```basic
' Complete authenticated API interaction
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY "api_token"
SET HEADER "Content-Type", "application/json"
result = POST "https://api.crm.com/leads" WITH
name = customer_name,
email = customer_email,
source = "chatbot"
IF result.id THEN
TALK "Lead created: " + result.id
ELSE
TALK "Error creating lead: " + result.error
END IF
' Always clean up
SET HEADER "Authorization", ""
SET HEADER "Content-Type", ""
```
### Request Tracing
```basic
' Add request ID for debugging/tracing
request_id = GUID()
SET HEADER "X-Request-ID", request_id
SET HEADER "X-Correlation-ID", session.id
PRINT "Request ID: " + request_id
result = POST "https://api.example.com/process" WITH
data = payload
SET HEADER "X-Request-ID", ""
SET HEADER "X-Correlation-ID", ""
```
### Conditional Requests
```basic
' Only fetch if resource changed (using ETag)
SET HEADER "If-None-Match", cached_etag
result = GET "https://api.example.com/data"
IF result.status = 304 THEN
TALK "Data unchanged, using cached version"
ELSE
' Process new data
cached_data = result.data
cached_etag = result.headers.etag
END IF
SET HEADER "If-None-Match", ""
```
---
## Header Persistence
Headers persist across multiple requests until cleared:
```basic
' Set header once
SET HEADER "Authorization", "Bearer " + token
' Used in all these requests
result1 = GET "https://api.example.com/users"
result2 = GET "https://api.example.com/orders"
result3 = POST "https://api.example.com/actions" WITH action = "process"
' Clear when done with authenticated calls
SET HEADER "Authorization", ""
```
---
## Best Practices
1. **Always clear sensitive headers** — Remove authentication headers after use
2. **Use Vault for tokens** — Never hardcode API keys or tokens
3. **Set Content-Type when needed** — JSON is usually the default
4. **Add request IDs** — Helps with debugging and support requests
5. **Check API documentation** — Header names and formats vary by API
```basic
' Good practice pattern
' 1. Get token from secure storage
token = GET BOT MEMORY "api_token"
' 2. Set headers
SET HEADER "Authorization", "Bearer " + token
SET HEADER "X-Request-ID", GUID()
' 3. Make request
result = GET api_url
' 4. Clear sensitive headers
SET HEADER "Authorization", ""
SET HEADER "X-Request-ID", ""
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
' Token might be expired
SET HEADER "Authorization", "Bearer " + old_token
result = GET "https://api.example.com/protected"
IF result.status = 401 THEN
' Token expired, refresh it
TALK "Refreshing authentication..."
' Refresh via the auth provider's token endpoint (URL is illustrative)
new_token = POST "https://auth.example.com/token" WITH
grant_type = "refresh_token",
refresh_token = refresh_token
SET BOT MEMORY "api_token", new_token
SET HEADER "Authorization", "Bearer " + new_token
result = GET "https://api.example.com/protected"
END IF
SET HEADER "Authorization", ""
```
---
## Configuration
HTTP defaults can be set in `config.csv`:
```csv
name,value
http-timeout,30
http-default-content-type,application/json
http-user-agent,GeneralBots/6.1.0
```
---
## Implementation Notes
- Implemented in Rust under `src/web_automation/http.rs`
- Headers are stored in thread-local storage
- Case-insensitive header names (HTTP standard)
- Special characters in values are properly escaped
- Empty string clears the header
---
## Related Keywords
- [GET](keyword-get.md) — Retrieve data from URLs
- [POST](keyword-post.md) — Create new resources
- [PUT](keyword-put.md) — Replace entire resources
- [PATCH](keyword-patch.md) — Partial resource updates
- [DELETE](keyword-delete.md) — Remove resources (HTTP, database, or file; auto-detected)
- [GRAPHQL](keyword-graphql.md) — GraphQL operations
---
## Summary
`SET HEADER` configures HTTP headers for API requests. Use it to add authentication tokens, specify content types, and include custom headers. Always clear sensitive headers after use and store credentials securely in Vault rather than hardcoding them. Headers persist until explicitly cleared, so you can set them once for multiple related requests.

# SOAP
The `SOAP` keyword enables bots to communicate with legacy SOAP/XML web services, allowing integration with enterprise systems, government APIs, and older corporate infrastructure that still relies on SOAP protocols.
---
## Syntax
```basic
result = SOAP "wsdl_url", "operation", params
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `wsdl_url` | String | URL to the WSDL file or SOAP endpoint |
| `operation` | String | Name of the SOAP operation to call |
| `params` | Object | Parameters to pass to the operation |
---
## Description
`SOAP` sends a SOAP (Simple Object Access Protocol) request to a web service, automatically building the XML envelope and parsing the response. This enables integration with legacy enterprise systems that haven't migrated to REST APIs.
Use cases include:
- Connecting to government tax and fiscal systems
- Integrating with legacy ERP systems (SAP, Oracle)
- Communicating with banking and payment systems
- Accessing healthcare HL7/SOAP interfaces
- Interfacing with older CRM systems
---
## Examples
### Basic SOAP Request
```basic
' Call a simple SOAP service
result = SOAP "https://api.example.com/service?wsdl", "GetUserInfo", #{
"userId": "12345"
}
TALK "User name: " + result.name
```
### Tax Calculation Service
```basic
' Brazilian NF-e fiscal service example
nfe_params = #{
"CNPJ": company_cnpj,
"InvoiceNumber": invoice_number,
"Items": invoice_items,
"TotalValue": total_value
}
result = SOAP "https://nfe.fazenda.gov.br/NFeAutorizacao4/NFeAutorizacao4.asmx?wsdl",
"NfeAutorizacao",
nfe_params
IF result.status = "Authorized" THEN
TALK "Invoice authorized! Protocol: " + result.protocol
ELSE
TALK "Authorization failed: " + result.errorMessage
END IF
```
### Currency Exchange Service
```basic
' Get exchange rates from central bank
params = #{
"fromCurrency": "USD",
"toCurrency": "BRL",
"date": FORMAT(NOW(), "YYYY-MM-DD")
}
result = SOAP "https://www.bcb.gov.br/webservice/cotacao.asmx?wsdl",
"GetCotacao",
params
rate = result.cotacao.valor
TALK "Today's USD/BRL rate: " + rate
```
### Weather Service (Legacy)
```basic
' Access legacy weather SOAP service
weather_params = #{
"city": city_name,
"country": "BR"
}
result = SOAP "https://weather.example.com/service.asmx?wsdl",
"GetWeather",
weather_params
TALK "Weather in " + city_name + ": " + result.description
TALK "Temperature: " + result.temperature + "°C"
```
### SAP Integration
```basic
' Query SAP for material information
sap_params = #{
"MaterialNumber": material_code,
"Plant": "1000"
}
result = SOAP "https://sap.company.com:8443/sap/bc/srt/wsdl/MATERIAL_INFO?wsdl",
"GetMaterialDetails",
sap_params
material = result.MaterialData
TALK "Material: " + material.Description
TALK "Stock: " + material.AvailableStock + " units"
TALK "Price: $" + material.StandardPrice
```
---
## Working with Complex Types
### Nested Objects
```basic
' SOAP request with nested structure
customer_data = #{
"Customer": #{
"Name": customer_name,
"Address": #{
"Street": street,
"City": city,
"ZipCode": zipcode,
"Country": "BR"
},
"Contact": #{
"Email": email,
"Phone": phone
}
}
}
result = SOAP "https://crm.company.com/CustomerService.asmx?wsdl",
"CreateCustomer",
customer_data
TALK "Customer created with ID: " + result.CustomerId
```
### Array Parameters
```basic
' Send multiple items in SOAP request
order_items = [
#{ "SKU": "PROD-001", "Quantity": 2, "Price": 99.99 },
#{ "SKU": "PROD-002", "Quantity": 1, "Price": 49.99 },
#{ "SKU": "PROD-003", "Quantity": 5, "Price": 19.99 }
]
order_params = #{
"OrderHeader": #{
"CustomerId": customer_id,
"OrderDate": FORMAT(NOW(), "YYYY-MM-DD")
},
"OrderItems": order_items
}
result = SOAP "https://erp.company.com/OrderService?wsdl",
"CreateOrder",
order_params
TALK "Order " + result.OrderNumber + " created successfully!"
```
---
## Response Handling
### Parsing Complex Responses
```basic
' Handle structured SOAP response
result = SOAP "https://api.example.com/InvoiceService?wsdl",
"GetInvoices",
#{ "CustomerId": customer_id, "Year": 2024 }
' Access nested response data
FOR EACH invoice IN result.Invoices.Invoice
TALK "Invoice #" + invoice.Number + " - $" + invoice.Total
TALK " Date: " + invoice.Date
TALK " Status: " + invoice.Status
NEXT
```
### Checking Response Status
```basic
result = SOAP service_url, operation, params
IF result.ResponseCode = "0" OR result.Success = true THEN
TALK "Operation completed successfully"
' Process result data
ELSE
TALK "Operation failed: " + result.ErrorMessage
END IF
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = SOAP "https://legacy.system.com/service.asmx?wsdl",
"ProcessPayment",
payment_params
IF ERROR THEN
error_msg = ERROR_MESSAGE
IF INSTR(error_msg, "timeout") > 0 THEN
TALK "The service is taking too long. Please try again."
ELSE IF INSTR(error_msg, "WSDL") > 0 THEN
TALK "Cannot connect to the service. It may be down."
ELSE IF INSTR(error_msg, "authentication") > 0 THEN
TALK "Authentication failed. Please check credentials."
ELSE
TALK "Service error: " + error_msg
END IF
ELSE
IF result.TransactionId THEN
TALK "Payment processed! Transaction: " + result.TransactionId
END IF
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `WSDL_PARSE_ERROR` | Invalid WSDL format | Verify WSDL URL and format |
| `SOAP_FAULT` | Service returned fault | Check error message from service |
| `TIMEOUT` | Request took too long | Increase timeout or retry |
| `CONNECTION_ERROR` | Cannot reach service | Check network and URL |
| `AUTHENTICATION_ERROR` | Invalid credentials | Verify authentication headers |
---
## Authentication
### WS-Security (Username Token)
```basic
' For services requiring WS-Security authentication
' Set credentials via SET HEADER before SOAP call
SET HEADER "Authorization", "Basic " + BASE64(username + ":" + password)
result = SOAP service_url, operation, params
```
### Certificate-Based Authentication
For services requiring client certificates, configure at the system level in `config.csv`:
```csv
name,value
soap-client-cert,/path/to/client.pem
soap-client-key,/path/to/client.key
soap-ca-cert,/path/to/ca.pem
```
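Once certificates are configured, SOAP calls use them automatically and no per-call changes are needed. A minimal sketch (the service URL, operation, and response fields are hypothetical):

```basic
' Client certificate from config.csv is applied transparently
result = SOAP "https://secure.gov.example/ReportService?wsdl", "SubmitReport", #{
    "CompanyId": company_id,
    "Period": FORMAT(NOW(), "YYYY-MM")
}
TALK "Receipt: " + result.ReceiptNumber
```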
---
## Practical Examples
### Brazilian NFe (Electronic Invoice)
```basic
' Emit electronic invoice to Brazilian tax authority
nfe_data = #{
"infNFe": #{
"ide": #{
"cUF": "35",
"natOp": "VENDA",
"serie": "1",
"nNF": invoice_number
},
"emit": #{
"CNPJ": company_cnpj,
"xNome": company_name
},
"dest": #{
"CNPJ": customer_cnpj,
"xNome": customer_name
},
"det": invoice_items,
"total": #{
"vNF": total_value
}
}
}
result = SOAP "https://nfe.fazenda.sp.gov.br/ws/NFeAutorizacao4.asmx?wsdl",
"nfeAutorizacaoLote",
nfe_data
IF result.cStat = "100" THEN
TALK "NFe authorized! Key: " + result.chNFe
ELSE
TALK "Error: " + result.xMotivo
END IF
```
### Healthcare HL7/SOAP
```basic
' Query patient information from healthcare system
patient_query = #{
"PatientId": patient_id,
"IncludeHistory": true
}
result = SOAP "https://hospital.example.com/PatientService?wsdl",
"GetPatientRecord",
patient_query
TALK "Patient: " + result.Patient.Name
TALK "DOB: " + result.Patient.DateOfBirth
TALK "Allergies: " + JOIN(result.Patient.Allergies, ", ")
```
### Legacy CRM Integration
```basic
' Update customer in legacy Siebel CRM
update_data = #{
"AccountId": account_id,
"AccountName": new_name,
"PrimaryContact": #{
"FirstName": first_name,
"LastName": last_name,
"Email": email
},
"UpdatedBy": bot_user
}
result = SOAP "https://siebel.company.com/eai_enu/start.swe?SWEExtSource=WebService&wsdl",
"AccountUpdate",
update_data
TALK "CRM updated. Transaction ID: " + result.TransactionId
```
---
## SOAP vs REST
| Aspect | SOAP | REST |
|--------|------|------|
| Protocol | XML-based | JSON typically |
| Standards | WS-Security, WS-*, WSDL | OpenAPI, OAuth |
| Use Case | Enterprise, legacy | Modern APIs |
| Keyword | `SOAP` | `POST`, `GET` |
| Complexity | Higher | Lower |
**When to use SOAP:**
- Integrating with legacy enterprise systems
- Government/fiscal APIs requiring SOAP
- Systems with strict WS-Security requirements
- Banking and financial services
- Healthcare systems (HL7 SOAP)
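The same data retrieval looks like this in each style (both endpoints are hypothetical):

```basic
' Legacy SOAP service
soap_result = SOAP "https://legacy.example.com/UserService?wsdl", "GetUser", #{ "id": user_id }
' Equivalent modern REST endpoint
rest_result = GET "https://api.example.com/users/" + user_id
```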
---
## Configuration
No specific configuration required. The keyword handles SOAP envelope construction automatically.
Namespaces and operation-specific SOAP headers are inferred from the WSDL; for transport-level HTTP headers (such as authentication), use `SET HEADER` before the call.
---
## Implementation Notes
- Implemented in Rust under `src/basic/keywords/http_operations.rs`
- Automatically fetches and parses WSDL
- Builds SOAP envelope from parameters
- Parses XML response into JSON-like object
- Timeout: 120 seconds by default
- Supports SOAP 1.1 and 1.2
---
## Related Keywords
- [POST](keyword-post.md) — For REST API calls
- [GET](keyword-get.md) — For REST GET requests
- [GRAPHQL](keyword-graphql.md) — For GraphQL APIs
- [SET HEADER](keyword-set-header.md) — Set authentication headers
---
## Summary
`SOAP` enables integration with legacy SOAP/XML web services that are still common in enterprise, government, and healthcare sectors. While REST is preferred for modern APIs, SOAP remains essential for connecting to fiscal systems (NFe, tax services), legacy ERPs (SAP, Oracle), and older enterprise infrastructure. The keyword handles XML envelope construction and parsing automatically, making SOAP integration as simple as REST calls.

# START MEET / JOIN MEET Keywords
The `START MEET` and `JOIN MEET` keywords enable bots to create and participate in video meetings, bringing AI capabilities directly into video conferencing.
## Keywords
| Keyword | Purpose |
|---------|---------|
| `START MEET` | Create a new meeting room and get join link |
| `JOIN MEET` | Add the bot to an existing meeting |
| `LEAVE MEET` | Remove the bot from a meeting |
| `INVITE TO MEET` | Send meeting invitations to participants |
---
## START MEET
Creates a new video meeting room and optionally adds the bot as a participant.
### Syntax
```basic
room = START MEET "room-name"
room = START MEET "room-name" WITH BOT
room = START MEET "room-name" WITH OPTIONS options
```
### Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `room-name` | String | Display name for the meeting room |
| `WITH BOT` | Flag | Automatically add the bot to the meeting |
| `options` | JSON | Meeting configuration options |
### Options Object
```basic
' Options can be set as a JSON string
options = '{"recording": true, "transcription": true, "max_participants": 50}'
```
### Example
```basic
' Create a simple meeting
room = START MEET "Team Sync"
TALK "Meeting created! Join here: " + room.url
' Create meeting with bot participant
room = START MEET "AI-Assisted Workshop" WITH BOT
TALK "I've joined the meeting and I'm ready to help!"
TALK "Join link: " + room.url
' Create meeting with full options
options = '{"recording": true, "transcription": true, "bot_persona": "note-taker"}'
room = START MEET "Project Review" WITH OPTIONS options
```
### Return Value
Returns a room object with:
| Property | Description |
|----------|-------------|
| `room.id` | Unique room identifier |
| `room.url` | Join URL for participants |
| `room.name` | Room display name |
| `room.created` | Creation timestamp |
| `room.host_token` | Host access token |
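A sketch of using the returned room object, storing the host token so the bot can manage the room later (the memory key name is arbitrary):

```basic
room = START MEET "Board Review"
' Keep the host token for later room administration
SET BOT MEMORY "host_token_" + room.id, room.host_token
TALK "Room '" + room.name + "' created at " + room.created
TALK "Join here: " + room.url
```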
---
## JOIN MEET
Adds the bot to an existing meeting room.
### Syntax
```basic
JOIN MEET room_id
JOIN MEET room_id AS "persona"
JOIN MEET room_url
```
### Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `room_id` | String | Meeting room ID |
| `room_url` | String | Meeting join URL |
| `persona` | String | Bot's display name in the meeting |
### Example
```basic
' Join by room ID
JOIN MEET "room-abc123"
' Join with custom persona
JOIN MEET "room-abc123" AS "Meeting Assistant"
' Join by URL
JOIN MEET "https://meet.gb/abc-123"
' Join and announce
JOIN MEET meeting_room AS "AI Note Taker"
TALK TO MEET "Hello everyone! I'm here to take notes. Just say 'note that' followed by anything important."
```
---
## LEAVE MEET
Removes the bot from the current meeting.
### Syntax
```basic
LEAVE MEET
LEAVE MEET room_id
```
### Example
```basic
' Leave current meeting
LEAVE MEET
' Leave specific meeting (when bot is in multiple)
LEAVE MEET "room-abc123"
' Graceful exit
TALK TO MEET "Thanks everyone! I'll send the meeting notes shortly."
WAIT 2
LEAVE MEET
```
---
## INVITE TO MEET
Sends meeting invitations to participants.
### Syntax
```basic
INVITE TO MEET room, participants
INVITE TO MEET room, participants, message
```
### Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `room` | Object/String | Room object or room ID |
| `participants` | Array | List of email addresses |
| `message` | String | Optional custom invitation message |
### Example
```basic
' Create room and invite team
room = START MEET "Sprint Planning" WITH BOT
participants = ["alice@company.com", "bob@company.com", "carol@company.com"]
INVITE TO MEET room, participants
TALK "Invitations sent to " + LEN(participants) + " participants"
' With custom message
INVITE TO MEET room, participants, "Join us for sprint planning! The AI assistant will be taking notes."
```
---
## TALK TO MEET
Sends a message to all meeting participants (text-to-speech or chat).
### Syntax
```basic
TALK TO MEET "message"
TALK TO MEET "message" AS CHAT
TALK TO MEET "message" AS VOICE
```
### Example
```basic
' Send as both chat and voice (default)
TALK TO MEET "Let's start with the agenda review."
' Chat only (no voice)
TALK TO MEET "Here's the link to the document: https://..." AS CHAT
' Voice only (no chat message)
TALK TO MEET "I've noted that action item." AS VOICE
```
---
## HEAR FROM MEET
Listens for speech or chat messages from meeting participants.
### Syntax
```basic
HEAR FROM MEET INTO variable
HEAR FROM MEET INTO variable TIMEOUT seconds
```
### Example
```basic
' Listen for meeting input
HEAR FROM MEET INTO participant_message
IF INSTR(participant_message, "note that") > 0 THEN
note = REPLACE(participant_message, "note that", "")
notes = notes + "\n- " + note
TALK TO MEET "Got it! I've noted: " + note
END IF
```
---
## Complete Example: AI Meeting Assistant
```basic
' AI Meeting Assistant Bot
' Joins meetings, takes notes, and provides summaries
participants = []
TALK "Would you like me to join your meeting? Share the room ID or say 'create new'."
HEAR user_input
IF user_input = "create new" THEN
TALK "What should we call this meeting?"
HEAR meeting_name
room = START MEET meeting_name WITH BOT
TALK "Meeting created! Share this link: " + room.url
TALK "Who should I invite? (comma-separated emails, or 'skip')"
HEAR invites
IF invites <> "skip" THEN
participants = SPLIT(invites, ",")
INVITE TO MEET room, participants
TALK "Invitations sent!"
END IF
ELSE
room_id = user_input
meeting_name = "Meeting " + room_id
JOIN MEET room_id AS "AI Assistant"
TALK "I've joined the meeting!"
END IF
' Initialize notes
notes = "# Meeting Notes\n\n"
notes = notes + "**Date:** " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "\n\n"
notes = notes + "## Key Points\n\n"
TALK TO MEET "Hello! I'm your AI assistant. Say 'note that' to capture important points, or 'summarize' when you're done."
' Meeting loop
meeting_active = true
WHILE meeting_active
HEAR FROM MEET INTO message TIMEOUT 300
IF message = "" THEN
' Timeout - check if meeting still active
CONTINUE
END IF
' Process commands
IF INSTR(LOWER(message), "note that") > 0 THEN
note_content = REPLACE(LOWER(message), "note that", "")
notes = notes + "- " + TRIM(note_content) + "\n"
TALK TO MEET "Noted!" AS VOICE
ELSE IF INSTR(LOWER(message), "action item") > 0 THEN
action = REPLACE(LOWER(message), "action item", "")
notes = notes + "- **ACTION:** " + TRIM(action) + "\n"
TALK TO MEET "Action item recorded!" AS VOICE
ELSE IF INSTR(LOWER(message), "summarize") > 0 THEN
' Generate AI summary
summary = LLM "Summarize these meeting notes concisely:\n\n" + notes
TALK TO MEET "Here's the summary: " + summary
ELSE IF INSTR(LOWER(message), "end meeting") > 0 THEN
meeting_active = false
END IF
WEND
' Save and share notes
filename = "meeting-notes-" + FORMAT(NOW(), "YYYYMMDD-HHmm") + ".md"
SAVE notes TO filename
TALK TO MEET "Meeting ended. I'll send the notes to all participants."
LEAVE MEET
' Email notes to participants
SEND MAIL participants, "Meeting Notes: " + meeting_name, notes
TALK "Notes saved and sent to all participants!"
```
---
## Example: Quick Standup Bot
```basic
' Daily Standup Bot
room = START MEET "Daily Standup" WITH BOT
team = ["dev1@company.com", "dev2@company.com", "dev3@company.com"]
INVITE TO MEET room, team, "Time for standup! Join now."
TALK TO MEET "Good morning team! Let's do a quick round. I'll call on each person."
updates = ""
FOR EACH member IN team
TALK TO MEET member + ", what did you work on yesterday and what's planned for today?"
HEAR FROM MEET INTO update TIMEOUT 120
updates = updates + "**" + member + ":** " + update + "\n\n"
NEXT
TALK TO MEET "Great standup everyone! I'll post the summary to Slack."
' Post to Slack
POST "https://slack.com/api/chat.postMessage" WITH
channel = "#dev-standup",
text = "📋 **Standup Summary**\n\n" + updates
LEAVE MEET
```
---
## Configuration
Configure Meet integration in `config.csv`:
```csv
name,value
meet-provider,livekit
meet-server-url,wss://localhost:7880
meet-api-key,vault:gbo/meet/api_key
meet-api-secret,vault:gbo/meet/api_secret
meet-bot-default-persona,AI Assistant
meet-recording-enabled,true
meet-transcription-enabled,true
meet-max-participants,50
```
---
## Bot Capabilities in Meetings
When a bot joins a meeting, it can:
| Capability | Description |
|------------|-------------|
| **Listen** | Transcribe speech from participants |
| **Speak** | Text-to-speech announcements |
| **Chat** | Send text messages to meeting chat |
| **Record** | Capture meeting recording |
| **Screen Share** | Display content (dashboards, docs) |
| **React** | Send emoji reactions |
---
## See Also
- [Meet App](../chapter-04-gbui/apps/meet.md) - User interface for Meet
- [BOOK MEETING](./keyword-book.md) - Schedule meetings with calendar integration
- [Calls API](../chapter-10-rest/calls-api.md) - API reference for video calls
- [Multi-Agent Keywords](./keywords-multi-agent.md) - Bot collaboration features

# UPDATE
The `UPDATE` keyword modifies existing records in database tables, enabling bots to change stored data based on conditions.
---
## Syntax
```basic
UPDATE "table_name" SET field1 = value1 WHERE condition
UPDATE "table_name" SET field1 = value1, field2 = value2 WHERE condition
UPDATE "table_name" ON connection SET field1 = value1 WHERE condition
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `table_name` | String | Name of the target database table |
| `SET` | Clause | Field-value pairs to update |
| `WHERE` | Clause | Condition to select records to update |
| `ON connection` | String | Optional named database connection |
---
## Description
`UPDATE` modifies existing records in a database table that match the specified `WHERE` condition. The `SET` clause specifies which fields to change and their new values. A `WHERE` clause is required by default; without one, every record in the table would be updated, which is rarely what you want.
Use cases include:
- Updating user profiles
- Changing order status
- Recording timestamps for actions
- Incrementing counters
- Marking items as read/processed
---
## Examples
### Basic Update
```basic
' Update a customer's email
UPDATE "customers" SET email = "new.email@example.com" WHERE id = 123
TALK "Email updated successfully!"
```
### Update Multiple Fields
```basic
' Update multiple fields at once
UPDATE "orders" SET
status = "shipped",
shipped_at = NOW(),
tracking_number = tracking_id
WHERE id = order_id
TALK "Order #" + order_id + " marked as shipped"
```
### Update with Variable Values
```basic
' Update from conversation data
TALK "What is your new phone number?"
HEAR new_phone
UPDATE "customers" SET phone = new_phone WHERE id = user.id
TALK "Your phone number has been updated to " + new_phone
```
### Increment Counter
```basic
' Increment a counter field
UPDATE "products" SET view_count = view_count + 1 WHERE id = product_id
```
### Update Based on Condition
```basic
' Mark old sessions as expired
UPDATE "sessions" SET
status = "expired",
expired_at = NOW()
WHERE last_activity < DATEADD(NOW(), -30, "minute")
TALK "Inactive sessions have been expired"
```
### Update with Named Connection
```basic
' Update on specific database
UPDATE "audit_log" ON "analytics_db" SET
reviewed = true,
reviewed_by = admin.id
WHERE id = log_entry_id
```
---
## Common Use Cases
### Update User Profile
```basic
' User wants to update their profile
TALK "What would you like to update? (name, email, phone)"
HEAR field_to_update
TALK "What is the new value?"
HEAR new_value
SWITCH field_to_update
CASE "name"
UPDATE "users" SET name = new_value WHERE id = user.id
CASE "email"
UPDATE "users" SET email = new_value WHERE id = user.id
CASE "phone"
UPDATE "users" SET phone = new_value WHERE id = user.id
CASE ELSE
TALK "Unknown field. Please choose name, email, or phone."
END SWITCH
TALK "Your " + field_to_update + " has been updated!"
```
### Change Order Status
```basic
' Update order through its lifecycle
UPDATE "orders" SET
status = "processing",
processed_at = NOW()
WHERE id = order_id AND status = "pending"
TALK "Order is now being processed"
```
### Mark as Read
```basic
' Mark notification as read
UPDATE "notifications" SET
read = true,
read_at = NOW()
WHERE user_id = user.id AND id = notification_id
TALK "Notification marked as read"
```
### Record Last Activity
```basic
' Update last activity timestamp
UPDATE "users" SET last_active = NOW() WHERE id = user.id
```
### Soft Delete
```basic
' Soft delete (mark as deleted without removing)
UPDATE "records" SET
deleted = true,
deleted_at = NOW(),
deleted_by = user.id
WHERE id = record_id
TALK "Record archived"
```
### Batch Update
```basic
' Update multiple records matching condition
UPDATE "subscriptions" SET
status = "active",
renewed_at = NOW()
WHERE expires_at < NOW() AND auto_renew = true
TALK "Lapsed auto-renew subscriptions renewed"
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
UPDATE "customers" SET email = new_email WHERE id = customer_id
IF ERROR THEN
PRINT "Update failed: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "duplicate") > 0 THEN
TALK "This email is already in use by another account."
ELSE IF INSTR(ERROR_MESSAGE, "constraint") > 0 THEN
TALK "The value you entered is not valid."
ELSE
TALK "Sorry, I couldn't update your information. Please try again."
END IF
ELSE
TALK "Information updated successfully!"
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `DUPLICATE_KEY` | Unique constraint violated | Value already exists |
| `CHECK_VIOLATION` | Value fails check constraint | Validate before update |
| `NOT_NULL_VIOLATION` | Setting required field to null | Provide a value |
| `NO_ROWS_AFFECTED` | WHERE matched no records | Verify condition |
---
## Safety Considerations
### Always Use WHERE Clause
```basic
' DANGEROUS - updates ALL records!
' UPDATE "users" SET status = "inactive"
' SAFE - updates only matching records
UPDATE "users" SET status = "inactive" WHERE last_login < "2024-01-01"
```
### Verify Before Update
```basic
' Check record exists before updating
record = FIND "orders" WHERE id = order_id
IF record THEN
UPDATE "orders" SET status = "cancelled" WHERE id = order_id
TALK "Order cancelled"
ELSE
TALK "Order not found"
END IF
```
### Limit Scope
```basic
' Update only records the user owns
UPDATE "documents" SET
title = new_title
WHERE id = document_id AND owner_id = user.id
```
---
## UPDATE vs MERGE
| Keyword | Purpose | Use When |
|---------|---------|----------|
| `UPDATE` | Modify existing records | Record definitely exists |
| `MERGE` | Insert or update | Record may or may not exist |
```basic
' UPDATE - Only modifies if exists
UPDATE "users" SET name = "John" WHERE email = "john@example.com"
' MERGE - Creates if not exists, updates if exists
MERGE INTO "users" ON email = "john@example.com" WITH
email = "john@example.com",
name = "John"
```
---
## Configuration
Database connection is configured in `config.csv`:
```csv
name,value
database-provider,postgres
database-pool-size,10
database-timeout,30
```
Database credentials are stored in Vault, not in config files.
---
## Implementation Notes
- Implemented in Rust under `src/database/operations.rs`
- Uses parameterized queries to prevent SQL injection
- Returns number of affected rows
- WHERE clause is required by default for safety
- Supports all comparison operators (=, <, >, <=, >=, <>)
- Supports AND/OR in WHERE conditions
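Since the keyword reports the number of affected rows, the count can be used to detect when a condition matched nothing. A sketch, assuming the return value can be captured into a variable:

```basic
' Hypothetical: capture the affected-row count
affected = UPDATE "orders" SET status = "cancelled" WHERE id = order_id AND status = "pending"
IF affected = 0 THEN
    TALK "No pending order found with that ID."
END IF
```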
---
## Related Keywords
- [INSERT](keyword-insert.md) — Add new records
- [DELETE](keyword-delete.md) — Remove records
- [MERGE](keyword-merge.md) — Insert or update (upsert)
- [FIND](keyword-find.md) — Query records
- [TABLE](keyword-table.md) — Create tables
---
## Summary
`UPDATE` modifies existing database records that match a WHERE condition. Use it to change user data, update statuses, record timestamps, and modify stored information. Always include a WHERE clause to avoid accidentally updating all records. For cases where you're unsure if a record exists, consider using `MERGE` instead.

# UPLOAD
The `UPLOAD` keyword transfers files from external URLs or local paths to the bot's drive storage, enabling bots to collect documents, images, and other files from users or external sources.
---
## Syntax
```basic
result = UPLOAD url
result = UPLOAD url TO "destination"
result = UPLOAD url TO "destination" AS "filename"
UPLOAD file_data TO "destination"
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `url` | String | Source URL to download and upload |
| `destination` | String | Target folder in bot's storage |
| `filename` | String | Custom filename (optional) |
| `file_data` | Binary | File data from user input or API response |
---
## Description
`UPLOAD` retrieves a file from a URL or accepts file data and stores it in the bot's configured storage (drive bucket). It supports:
- Downloading files from external URLs
- Accepting file uploads from chat users
- Storing API response attachments
- Organizing files into folders
- Automatic filename detection or custom naming
The destination path is relative to the bot's storage root. Directories are created automatically if they don't exist.
---
## Examples
### Basic URL Upload
```basic
' Download and store a file from URL
result = UPLOAD "https://example.com/report.pdf"
TALK "File saved as: " + result.filename
```
### Upload to Specific Folder
```basic
' Upload to a specific directory
result = UPLOAD "https://cdn.example.com/image.png" TO "images/products"
TALK "Image stored at: " + result.path
```
### Upload with Custom Filename
```basic
' Upload with a custom name
result = UPLOAD "https://api.example.com/export/data" TO "exports" AS "monthly-report.xlsx"
TALK "Report saved as: " + result.filename
```
### Handle User File Upload
```basic
' When user sends a file via WhatsApp/chat
TALK "Please send me the document you'd like to upload."
HEAR user_file
IF user_file.type = "file" THEN
result = UPLOAD user_file TO "user-uploads/" + user.id
TALK "Got it! I've saved your file: " + result.filename
ELSE
TALK "That doesn't look like a file. Please try again."
END IF
```
### Upload from API Response
```basic
' Download attachment from external API
invoice_url = GET "https://api.billing.com/invoices/" + invoice_id + "/pdf"
result = UPLOAD invoice_url.download_url TO "invoices/" + customer_id
TALK "Invoice downloaded and saved!"
SEND MAIL customer_email, "Your Invoice", "Please find your invoice attached.", result.path
```
---
## Return Value
`UPLOAD` returns an object with:
| Property | Description |
|----------|-------------|
| `result.path` | Full path in storage |
| `result.filename` | Name of the saved file |
| `result.size` | File size in bytes |
| `result.type` | MIME type of the file |
| `result.url` | Internal URL to access the file |
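The returned object can be used immediately, for example to confirm the stored location and size (the source URL is hypothetical):

```basic
result = UPLOAD "https://example.com/catalog.pdf" TO "docs"
TALK "Saved " + result.filename + " (" + FORMAT(result.size / 1024, "#,##0") + " KB)"
TALK "Access it at: " + result.url
```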
---
## Common Use Cases
### Collect User Documents
```basic
' Document collection flow
TALK "I need a few documents to process your application."
TALK "First, please upload your ID document."
HEAR id_doc
id_result = UPLOAD id_doc TO "applications/" + application_id + "/documents" AS "id-document"
TALK "Great! Now please upload proof of address."
HEAR address_doc
address_result = UPLOAD address_doc TO "applications/" + application_id + "/documents" AS "proof-of-address"
TALK "Thank you! I've received:"
TALK "✓ ID Document: " + id_result.filename
TALK "✓ Proof of Address: " + address_result.filename
```
### Archive External Content
```basic
' Download and archive web content
urls = [
"https://example.com/report-2024.pdf",
"https://example.com/report-2025.pdf"
]
FOR EACH url IN urls
result = UPLOAD url TO "archive/reports"
TALK "Archived: " + result.filename
NEXT
TALK "All reports archived successfully!"
```
### Profile Photo Upload
```basic
TALK "Would you like to update your profile photo? Send me an image."
HEAR photo
IF photo.type = "image" THEN
result = UPLOAD photo TO "profiles" AS user.id + "-avatar"
SET USER MEMORY "avatar_url", result.url
TALK "Profile photo updated! Looking good! 📸"
ELSE
TALK "Please send an image file."
END IF
```
### Backup External Data
```basic
' Backup data from external service
backup_url = "https://api.service.com/export?format=json&date=" + FORMAT(NOW(), "YYYY-MM-DD")
SET HEADER "Authorization", "Bearer " + api_token
result = UPLOAD backup_url TO "backups" AS "backup-" + FORMAT(NOW(), "YYYYMMDD") + ".json"
TALK "Backup complete: " + FORMAT(result.size / 1024, "#,##0") + " KB"
```
### Receipt Collection
```basic
' Expense report receipt upload
TALK "Please upload your receipt for the expense."
HEAR receipt
result = UPLOAD receipt TO "expenses/" + expense_id + "/receipts"
' Update expense record
UPDATE "expenses" SET receipt_path = result.path WHERE id = expense_id
TALK "Receipt attached to expense #" + expense_id
```
---
## Supported File Types
| Category | Extensions |
|----------|------------|
| Documents | `.pdf`, `.docx`, `.doc`, `.txt`, `.md`, `.rtf` |
| Spreadsheets | `.xlsx`, `.xls`, `.csv` |
| Images | `.jpg`, `.jpeg`, `.png`, `.gif`, `.webp`, `.svg` |
| Archives | `.zip`, `.tar`, `.gz`, `.rar` |
| Audio | `.mp3`, `.wav`, `.ogg`, `.m4a` |
| Video | `.mp4`, `.mov`, `.avi`, `.webm` |
| Data | `.json`, `.xml`, `.yaml` |
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = UPLOAD "https://example.com/large-file.zip" TO "downloads"
IF ERROR THEN
PRINT "Upload failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't download that file. The server might be unavailable."
ELSE IF result.size > 50000000 THEN
TALK "Warning: This is a large file (" + FORMAT(result.size / 1048576, "#,##0") + " MB)"
ELSE
TALK "File uploaded successfully!"
END IF
```
### Validate File Type
```basic
HEAR user_file
allowed_types = ["application/pdf", "image/jpeg", "image/png"]
IF NOT CONTAINS(allowed_types, user_file.mime_type) THEN
TALK "Sorry, I only accept PDF and image files."
ELSE
result = UPLOAD user_file TO "uploads"
TALK "File accepted!"
END IF
```
---
## Size Limits
| Limit | Default | Configurable |
|-------|---------|--------------|
| Maximum file size | 50 MB | Yes |
| Maximum files per folder | 10,000 | Yes |
| Total storage per bot | 10 GB | Yes |
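As a sketch, a bot can warn about the default 50 MB cap (52,428,800 bytes) after an upload attempt; the exact failure message is configuration-dependent.

```basic
ON ERROR RESUME NEXT
result = UPLOAD user_file TO "uploads"
IF ERROR THEN
    TALK "Upload failed. The file may exceed the configured size limit."
ELSE IF result.size > 52428800 THEN
    TALK "Note: this file is over 50 MB and may hit storage limits."
END IF
```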
---
## Configuration
Configure upload settings in `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
upload-max-size,52428800
upload-allowed-types,pdf,docx,xlsx,jpg,png
upload-timeout,120
```
---
## Security Considerations
- Files are scanned for malware before storage
- Executable files (`.exe`, `.sh`, `.bat`) are blocked by default
- File paths are sanitized to prevent directory traversal
- Original filenames are preserved but sanitized
- Large files are chunked for reliable upload
---
## Implementation Notes
- Implemented in Rust under `src/file/mod.rs`
- Uses streaming upload for large files
- Supports resume for interrupted uploads
- Automatic retry on network failures (up to 3 attempts)
- Progress tracking available for large files
- Deduplication based on content hash (optional)
---
## Related Keywords
- [DOWNLOAD](keyword-download.md) — Download files to user
- [READ](keyword-read.md) — Read file contents
- [WRITE](keyword-write.md) — Write content to files
- [LIST](keyword-list.md) — List files in storage
- [DELETE FILE](keyword-delete-file.md) — Remove files
- [COPY](keyword-copy.md) — Copy files within storage
---
## Summary
`UPLOAD` is essential for collecting files from users and external sources. Use it to accept document uploads, archive web content, collect receipts and photos, and store API response attachments. Combined with folder organization and custom naming, it provides flexible file collection for any bot workflow.

# WRITE
The `WRITE` keyword saves content to files in the bot's drive storage, enabling bots to create documents, export data, and persist information.
---
## Syntax
```basic
WRITE content TO "filename"
WRITE data TO "filename.csv" AS TABLE
WRITE lines TO "filename.txt" AS LINES
WRITE content TO "filename" APPEND
```
---
## Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `content` | String | The content to write to the file |
| `filename` | String | Path to the file in the bot's storage |
| `AS TABLE` | Flag | Write structured data as CSV format |
| `AS LINES` | Flag | Write array as separate lines |
| `APPEND` | Flag | Add to existing file instead of overwriting |
---
## Description
`WRITE` saves content to the bot's configured storage (drive bucket). It supports:
- Text files (`.txt`, `.md`, `.json`, `.xml`, `.csv`)
- Creating new files or overwriting existing ones
- Appending to existing files
- Writing structured data as CSV
- Automatic directory creation
The file path is relative to the bot's storage root. Use forward slashes for subdirectories.
---
## Examples
### Basic File Write
```basic
' Write a simple text file
message = "Welcome to our service!"
WRITE message TO "welcome.txt"
TALK "File saved successfully!"
```
### Write to Subdirectory
```basic
' Write file to nested folder (directories created automatically)
report = "Monthly Report\n\nSales: $10,000\nExpenses: $3,000"
WRITE report TO "reports/2025/january.md"
```
### Write JSON Data
```basic
' Create JSON configuration file
config_json = '{"theme": "dark", "language": "en", "notifications": true}'
WRITE config_json TO "settings.json"
```
### Write CSV as Table
```basic
' Export data as CSV - use FIND to get data from database
orders = FIND "orders" WHERE status = "completed" LIMIT 100
WRITE orders TO "exports/orders.csv" AS TABLE
TALK "Exported " + LEN(orders) + " orders to CSV"
```
### Write Lines
```basic
' Write array as separate lines
log_entries = [
"2025-01-15 10:00 - User logged in",
"2025-01-15 10:05 - Order placed",
"2025-01-15 10:10 - Payment processed"
]
WRITE log_entries TO "logs/activity.log" AS LINES
```
### Append to File
```basic
' Add entry to existing log file
new_entry = FORMAT(NOW(), "YYYY-MM-DD HH:mm") + " - " + event_description + "\n"
WRITE new_entry TO "logs/events.log" APPEND
```
---
## Common Use Cases
### Generate Report
```basic
' Create a formatted report
report = "# Sales Report\n\n"
report = report + "**Date:** " + FORMAT(NOW(), "MMMM DD, YYYY") + "\n\n"
report = report + "## Summary\n\n"
report = report + "- Total Sales: $" + FORMAT(total_sales, "#,##0.00") + "\n"
report = report + "- Orders: " + order_count + "\n"
report = report + "- Average Order: $" + FORMAT(total_sales / order_count, "#,##0.00") + "\n"
filename = "reports/sales-" + FORMAT(NOW(), "YYYYMMDD") + ".md"
WRITE report TO filename
TALK "Report saved to " + filename
```
### Export Customer Data
```basic
' Export customer list to CSV
customers = FIND "customers" WHERE status = "active"
WRITE customers TO "exports/active-customers.csv" AS TABLE
' Email the export
SEND MAIL "manager@company.com", "Customer Export", "See attached file", "exports/active-customers.csv"
```
### Save Meeting Notes
```basic
' Save notes from a conversation
notes = "# Meeting Notes\n\n"
notes = notes + "**Date:** " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "\n"
notes = notes + "**Participants:** " + participants + "\n\n"
notes = notes + "## Discussion\n\n"
notes = notes + meeting_content + "\n\n"
notes = notes + "## Action Items\n\n"
notes = notes + action_items
filename = "meetings/" + FORMAT(NOW(), "YYYYMMDD") + "-" + meeting_topic + ".md"
WRITE notes TO filename
TALK "Meeting notes saved!"
```
### Create Backup
```basic
' Backup current data
data = GET BOT MEMORY "important_data"
backup_name = "backups/data-" + FORMAT(NOW(), "YYYYMMDD-HHmmss") + ".json"
WRITE JSON_STRINGIFY(data) TO backup_name
TALK "Backup created: " + backup_name
```
### Build Log File
```basic
' Append to daily log
log_line = FORMAT(NOW(), "HH:mm:ss") + " | " + user_id + " | " + action + " | " + details
log_file = "logs/" + FORMAT(NOW(), "YYYY-MM-DD") + ".log"
WRITE log_line + "\n" TO log_file APPEND
```
### Generate HTML Page
```basic
' Create a simple HTML report
html = "<!DOCTYPE html>\n"
html = html + "<html><head><title>Report</title></head>\n"
html = html + "<body>\n"
html = html + "<h1>Daily Summary</h1>\n"
html = html + "<p>Generated: " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "</p>\n"
html = html + "<ul>\n"
FOR EACH item IN summary_items
html = html + "<li>" + item + "</li>\n"
NEXT
html = html + "</ul>\n"
html = html + "</body></html>"
WRITE html TO "reports/daily-summary.html"
```
---
## Writing Different Formats
### Plain Text
```basic
WRITE "Hello, World!" TO "greeting.txt"
```
### Markdown
```basic
doc = "# Title\n\n## Section 1\n\nContent here.\n"
WRITE doc TO "document.md"
```
### JSON
```basic
json_text = '{"name": "Test", "value": 123}'
WRITE json_text TO "data.json"
```
### CSV (Manual)
```basic
csv = "name,email,phone\n"
csv = csv + "Alice,alice@example.com,555-0100\n"
csv = csv + "Bob,bob@example.com,555-0101\n"
WRITE csv TO "contacts.csv"
```
### CSV (From Table)
```basic
' Write query results as CSV
data = FIND "contacts" WHERE active = true
WRITE data TO "contacts.csv" AS TABLE
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
WRITE content TO "protected/file.txt"
IF ERROR THEN
PRINT "Write failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't save the file. Please try again."
ELSE
TALK "File saved successfully!"
END IF
```
---
## File Path Rules
| Path | Description |
|------|-------------|
| `file.txt` | Root of bot's storage |
| `folder/file.txt` | Subdirectory (created if needed) |
| `folder/sub/file.txt` | Nested subdirectory |
| `../file.txt` | **Not allowed** — no parent traversal |
| `/absolute/path` | **Not allowed** — paths are always relative |
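In practice, only relative paths below the storage root are accepted:

```basic
' Valid paths
WRITE report TO "reports/2025/summary.md"
' Invalid paths (will error)
' WRITE report TO "../other-bot/file.txt"  ' Parent traversal blocked
' WRITE report TO "/etc/passwd"            ' Absolute paths blocked
```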
---
## Overwrite vs Append
| Mode | Behavior |
|------|----------|
| Default | Overwrites existing file completely |
| `APPEND` | Adds content to end of existing file |
```basic
' Overwrite (default)
WRITE "New content" TO "file.txt"
' Append
WRITE "Additional content\n" TO "file.txt" APPEND
```
---
## Configuration
Configure storage settings in `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
drive-write-timeout,60
drive-max-file-size,52428800
```
---
## Implementation Notes
- Implemented in Rust under `src/file/mod.rs`
- Automatically creates parent directories
- Uses UTF-8 encoding for text files
- Maximum file size: 50MB (configurable)
- Atomic writes to prevent corruption
- Returns confirmation on success
---
## Related Keywords
- [READ](keyword-read.md) — Load content from files
- [LIST](keyword-list.md) — List files in a directory
- [DELETE FILE](keyword-delete-file.md) — Remove files
- [COPY](keyword-copy.md) — Copy files
- [MOVE](keyword-move.md) — Move or rename files
- [UPLOAD](keyword-upload.md) — Upload files to storage
---
## Summary
`WRITE` is the primary keyword for creating and saving files. Use it to generate reports, export data, create backups, build logs, and persist any content. Combined with `AS TABLE` for CSV exports and `APPEND` for log files, it provides flexible file creation capabilities for any bot workflow.

# Data Operations
This section covers keywords for working with structured data in databases, spreadsheets, and in-memory collections. These keywords enable bots to query, transform, and persist data across various storage backends.
---
## Overview
General Bots provides a complete set of data operation keywords:
| Keyword | Purpose |
|---------|---------|
| [SAVE](keyword-save.md) | Persist data to storage |
| [INSERT](keyword-insert.md) | Add new records to tables |
| [UPDATE](keyword-update.md) | Modify existing records |
| [DELETE](keyword-delete.md) | Remove records from tables |
| [MERGE](keyword-merge.md) | Upsert (insert or update) records |
| [FILL](keyword-fill.md) | Populate templates with data |
| [MAP](keyword-map.md) | Transform collections |
| [FILTER](keyword-filter.md) | Select matching items |
| [AGGREGATE](keyword-aggregate.md) | Sum, count, average operations |
| [JOIN](keyword-join.md) | Combine related datasets |
| [PIVOT](keyword-pivot.md) | Reshape data tables |
| [GROUP BY](keyword-group-by.md) | Group records by field |
---
## Quick Examples
### Database Operations
```basic
' Insert a new record
INSERT INTO "customers" WITH
name = "John Doe",
email = "john@example.com",
created_at = NOW()
' Update existing records
UPDATE "customers" SET status = "active" WHERE email = "john@example.com"
' Delete records
DELETE FROM "customers" WHERE status = "inactive" AND last_login < "2024-01-01"
' Merge (upsert) - insert or update based on key
MERGE INTO "products" ON sku = "SKU-001" WITH
sku = "SKU-001",
name = "Widget",
price = 29.99,
stock = 100
```
### Collection Transformations
```basic
' Map - transform each item
prices = [10, 20, 30, 40]
with_tax = MAP prices WITH item * 1.1
' Result: [11, 22, 33, 44]
' Filter - select matching items
orders = FIND "orders"
large_orders = FILTER orders WHERE total > 100
' Returns only orders with total > 100
' Aggregate - calculate summaries
total_sales = AGGREGATE orders SUM total
order_count = AGGREGATE orders COUNT
avg_order = AGGREGATE orders AVERAGE total
```
### Data Analysis
```basic
' Group by category
sales_by_category = GROUP BY "sales" ON category
FOR EACH group IN sales_by_category
TALK group.category + ": $" + group.total
NEXT
' Join related tables
order_details = JOIN "orders" WITH "customers" ON customer_id = id
FOR EACH detail IN order_details
TALK detail.customer_name + " ordered " + detail.product
NEXT
' Pivot data for reports
monthly_pivot = PIVOT "sales" ROWS month COLUMNS product VALUES SUM(amount)
```
---
## Data Sources
### Supported Backends
| Backend | Use Case | Configuration |
|---------|----------|---------------|
| PostgreSQL | Primary database | `database-url` in config.csv |
| SQLite | Local/embedded | `database-provider,sqlite` |
| In-memory | Temporary data | Default for collections |
| CSV files | Import/export | Via READ/WRITE AS TABLE |
| Excel | Spreadsheet data | Via READ AS TABLE |
### Connection Configuration
```csv
name,value
database-provider,postgres
database-url,postgres://user:pass@localhost/botdb
database-pool-size,10
database-timeout,30
```
### Multiple Connections
```basic
' Use default connection
customers = FIND "customers"
' Use named connection
legacy_data = FIND "orders" ON "legacy_db"
warehouse_stock = FIND "inventory" ON "warehouse_db"
```
---
## Common Patterns
### CRUD Operations
```basic
' CREATE
customer_id = INSERT INTO "customers" WITH
name = customer_name,
email = customer_email,
phone = customer_phone
TALK "Customer created with ID: " + customer_id
' READ
customer = FIND "customers" WHERE id = customer_id
TALK "Found: " + customer.name
' UPDATE
UPDATE "customers" SET
last_contact = NOW(),
contact_count = contact_count + 1
WHERE id = customer_id
' DELETE
DELETE FROM "customers" WHERE id = customer_id AND confirmed = true
```
### Batch Operations
```basic
' Insert multiple records from data source
new_orders = READ "imports/orders.csv" AS TABLE
FOR EACH order IN new_orders
INSERT INTO "orders" WITH
product = order.product,
quantity = order.quantity,
price = order.price
NEXT
' Bulk update
UPDATE "products" SET on_sale = true WHERE category = "electronics"
```
### Data Transformation Pipeline
```basic
' Load raw data
raw_sales = READ "imports/sales-data.csv" AS TABLE
' Clean and transform
cleaned = FILTER raw_sales WHERE amount > 0 AND date IS NOT NULL
' Enrich with calculations
enriched = MAP cleaned WITH
tax = item.amount * 0.1,
total = item.amount * 1.1,
quarter = QUARTER(item.date)
' Aggregate for reporting
quarterly_totals = GROUP BY enriched ON quarter
summary = AGGREGATE quarterly_totals SUM total
' Save results
WRITE summary TO "reports/quarterly-summary.csv" AS TABLE
INSERT INTO "sales_reports" VALUES summary
```
### Lookup and Reference
```basic
' Simple lookup
product = FIND "products" WHERE sku = user_sku
IF product THEN
TALK "Price: $" + product.price
ELSE
TALK "Product not found"
END IF
' Lookup with join
order_with_customer = FIND "orders"
JOIN "customers" ON orders.customer_id = customers.id
WHERE orders.id = order_id
TALK "Order for " + order_with_customer.customer_name
```
---
## Query Syntax
### WHERE Clauses
```basic
' Equality
FIND "users" WHERE status = "active"
' Comparison
FIND "orders" WHERE total > 100
FIND "products" WHERE stock <= 10
' Multiple conditions
FIND "customers" WHERE country = "US" AND created_at > "2024-01-01"
FIND "items" WHERE category = "electronics" OR category = "accessories"
' NULL checks
FIND "leads" WHERE assigned_to IS NULL
FIND "orders" WHERE shipped_at IS NOT NULL
' Pattern matching
FIND "products" WHERE name LIKE "%widget%"
' IN lists
FIND "orders" WHERE status IN ["pending", "processing", "shipped"]
```
### ORDER BY
```basic
' Single column sort
FIND "products" ORDER BY price ASC
' Multiple column sort
FIND "orders" ORDER BY priority DESC, created_at ASC
' With limit
recent_orders = FIND "orders" ORDER BY created_at DESC LIMIT 10
```
### Aggregations
```basic
' Count records
total_customers = AGGREGATE "customers" COUNT
' Sum values
total_revenue = AGGREGATE "orders" SUM total
' Average
avg_order_value = AGGREGATE "orders" AVERAGE total
' Min/Max
cheapest = AGGREGATE "products" MIN price
most_expensive = AGGREGATE "products" MAX price
' With grouping
sales_by_region = AGGREGATE "sales" SUM amount GROUP BY region
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
result = INSERT INTO "orders" VALUES order_data
IF ERROR THEN
PRINT "Database error: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "duplicate") > 0 THEN
TALK "This order already exists."
ELSE IF INSTR(ERROR_MESSAGE, "constraint") > 0 THEN
TALK "Invalid data. Please check all fields."
ELSE
TALK "Sorry, I couldn't save your order. Please try again."
END IF
ELSE
TALK "Order saved successfully!"
END IF
```
### Transaction Handling
```basic
ON ERROR RESUME NEXT
' Start transaction
BEGIN TRANSACTION
' Multiple operations
INSERT INTO "orders" VALUES order_data
UPDATE "inventory" SET stock = stock - quantity WHERE product_id = product_id
INSERT INTO "order_items" VALUES items
' Commit if all succeeded
IF NOT ERROR THEN
COMMIT
TALK "Order completed!"
ELSE
ROLLBACK
TALK "Order failed. All changes reverted."
END IF
```
---
## Performance Tips
### Use Indexes
Ensure database tables have appropriate indexes for frequently queried columns:
```sql
-- In database setup
CREATE INDEX idx_orders_customer ON orders(customer_id);
CREATE INDEX idx_orders_date ON orders(created_at);
CREATE INDEX idx_products_sku ON products(sku);
```
### Limit Results
```basic
' Avoid loading entire tables
' Bad:
all_orders = FIND "orders"
' Good:
recent_orders = FIND "orders" WHERE created_at > date_limit LIMIT 100
```
### Batch Operations
```basic
' Process large datasets in batches
page = 0
batch_size = 100
WHILE true
batch = FIND "records" LIMIT batch_size OFFSET page * batch_size
IF LEN(batch) = 0 THEN
EXIT WHILE
END IF
FOR EACH record IN batch
' Process record
NEXT
page = page + 1
WEND
```
---
## Configuration
Configure data operations in `config.csv`:
```csv
name,value
database-provider,postgres
database-url,postgres://localhost/botdb
database-pool-size,10
database-timeout,30
database-log-queries,false
database-max-rows,10000
```
---
## Security Considerations
1. **Parameterized queries** — All keywords use parameterized queries to prevent SQL injection
2. **Row limits** — Default limit on returned rows prevents memory exhaustion
3. **Access control** — Bots can only access their own data by default
4. **Audit logging** — All data modifications logged for compliance
5. **Encryption** — Sensitive data encrypted at rest
---
## See Also
- [SAVE](keyword-save.md) — Persist data
- [INSERT](keyword-insert.md) — Add records
- [UPDATE](keyword-update.md) — Modify records
- [DELETE](keyword-delete.md) — Remove records
- [MERGE](keyword-merge.md) — Upsert operations
- [FILL](keyword-fill.md) — Template population
- [MAP](keyword-map.md) — Transform collections
- [FILTER](keyword-filter.md) — Select items
- [AGGREGATE](keyword-aggregate.md) — Summaries
- [JOIN](keyword-join.md) — Combine datasets
- [PIVOT](keyword-pivot.md) — Reshape data
- [GROUP BY](keyword-group-by.md) — Group records
- [TABLE](keyword-table.md) — Create tables

# File Operations
This section covers keywords for working with files in the bot's storage system. These keywords enable bots to read, write, copy, move, and manage files stored in the bot's drive bucket.
---
## Overview
General Bots provides a complete set of file operation keywords:
| Keyword | Purpose |
|---------|---------|
| [READ](keyword-read.md) | Load content from files |
| [WRITE](keyword-write.md) | Save content to files |
| [DELETE FILE](keyword-delete-file.md) | Remove files |
| [COPY](keyword-copy.md) | Copy files within storage |
| [MOVE](keyword-move.md) | Move or rename files |
| [LIST](keyword-list.md) | List files in a directory |
| [COMPRESS](keyword-compress.md) | Create ZIP archives |
| [EXTRACT](keyword-extract.md) | Extract archive contents |
| [UPLOAD](keyword-upload.md) | Upload files from URLs or users |
| [DOWNLOAD](keyword-download.md) | Send files to users |
| [GENERATE PDF](keyword-generate-pdf.md) | Create PDF documents |
| [MERGE PDF](keyword-merge-pdf.md) | Combine multiple PDFs |
---
## Quick Examples
### Basic File Operations
```basic
' Read a file
content = READ "documents/report.txt"
TALK content
' Write to a file
WRITE "Hello, World!" TO "greeting.txt"
' Append to a file
WRITE "New line\n" TO "log.txt" APPEND
' Delete a file
DELETE FILE "temp/old-file.txt"
' Copy a file
COPY "templates/form.docx" TO "user-forms/form-copy.docx"
' Move/rename a file
MOVE "inbox/message.txt" TO "archive/message.txt"
' List files in a directory
files = LIST "documents/"
FOR EACH file IN files
TALK file.name + " (" + file.size + " bytes)"
NEXT
```
### Working with CSV Data
```basic
' Read CSV as structured data
customers = READ "data/customers.csv" AS TABLE
FOR EACH customer IN customers
TALK customer.name + ": " + customer.email
NEXT
' Write data as CSV from database query
orders = FIND "orders" WHERE status = "pending" LIMIT 100
WRITE orders TO "exports/orders.csv" AS TABLE
```
### File Upload and Download
```basic
' Accept file from user
TALK "Please send me a document."
HEAR user_file
result = UPLOAD user_file TO "uploads/" + user.id
TALK "File saved: " + result.filename
' Send file to user
DOWNLOAD "reports/summary.pdf" AS "Monthly Summary.pdf"
TALK "Here's your report!"
```
### PDF Operations
```basic
' Generate PDF from template
GENERATE PDF "templates/invoice.html" TO "invoices/inv-001.pdf" WITH
customer = "John Doe",
amount = 150.00,
date = FORMAT(NOW(), "YYYY-MM-DD")
' Merge multiple PDFs
MERGE PDF ["cover.pdf", "report.pdf", "appendix.pdf"] TO "complete-report.pdf"
```
### Archive Operations
```basic
' Create a ZIP archive
COMPRESS ["doc1.pdf", "doc2.pdf", "images/"] TO "package.zip"
' Extract archive contents
EXTRACT "uploaded.zip" TO "extracted/"
```
---
## Storage Structure
Files are stored in the bot's drive bucket with the following structure:
```
bot-name/
├── documents/
├── templates/
├── exports/
├── uploads/
│ └── user-123/
├── reports/
├── temp/
└── archives/
```
### Path Rules
| Path | Description |
|------|-------------|
| `file.txt` | Root of bot's storage |
| `folder/file.txt` | Subdirectory |
| `folder/sub/file.txt` | Nested subdirectory |
| `../file.txt` | **Not allowed** — no parent traversal |
| `/absolute/path` | **Not allowed** — paths are always relative |
```basic
' Valid paths
content = READ "documents/report.pdf"
WRITE data TO "exports/2025/january.csv"
' Invalid paths (will error)
' READ "../other-bot/file.txt" ' Parent traversal blocked
' READ "/etc/passwd" ' Absolute paths blocked
```
---
## Supported File Types
### Text Files
| Extension | Description |
|-----------|-------------|
| `.txt` | Plain text |
| `.md` | Markdown |
| `.json` | JSON data |
| `.csv` | Comma-separated values |
| `.xml` | XML data |
| `.html` | HTML documents |
| `.yaml` | YAML configuration |
### Documents
| Extension | Description | Auto-Extract |
|-----------|-------------|--------------|
| `.pdf` | PDF documents | ✓ Text extracted |
| `.docx` | Word documents | ✓ Text extracted |
| `.xlsx` | Excel spreadsheets | ✓ As table data |
| `.pptx` | PowerPoint | ✓ Text from slides |
### Media
| Extension | Description |
|-----------|-------------|
| `.jpg`, `.png`, `.gif` | Images |
| `.mp3`, `.wav` | Audio |
| `.mp4`, `.mov` | Video |
### Archives
| Extension | Description |
|-----------|-------------|
| `.zip` | ZIP archives |
| `.tar.gz` | Compressed tarballs |
---
## Common Patterns
### Template Processing
```basic
' Load template and fill placeholders
template = READ "templates/welcome-email.html"
email_body = REPLACE(template, "{{name}}", customer.name)
email_body = REPLACE(email_body, "{{date}}", FORMAT(NOW(), "MMMM DD, YYYY"))
email_body = REPLACE(email_body, "{{order_id}}", order.id)
SEND MAIL customer.email, "Welcome!", email_body
```
### Data Export
```basic
' Export query results to CSV
results = FIND "orders" WHERE status = "completed" AND date > "2025-01-01"
WRITE results TO "exports/completed-orders.csv" AS TABLE
' Generate download link
link = DOWNLOAD "exports/completed-orders.csv" AS LINK
TALK "Download your export: " + link
```
### Backup and Archive
```basic
' Create dated backup
backup_name = "backups/data-" + FORMAT(NOW(), "YYYYMMDD") + ".json"
data = GET BOT MEMORY "important_data"
WRITE JSON_STRINGIFY(data) TO backup_name
' Archive old files
old_files = LIST "reports/2024/"
COMPRESS old_files TO "archives/reports-2024.zip"
' Clean up originals
FOR EACH file IN old_files
DELETE FILE file.path
NEXT
```
### File Validation
```basic
' Check file exists before processing
files = LIST "uploads/" + user.id + "/"
document_found = false
FOR EACH file IN files
IF file.name = expected_filename THEN
document_found = true
EXIT FOR
END IF
NEXT
IF document_found THEN
content = READ "uploads/" + user.id + "/" + expected_filename
' Process content...
ELSE
TALK "I couldn't find that document. Please upload it again."
END IF
```
### Organize Uploads
```basic
' Organize uploaded files by type
HEAR uploaded_file
file_type = uploaded_file.mime_type
IF INSTR(file_type, "image") > 0 THEN
folder = "images"
ELSE IF INSTR(file_type, "pdf") > 0 THEN
folder = "documents"
ELSE IF INSTR(file_type, "spreadsheet") > 0 OR INSTR(file_type, "excel") > 0 THEN
folder = "spreadsheets"
ELSE
folder = "other"
END IF
result = UPLOAD uploaded_file TO folder + "/" + FORMAT(NOW(), "YYYY/MM")
TALK "File saved to " + folder + "!"
```
---
## Error Handling
```basic
ON ERROR RESUME NEXT
content = READ "documents/important.pdf"
IF ERROR THEN
PRINT "File error: " + ERROR_MESSAGE
TALK "Sorry, I couldn't access that file. It may have been moved or deleted."
ELSE
TALK "File loaded successfully!"
' Process content...
END IF
```
### Common Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `FILE_NOT_FOUND` | File doesn't exist | Check path, list directory first |
| `PERMISSION_DENIED` | Access blocked | Check file permissions |
| `PATH_TRAVERSAL` | Invalid path with `..` | Use only relative paths |
| `FILE_TOO_LARGE` | Exceeds size limit | Increase limit or split file |
| `INVALID_FORMAT` | Unsupported file type | Convert or use different format |
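A sketch of distinguishing these cases; whether the error code appears verbatim in `ERROR_MESSAGE` is an assumption, so adjust the match strings for your installation.

```basic
ON ERROR RESUME NEXT
content = READ "uploads/" + user.id + "/document.pdf"
IF ERROR THEN
    IF INSTR(ERROR_MESSAGE, "FILE_NOT_FOUND") > 0 THEN
        TALK "I couldn't find that file. Please upload it again."
    ELSE IF INSTR(ERROR_MESSAGE, "FILE_TOO_LARGE") > 0 THEN
        TALK "That file is too large for me to read."
    ELSE
        TALK "Sorry, something went wrong accessing the file."
    END IF
END IF
```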
---
## Configuration
Configure file operations in `config.csv`:
```csv
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
drive-read-timeout,30
drive-write-timeout,60
drive-max-file-size,52428800
drive-allowed-extensions,pdf,docx,xlsx,jpg,png,csv,json
```
---
## Size Limits
| Operation | Default Limit | Configurable |
|-----------|---------------|--------------|
| Read file | 50 MB | Yes |
| Write file | 50 MB | Yes |
| Upload file | 50 MB | Yes |
| Total storage | 10 GB per bot | Yes |
| Files per directory | 10,000 | Yes |
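To stay under the read limit, check sizes first; this sketch assumes `LIST` returns a `size` property in bytes, as in the examples above.

```basic
files = LIST "reports/"
FOR EACH file IN files
    IF file.size > 52428800 THEN
        TALK "Skipping " + file.name + ": over the 50 MB read limit"
    ELSE
        content = READ "reports/" + file.name
        ' Process content...
    END IF
NEXT
```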
---
## Security Considerations
1. **Path validation** — All paths are sanitized to prevent directory traversal
2. **File type restrictions** — Executable files blocked by default
3. **Size limits** — Prevents storage exhaustion attacks
4. **Access control** — Files isolated per bot
5. **Malware scanning** — Uploaded files scanned before storage
---
## See Also
- [READ](keyword-read.md) — Load file content
- [WRITE](keyword-write.md) — Save content to files
- [DELETE FILE](keyword-delete-file.md) — Remove files
- [COPY](keyword-copy.md) — Copy files
- [MOVE](keyword-move.md) — Move/rename files
- [LIST](keyword-list.md) — List directory contents
- [COMPRESS](keyword-compress.md) — Create archives
- [EXTRACT](keyword-extract.md) — Extract archives
- [UPLOAD](keyword-upload.md) — Upload files
- [DOWNLOAD](keyword-download.md) — Send files to users
- [GENERATE PDF](keyword-generate-pdf.md) — Create PDFs
- [MERGE PDF](keyword-merge-pdf.md) — Combine PDFs

# HTTP & API Operations
This section covers keywords for making HTTP requests and integrating with external APIs. These keywords enable bots to communicate with REST APIs, GraphQL endpoints, SOAP services, and any HTTP-based web service.
---
## Overview
General Bots provides a complete set of HTTP keywords for API integration:
| Keyword | HTTP Method | Purpose |
|---------|-------------|---------|
| [GET](keyword-get.md) | GET | Retrieve data from URLs or files |
| [POST](keyword-post.md) | POST | Create resources, submit data |
| [PUT](keyword-put.md) | PUT | Replace/update entire resources |
| [PATCH](keyword-patch.md) | PATCH | Partial resource updates |
| [DELETE](keyword-delete.md) | DELETE | Remove resources (unified DELETE auto-detects URLs) |
| [SET HEADER](keyword-set-header.md) | — | Set request headers |
| [GRAPHQL](keyword-graphql.md) | POST | GraphQL queries and mutations |
| [SOAP](keyword-soap.md) | POST | SOAP/XML web services |
---
## Quick Examples
### REST API Call
```basic
' GET request
data = GET "https://api.example.com/users/123"
TALK "User name: " + data.name
' POST request
result = POST "https://api.example.com/users" WITH
name = "John",
email = "john@example.com"
TALK "Created user ID: " + result.id
' PUT request (full update)
PUT "https://api.example.com/users/123" WITH
name = "John Doe",
email = "johndoe@example.com",
status = "active"
' PATCH request (partial update)
PATCH "https://api.example.com/users/123" WITH status = "inactive"
' DELETE request (unified DELETE auto-detects the URL)
DELETE "https://api.example.com/users/123"
```
### With Authentication
```basic
' Set authorization header
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "Content-Type", "application/json"
' Make authenticated request
result = GET "https://api.example.com/protected/resource"
' Clear headers when done
SET HEADER "Authorization", ""
```
### GraphQL Query
```basic
query = '
query GetUser($id: ID!) {
user(id: $id) {
name
email
orders { id total }
}
}
'
result = GRAPHQL "https://api.example.com/graphql", query WITH id = "123"
TALK "User: " + result.data.user.name
```
### SOAP Service
```basic
' Call a SOAP web service
request = '
<GetWeather xmlns="http://weather.example.com">
<City>New York</City>
</GetWeather>
'
result = SOAP "https://weather.example.com/service", "GetWeather", request
TALK "Temperature: " + result.Temperature
```
---
## Common Patterns
### API Client Setup
```basic
' Configure API base URL and authentication
api_base = "https://api.myservice.com/v1"
api_token = GET BOT MEMORY "api_token"
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "X-API-Version", "2025-01"
' Helper function pattern
' GET users
users = GET api_base + "/users"
' GET specific user
user = GET api_base + "/users/" + user_id
' CREATE user
new_user = POST api_base + "/users", user_data
' UPDATE user
PUT api_base + "/users/" + user_id, updated_data
' DELETE user
DELETE api_base + "/users/" + user_id
```
### Error Handling
```basic
ON ERROR RESUME NEXT
result = POST "https://api.example.com/orders", order_data
IF ERROR THEN
PRINT "API Error: " + ERROR_MESSAGE
TALK "Sorry, I couldn't process your order. Please try again."
ELSE IF result.error THEN
TALK "Order failed: " + result.error.message
ELSE
TALK "Order placed! ID: " + result.id
END IF
```
### Retry Logic
```basic
max_retries = 3
retry_count = 0
success = false
WHILE retry_count < max_retries AND NOT success
ON ERROR RESUME NEXT
result = POST api_url, data
IF NOT ERROR AND NOT result.error THEN
success = true
ELSE
retry_count = retry_count + 1
WAIT 2 ' Wait 2 seconds before retry
END IF
WEND
IF success THEN
TALK "Request successful!"
ELSE
TALK "Request failed after " + max_retries + " attempts."
END IF
```
### Pagination
```basic
' Fetch all pages of results
all_items = []
page = 1
has_more = true
WHILE has_more
result = GET api_base + "/items?page=" + page + "&limit=100"
FOR EACH item IN result.items
all_items = APPEND(all_items, item)
NEXT
has_more = result.has_more
page = page + 1
WEND
TALK "Fetched " + LEN(all_items) + " total items"
```
---
## Request Headers
Common headers you might need to set:
| Header | Purpose | Example |
|--------|---------|---------|
| `Authorization` | API authentication | `Bearer token123` |
| `Content-Type` | Request body format | `application/json` |
| `Accept` | Response format preference | `application/json` |
| `X-API-Key` | API key authentication | `key_abc123` |
| `X-Request-ID` | Request tracking | `req-uuid-here` |
```basic
SET HEADER "Authorization", "Bearer " + token
SET HEADER "Content-Type", "application/json"
SET HEADER "Accept", "application/json"
SET HEADER "X-Request-ID", GUID()
```
---
## Response Handling
### JSON Responses
Most APIs return JSON, which is parsed automatically into an object you can access by property:
```basic
result = GET "https://api.example.com/user"
' Access properties directly
TALK "Name: " + result.name
TALK "Email: " + result.email
' Access nested objects
TALK "City: " + result.address.city
' Access arrays
FOR EACH order IN result.orders
TALK "Order: " + order.id
NEXT
```
### Check Response Status
```basic
result = POST api_url, data
IF result.status = 201 THEN
TALK "Resource created!"
ELSE IF result.status = 400 THEN
TALK "Bad request: " + result.error.message
ELSE IF result.status = 401 THEN
TALK "Authentication failed. Please log in again."
ELSE IF result.status = 404 THEN
TALK "Resource not found."
ELSE IF result.status >= 500 THEN
TALK "Server error. Please try again later."
END IF
```
---
## Configuration
Configure HTTP settings in `config.csv`:
```csv
name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000
http-base-url,https://api.mycompany.com
http-user-agent,GeneralBots/1.0
http-max-redirects,10
http-verify-ssl,true
```
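Assuming the settings above are active, a request that exceeds `http-timeout` surfaces as a normal error instead of hanging the dialog, so it can be caught with `ON ERROR RESUME NEXT` (the endpoint URL below is illustrative):

```basic
' With http-timeout set to 30, a slow endpoint raises an error
' instead of blocking the conversation indefinitely.
ON ERROR RESUME NEXT
result = GET "https://api.example.com/slow-report" ' illustrative URL
IF ERROR THEN
    TALK "The report service is taking too long. Please try again later."
ELSE
    TALK "Report ready: " + result.title
END IF
```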
---
## Security Best Practices
1. **Store credentials securely** — Use Vault or environment variables for API keys
2. **Use HTTPS** — Never send credentials over unencrypted connections
3. **Validate responses** — Check status codes and handle errors
4. **Set timeouts** — Prevent hanging on slow APIs
5. **Rate limit** — Respect API rate limits to avoid being blocked
6. **Log requests** — Enable logging for debugging without exposing secrets
```basic
' Good: Token from secure storage
token = GET BOT MEMORY "api_token"
SET HEADER "Authorization", "Bearer " + token
' Bad: Hardcoded token
' SET HEADER "Authorization", "Bearer sk-abc123" ' NEVER DO THIS
```
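Rate limiting (best practice 5) can be approximated client-side with `WAIT` between calls. A sketch, assuming `user_ids` holds the IDs to fetch and the API allows roughly one request per second:

```basic
' Client-side rate limiting: pause between requests in a batch
FOR EACH id IN user_ids
    ON ERROR RESUME NEXT
    result = GET api_base + "/users/" + id
    IF NOT ERROR THEN
        PRINT "Fetched user: " + result.name
    END IF
    WAIT 1 ' stay under roughly one request per second
NEXT
```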
---
## See Also
- [GET](keyword-get.md) — Retrieve data
- [POST](keyword-post.md) — Create resources
- [PUT](keyword-put.md) — Update resources
- [PATCH](keyword-patch.md) — Partial updates
- [DELETE](keyword-delete.md) — Delete resources (HTTP/DB/file auto-detect)
- [SET HEADER](keyword-set-header.md) — Set request headers
- [GRAPHQL](keyword-graphql.md) — GraphQL operations
- [SOAP](keyword-soap.md) — SOAP web services


@ -401,7 +401,8 @@ VAULT_TOKEN=hvs.your-token-here
# Directory for user auth (Zitadel)
DIRECTORY_URL=https://localhost:8080
DIRECTORY_PROJECT_ID=your-project-id
DIRECTORY_CLIENT_ID=your-client-id
DIRECTORY_CLIENT_SECRET=your-client-secret
# All other secrets fetched from Vault at runtime
```


@ -246,7 +246,8 @@ If you're currently using environment variables:
# .env - TOO MANY SECRETS!
DATABASE_URL=postgres://user:password@localhost/db
DIRECTORY_URL=https://localhost:8080
DIRECTORY_PROJECT_ID=12345
DIRECTORY_CLIENT_ID=your-client-id
DIRECTORY_CLIENT_SECRET=your-client-secret
REDIS_PASSWORD=redis-secret
OPENAI_API_KEY=sk-xxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxx