Add KB Statistics keywords and infrastructure documentation

- Add KB Statistics keywords for Qdrant vector database monitoring: KB
  STATISTICS, KB COLLECTION STATS, KB DOCUMENTS COUNT, KB DOCUMENTS
  ADDED SINCE, KB LIST COLLECTIONS, KB STORAGE SIZE

- Add comprehensive infrastructure documentation:
  - Scaling and load balancing with LXC containers
  - Infrastructure design with encryption, sharding strategies
  - Observability
Rodrigo Rodriguez (Pragmatismo) 2025-11-30 16:25:51 -03:00
parent 48c1ae0b51
commit 5d21bba1e1
70 changed files with 6988 additions and 2541 deletions


@@ -92,6 +92,13 @@
- [SWITCH](./chapter-06-gbdialog/keyword-switch.md)
- [WEBHOOK](./chapter-06-gbdialog/keyword-webhook.md)
- [TABLE](./chapter-06-gbdialog/keyword-table.md)
- [KB Statistics Keywords](./chapter-06-gbdialog/keywords-kb-statistics.md)
- [KB STATISTICS](./chapter-06-gbdialog/keyword-kb-statistics.md)
- [KB COLLECTION STATS](./chapter-06-gbdialog/keyword-kb-collection-stats.md)
- [KB DOCUMENTS COUNT](./chapter-06-gbdialog/keyword-kb-documents-count.md)
- [KB DOCUMENTS ADDED SINCE](./chapter-06-gbdialog/keyword-kb-documents-added-since.md)
- [KB LIST COLLECTIONS](./chapter-06-gbdialog/keyword-kb-list-collections.md)
- [KB STORAGE SIZE](./chapter-06-gbdialog/keyword-kb-storage-size.md)
- [HTTP & API Operations](./chapter-06-gbdialog/keywords-http.md)
- [POST](./chapter-06-gbdialog/keyword-post.md)
- [PUT](./chapter-06-gbdialog/keyword-put.md)
@@ -133,6 +140,9 @@
- [Architecture Overview](./chapter-07-gbapp/architecture.md)
- [Building from Source](./chapter-07-gbapp/building.md)
- [Container Deployment (LXC)](./chapter-07-gbapp/containers.md)
- [Scaling and Load Balancing](./chapter-07-gbapp/scaling.md)
- [Infrastructure Design](./chapter-07-gbapp/infrastructure.md)
- [Observability](./chapter-07-gbapp/observability.md)
- [Philosophy](./chapter-07-gbapp/philosophy.md)
- [Example gbapp](./chapter-07-gbapp/example-gbapp.md)
- [Module Structure](./chapter-07-gbapp/crates.md)
@@ -148,6 +158,7 @@
- [LLM Configuration](./chapter-08-config/llm-config.md)
- [Context Configuration](./chapter-08-config/context-config.md)
- [Drive Integration](./chapter-08-config/drive.md)
- [Secrets Management](./chapter-08-config/secrets-management.md)
# Part IX - Tools and Integration


@@ -1 +1,437 @@
# LLM Providers
General Bots supports multiple Large Language Model (LLM) providers, both cloud-based services and local deployments. This guide helps you choose the right provider for your use case.
## Overview
LLMs are the intelligence behind General Bots' conversational capabilities. You can configure:
- **Cloud Providers** - External APIs (OpenAI, Anthropic, Groq, etc.)
- **Local Models** - Self-hosted models via llama.cpp
- **Hybrid** - Use local for simple tasks, cloud for complex reasoning
## Cloud Providers
### OpenAI (GPT Series)
The most widely known LLM provider, offering GPT-4 and GPT-4o models.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| GPT-4o | 128K | General purpose, vision | Fast |
| GPT-4o-mini | 128K | Cost-effective tasks | Very Fast |
| GPT-4 Turbo | 128K | Complex reasoning | Medium |
| o1-preview | 128K | Advanced reasoning, math | Slow |
| o1-mini | 128K | Code, logic tasks | Medium |
**Configuration:**
```csv
llm-provider,openai
llm-api-key,sk-xxxxxxxxxxxxxxxxxxxxxxxx
llm-model,gpt-4o
```
**Strengths:**
- Excellent general knowledge
- Strong code generation
- Good instruction following
- Vision capabilities (GPT-4o)
**Considerations:**
- API costs can add up
- Data sent to external servers
- Rate limits apply
### Anthropic (Claude Series)
Known for safety, helpfulness, and large context windows.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Claude 3.5 Sonnet | 200K | Best balance of capability/speed | Fast |
| Claude 3.5 Haiku | 200K | Quick, everyday tasks | Very Fast |
| Claude 3 Opus | 200K | Most capable, complex tasks | Slow |
**Configuration:**
```csv
llm-provider,anthropic
llm-api-key,sk-ant-xxxxxxxxxxxxxxxx
llm-model,claude-3-5-sonnet-20241022
```
**Strengths:**
- Very large context window (200K tokens)
- Excellent at following complex instructions
- Strong coding abilities
- Better at refusing harmful requests
**Considerations:**
- Premium pricing
- Vision support not available in all models
- Newer provider, smaller ecosystem
### Groq
Ultra-fast inference using custom LPU hardware. Offers open-source models at high speed.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Llama 3.3 70B | 128K | Complex reasoning | Very Fast |
| Llama 3.1 8B | 128K | Quick responses | Extremely Fast |
| Mixtral 8x7B | 32K | Balanced performance | Very Fast |
| Gemma 2 9B | 8K | Lightweight tasks | Extremely Fast |
**Configuration:**
```csv
llm-provider,groq
llm-api-key,gsk_xxxxxxxxxxxxxxxx
llm-model,llama-3.3-70b-versatile
```
**Strengths:**
- Fastest inference speeds (500+ tokens/sec)
- Competitive pricing
- Open-source models
- Great for real-time applications
**Considerations:**
- Limited model selection
- Rate limits on free tier
- Models may be less capable than GPT-4/Claude
### Google (Gemini Series)
Google's multimodal AI models with strong reasoning capabilities.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Gemini 1.5 Pro | 2M | Extremely long documents | Medium |
| Gemini 1.5 Flash | 1M | Fast multimodal | Fast |
| Gemini 2.0 Flash | 1M | Latest capabilities | Fast |
**Configuration:**
```csv
llm-provider,google
llm-api-key,AIzaxxxxxxxxxxxxxxxx
llm-model,gemini-1.5-pro
```
**Strengths:**
- Largest context window (2M tokens)
- Native multimodal (text, image, video, audio)
- Strong at structured data
- Good coding abilities
**Considerations:**
- Newer ecosystem
- Some features region-limited
- API changes more frequently
### Mistral AI
European AI company offering efficient, open-weight models.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| Mistral Large | 128K | Complex tasks | Medium |
| Mistral Medium | 32K | Balanced performance | Fast |
| Mistral Small | 32K | Cost-effective | Very Fast |
| Codestral | 32K | Code generation | Fast |
**Configuration:**
```csv
llm-provider,mistral
llm-api-key,xxxxxxxxxxxxxxxx
llm-model,mistral-large-latest
```
**Strengths:**
- European data sovereignty (GDPR)
- Excellent code generation (Codestral)
- Open-weight models available
- Competitive pricing
**Considerations:**
- Smaller context than competitors
- Less brand recognition
- Fewer fine-tuning options
### DeepSeek
Chinese AI company known for efficient, capable models.
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| DeepSeek-V3 | 128K | General purpose | Fast |
| DeepSeek-R1 | 128K | Reasoning, math | Medium |
| DeepSeek-Coder | 128K | Programming | Fast |
**Configuration:**
```csv
llm-provider,deepseek
llm-api-key,sk-xxxxxxxxxxxxxxxx
llm-model,deepseek-chat
llm-server-url,https://api.deepseek.com
```
**Strengths:**
- Extremely cost-effective
- Strong reasoning (R1 model)
- Excellent code generation
- Open-weight versions available
**Considerations:**
- Data processed in China
- Newer provider
- May have content restrictions
## Local Models
Run models on your own hardware for privacy, cost control, and offline operation.
### Setting Up Local LLM
General Bots uses **llama.cpp** server for local inference:
```csv
llm-provider,local
llm-server-url,https://localhost:8081
llm-model,DeepSeek-R1-Distill-Qwen-1.5B
```
### Recommended Local Models
#### For High-End GPU (24GB+ VRAM)
| Model | Size | VRAM | Quality |
|-------|------|------|---------|
| GPT-OSS 120B Q4 | 70GB | 48GB+ | Excellent |
| Llama 3.1 70B Q4 | 40GB | 48GB+ | Excellent |
| DeepSeek-R1 32B Q4 | 20GB | 24GB | Very Good |
| Qwen 2.5 72B Q4 | 42GB | 48GB+ | Excellent |
#### For Mid-Range GPU (12-16GB VRAM)
| Model | Size | VRAM | Quality |
|-------|------|------|---------|
| GPT-OSS 20B F16 | 40GB | 16GB | Very Good |
| Llama 3.1 8B Q8 | 9GB | 12GB | Good |
| DeepSeek-R1-Distill 14B Q4 | 8GB | 12GB | Good |
| Mistral Nemo 12B Q4 | 7GB | 10GB | Good |
#### For Small GPU or CPU (8GB VRAM or less)
| Model | Size | VRAM | Quality |
|-------|------|------|---------|
| DeepSeek-R1-Distill 1.5B Q4 | 1GB | 4GB | Basic |
| Phi-3 Mini 3.8B Q4 | 2.5GB | 6GB | Acceptable |
| Gemma 2 2B Q8 | 3GB | 6GB | Acceptable |
| Qwen 2.5 3B Q4 | 2GB | 4GB | Basic |
### Model Download URLs
Add models to `installer.rs` data_download_list:
```rust
// GPT-OSS 20B - Recommended for small GPU
"https://huggingface.co/unsloth/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-F16.gguf"
// DeepSeek R1 Distill - For CPU or minimal GPU
"https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf"
// Llama 3.1 8B - Good balance
"https://huggingface.co/bartowski/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf"
```
### Embedding Models
For vector search, you need an embedding model:
```csv
embedding-provider,local
embedding-server-url,https://localhost:8082
embedding-model,bge-small-en-v1.5
```
Recommended embedding models:
| Model | Dimensions | Size | Quality |
|-------|------------|------|---------|
| bge-small-en-v1.5 | 384 | 130MB | Good |
| bge-base-en-v1.5 | 768 | 440MB | Better |
| bge-large-en-v1.5 | 1024 | 1.3GB | Best |
| nomic-embed-text | 768 | 550MB | Good |
## Hybrid Configuration
Use different models for different tasks:
```csv
# Primary model for complex conversations
llm-provider,anthropic
llm-model,claude-3-5-sonnet-20241022
# Fast model for simple tasks
llm-fast-provider,groq
llm-fast-model,llama-3.1-8b-instant
# Local fallback for offline operation
llm-fallback-provider,local
llm-fallback-model,DeepSeek-R1-Distill-Qwen-1.5B
# Embeddings always local
embedding-provider,local
embedding-model,bge-small-en-v1.5
```
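To illustrate the idea behind this configuration, here is a minimal sketch (in Rust, with hypothetical names, not the actual General Bots routing code) of how a tiered choice between the fast, primary, and local fallback models might be decided:

```rust
// Illustrative sketch of tiered model routing; names are hypothetical.
#[derive(Debug, PartialEq)]
enum Tier {
    Fast,     // e.g. Groq Llama 3.1 8B for simple tasks
    Primary,  // e.g. Claude 3.5 Sonnet for complex conversations
    Fallback, // local model for offline operation
}

/// Short, simple prompts go to the fast model, everything else to the
/// primary; losing connectivity forces the local fallback.
fn pick_tier(prompt_len: usize, online: bool) -> Tier {
    if !online {
        Tier::Fallback
    } else if prompt_len < 200 {
        Tier::Fast
    } else {
        Tier::Primary
    }
}

fn main() {
    assert_eq!(pick_tier(50, true), Tier::Fast);
    assert_eq!(pick_tier(5000, true), Tier::Primary);
    assert_eq!(pick_tier(50, false), Tier::Fallback);
    println!("routing ok");
}
```

The 200-character cutoff is an arbitrary placeholder; real routing would likely consider task type and conversation history as well.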
## Model Selection Guide
### By Use Case
| Use Case | Recommended | Why |
|----------|-------------|-----|
| Customer support | Claude 3.5 Sonnet | Best at following guidelines |
| Code generation | DeepSeek-Coder, GPT-4o | Specialized for code |
| Document analysis | Gemini 1.5 Pro | 2M context window |
| Real-time chat | Groq Llama 3.1 8B | Fastest responses |
| Privacy-sensitive | Local DeepSeek-R1 | No external data transfer |
| Cost-sensitive | DeepSeek-V3, Local | Lowest cost per token |
| Complex reasoning | Claude 3 Opus, o1 | Best reasoning ability |
### By Budget
| Budget | Recommended Setup |
|--------|-------------------|
| Free | Local models only |
| Low ($10-50/mo) | Groq + Local fallback |
| Medium ($50-200/mo) | GPT-4o-mini + Claude Haiku |
| High ($200+/mo) | GPT-4o + Claude Sonnet |
| Enterprise | Private deployment + premium APIs |
## Configuration Reference
### Environment Variables
```bash
# Primary LLM
LLM_PROVIDER=openai
LLM_API_KEY=sk-xxx
LLM_MODEL=gpt-4o
LLM_SERVER_URL=https://api.openai.com
# Local LLM Server
LLM_LOCAL_URL=https://localhost:8081
LLM_LOCAL_MODEL=DeepSeek-R1-Distill-Qwen-1.5B
# Embedding
EMBEDDING_PROVIDER=local
EMBEDDING_URL=https://localhost:8082
EMBEDDING_MODEL=bge-small-en-v1.5
```
### config.csv Parameters
| Parameter | Description | Example |
|-----------|-------------|---------|
| `llm-provider` | Provider name | `openai`, `anthropic`, `local` |
| `llm-api-key` | API key for cloud providers | `sk-xxx` |
| `llm-model` | Model identifier | `gpt-4o` |
| `llm-server-url` | API endpoint | `https://api.openai.com` |
| `llm-server-ctx-size` | Context window size | `128000` |
| `llm-temperature` | Response randomness (0-2) | `0.7` |
| `llm-max-tokens` | Maximum response length | `4096` |
| `llm-cache-enabled` | Enable semantic caching | `true` |
| `llm-cache-ttl` | Cache time-to-live (seconds) | `3600` |
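A minimal sketch of reading these `key,value` pairs into a map, skipping blank lines and `#` comments as the examples in this guide use (illustrative, not the actual config loader):

```rust
use std::collections::HashMap;

// Parse config.csv-style "key,value" lines into a map.
// Blank lines and lines starting with '#' are ignored.
fn parse_config(text: &str) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for line in text.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        // Split on the first comma only, so values may contain commas.
        if let Some((key, value)) = line.split_once(',') {
            map.insert(key.trim().to_string(), value.trim().to_string());
        }
    }
    map
}

fn main() {
    let cfg = parse_config("# Primary LLM\nllm-provider,openai\nllm-model,gpt-4o\n");
    assert_eq!(cfg.get("llm-provider").map(String::as_str), Some("openai"));
    assert_eq!(cfg.get("llm-model").map(String::as_str), Some("gpt-4o"));
    println!("parsed {} parameters", cfg.len());
}
```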
## Security Considerations
### Cloud Providers
- API keys should be stored in environment variables or secrets manager
- Consider data residency requirements (EU: Mistral, US: OpenAI)
- Review provider data retention policies
- Use separate keys for production/development
### Local Models
- All data stays on your infrastructure
- No internet required after model download
- Full control over model versions
- Consider GPU security for sensitive deployments
## Performance Optimization
### Caching
Enable semantic caching to reduce API calls:
```csv
llm-cache-enabled,true
llm-cache-ttl,3600
llm-cache-similarity-threshold,0.92
```
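The similarity threshold works by comparing query embeddings: a cached answer is reused when the new query's embedding is close enough to a previously cached one. A minimal sketch of that hit test, assuming cosine similarity (illustrative, not the actual cache implementation):

```rust
// Cosine similarity between two embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

// A cached response is reused when similarity meets the configured
// threshold (llm-cache-similarity-threshold, 0.92 above).
fn is_cache_hit(query: &[f32], cached: &[f32], threshold: f32) -> bool {
    cosine_similarity(query, cached) >= threshold
}

fn main() {
    let q = [1.0, 0.0, 0.0];
    // A near-duplicate query hits the cache; an unrelated one misses.
    assert!(is_cache_hit(&q, &[0.99, 0.05, 0.0], 0.92));
    assert!(!is_cache_hit(&q, &[0.0, 1.0, 0.0], 0.92));
    println!("cache check ok");
}
```

Raising the threshold trades cache hit rate for answer accuracy: at 0.92 only near-duplicate queries are served from cache.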
### Batching
For bulk operations, use batch APIs when available:
```csv
llm-batch-enabled,true
llm-batch-size,10
```
### Context Management
Optimize context window usage:
```csv
llm-context-compaction,true
llm-max-history-turns,10
llm-summarize-long-contexts,true
```
## Troubleshooting
### Common Issues
**API Key Invalid**
- Verify key is correct and not expired
- Check if key has required permissions
- Ensure billing is active
**Model Not Found**
- Check model name spelling
- Verify model is available in your region
- Some models require waitlist access
**Rate Limits**
- Implement exponential backoff
- Use caching to reduce calls
- Consider upgrading API tier
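A sketch of the exponential backoff schedule mentioned above: the delay doubles with each retry attempt, capped at a maximum (function names are illustrative):

```rust
use std::time::Duration;

// Exponential backoff: base * 2^attempt, capped at max_ms.
// The shift is clamped to avoid overflow on large attempt counts.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(exp.min(max_ms))
}

fn main() {
    assert_eq!(backoff_delay(0, 500, 30_000).as_millis(), 500);
    assert_eq!(backoff_delay(1, 500, 30_000).as_millis(), 1_000);
    assert_eq!(backoff_delay(2, 500, 30_000).as_millis(), 2_000);
    // Large attempt counts are capped at the maximum delay.
    assert_eq!(backoff_delay(10, 500, 30_000).as_millis(), 30_000);
    println!("backoff schedule ok");
}
```

Production retry loops usually also add random jitter to the delay so that many clients do not retry in lockstep.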
**Local Model Slow**
- Check GPU memory usage
- Reduce context size
- Use quantized models (Q4 instead of F16)
### Logging
Enable LLM logging for debugging:
```csv
llm-log-requests,true
llm-log-responses,false
llm-log-timing,true
```
## Next Steps
- [LLM Configuration](../chapter-08-config/llm-config.md) - Detailed configuration guide
- [Semantic Caching](../chapter-03/caching.md) - Cache configuration
- [NVIDIA GPU Setup](../chapter-09-api/nvidia-gpu-setup.md) - GPU configuration for local models


@@ -0,0 +1,364 @@
# KB Statistics Keywords
Knowledge Base Statistics keywords provide real-time information about your Qdrant vector database collections. Use these keywords to monitor document counts, storage usage, and indexing activity.
## Overview
These keywords are useful for:
- **Administration**: Monitor KB health and growth
- **Dashboards**: Display statistics in admin interfaces
- **Automation**: Trigger actions based on KB state
- **Compliance**: Track document retention and storage
## Available Keywords
| Keyword | Returns | Description |
|---------|---------|-------------|
| `KB STATISTICS` | JSON | Complete statistics for all collections |
| `KB COLLECTION STATS` | JSON | Statistics for a specific collection |
| `KB DOCUMENTS COUNT` | Integer | Total document count for bot |
| `KB DOCUMENTS ADDED SINCE` | Integer | Documents added in last N days |
| `KB LIST COLLECTIONS` | Array | List of collection names |
| `KB STORAGE SIZE` | Float | Total storage in MB |
## KB STATISTICS
Returns comprehensive statistics about all knowledge base collections for the current bot.
### Syntax
```basic
stats = KB STATISTICS
```
### Return Value
JSON string containing:
```json
{
  "total_collections": 3,
  "total_documents": 5000,
  "total_vectors": 5000,
  "total_disk_size_mb": 125.5,
  "total_ram_size_mb": 62.3,
  "documents_added_last_week": 150,
  "documents_added_last_month": 620,
  "collections": [
    {
      "name": "kb_bot-id_main",
      "vectors_count": 3000,
      "points_count": 3000,
      "segments_count": 2,
      "disk_data_size": 78643200,
      "ram_data_size": 39321600,
      "indexed_vectors_count": 3000,
      "status": "green"
    }
  ]
}
```
### Example
```basic
REM Get and display KB statistics
stats = KB STATISTICS
statsObj = JSON PARSE stats
TALK "Your knowledge base has " + statsObj.total_documents + " documents"
TALK "Using " + FORMAT(statsObj.total_disk_size_mb, "#,##0.00") + " MB of storage"
IF statsObj.documents_added_last_week > 100 THEN
TALK "High activity! " + statsObj.documents_added_last_week + " documents added this week"
END IF
```
## KB COLLECTION STATS
Returns detailed statistics for a specific Qdrant collection.
### Syntax
```basic
stats = KB COLLECTION STATS collection_name
```
### Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `collection_name` | String | Name of the collection |
### Return Value
JSON string with collection details:
```json
{
  "name": "kb_bot-id_products",
  "vectors_count": 1500,
  "points_count": 1500,
  "segments_count": 1,
  "disk_data_size": 52428800,
  "ram_data_size": 26214400,
  "indexed_vectors_count": 1500,
  "status": "green"
}
```
### Example
```basic
REM Check specific collection health
collections = KB LIST COLLECTIONS
FOR EACH collection IN collections
stats = KB COLLECTION STATS collection
collObj = JSON PARSE stats
IF collObj.status <> "green" THEN
TALK "Warning: Collection " + collection + " status is " + collObj.status
END IF
NEXT
```
## KB DOCUMENTS COUNT
Returns the total number of documents indexed for the current bot.
### Syntax
```basic
count = KB DOCUMENTS COUNT
```
### Return Value
Integer representing total document count.
### Example
```basic
docCount = KB DOCUMENTS COUNT
IF docCount = 0 THEN
TALK "Your knowledge base is empty. Upload some documents to get started!"
ELSE
TALK "You have " + FORMAT(docCount, "#,##0") + " documents in your knowledge base"
END IF
```
## KB DOCUMENTS ADDED SINCE
Returns the number of documents added within the specified number of days.
### Syntax
```basic
count = KB DOCUMENTS ADDED SINCE days
```
### Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `days` | Integer | Number of days to look back |
### Return Value
Integer representing documents added in the time period.
### Example
```basic
REM Activity report
lastDay = KB DOCUMENTS ADDED SINCE 1
lastWeek = KB DOCUMENTS ADDED SINCE 7
lastMonth = KB DOCUMENTS ADDED SINCE 30
TALK "Document Activity Report"
TALK "Last 24 hours: " + lastDay + " documents"
TALK "Last 7 days: " + lastWeek + " documents"
TALK "Last 30 days: " + lastMonth + " documents"
REM Calculate daily average
IF lastWeek > 0 THEN
avgDaily = lastWeek / 7
TALK "Daily average: " + FORMAT(avgDaily, "#,##0.0")
END IF
```
## KB LIST COLLECTIONS
Returns an array of all collection names belonging to the current bot.
### Syntax
```basic
collections = KB LIST COLLECTIONS
```
### Return Value
Array of collection name strings.
### Example
```basic
collections = KB LIST COLLECTIONS
IF LEN(collections) = 0 THEN
TALK "No collections found"
ELSE
TALK "Your collections:"
FOR EACH name IN collections
TALK " - " + name
NEXT
END IF
```
## KB STORAGE SIZE
Returns the total disk storage used by all collections in megabytes.
### Syntax
```basic
sizeMB = KB STORAGE SIZE
```
### Return Value
Float representing storage size in MB.
### Example
```basic
storageMB = KB STORAGE SIZE
TALK "Storage used: " + FORMAT(storageMB, "#,##0.00") + " MB"
REM Alert if storage is high
IF storageMB > 1000 THEN
TALK "Warning: Knowledge base exceeds 1 GB. Consider archiving old documents."
END IF
```
## Complete Example: KB Dashboard
```basic
REM Knowledge Base Dashboard
REM Displays comprehensive statistics
DESCRIPTION "View knowledge base statistics and health"
TALK "📊 **Knowledge Base Dashboard**"
TALK ""
REM Get overall statistics
stats = KB STATISTICS
statsObj = JSON PARSE stats
REM Summary section
TALK "**Summary**"
TALK "Collections: " + statsObj.total_collections
TALK "Documents: " + FORMAT(statsObj.total_documents, "#,##0")
TALK "Vectors: " + FORMAT(statsObj.total_vectors, "#,##0")
TALK ""
REM Storage section
TALK "**Storage**"
TALK "Disk: " + FORMAT(statsObj.total_disk_size_mb, "#,##0.00") + " MB"
TALK "RAM: " + FORMAT(statsObj.total_ram_size_mb, "#,##0.00") + " MB"
TALK ""
REM Activity section
TALK "**Recent Activity**"
TALK "Last 7 days: " + FORMAT(statsObj.documents_added_last_week, "#,##0") + " documents"
TALK "Last 30 days: " + FORMAT(statsObj.documents_added_last_month, "#,##0") + " documents"
REM Calculate growth rate
IF statsObj.documents_added_last_month > 0 THEN
growthRate = (statsObj.documents_added_last_week / (statsObj.documents_added_last_month / 4)) * 100 - 100
IF growthRate > 0 THEN
TALK "Growth trend: +" + FORMAT(growthRate, "#,##0") + "% vs average"
ELSE
TALK "Growth trend: " + FORMAT(growthRate, "#,##0") + "% vs average"
END IF
END IF
REM Health check
TALK ""
TALK "**Health Status**"
allHealthy = true
FOR EACH coll IN statsObj.collections
IF coll.status <> "green" THEN
TALK "⚠️ " + coll.name + ": " + coll.status
allHealthy = false
END IF
NEXT
IF allHealthy THEN
TALK "✅ All collections healthy"
END IF
REM Store for dashboard
SET BOT MEMORY "kb_last_check", NOW()
SET BOT MEMORY "kb_total_docs", statsObj.total_documents
SET BOT MEMORY "kb_storage_mb", statsObj.total_disk_size_mb
```
## Use Cases
### 1. Admin Monitoring Bot
```basic
REM Daily KB health check
SET SCHEDULE "kb-health" TO "0 8 * * *"
stats = KB STATISTICS
statsObj = JSON PARSE stats
IF statsObj.total_disk_size_mb > 5000 THEN
SEND MAIL "admin@example.com", "KB Storage Alert",
"Knowledge base storage exceeds 5 GB: " + statsObj.total_disk_size_mb + " MB"
END IF
END SCHEDULE
```
### 2. User-Facing Statistics
```basic
REM Show user their document count
docCount = KB DOCUMENTS COUNT
TALK "Your bot has learned from " + docCount + " documents"
TALK "Ask me anything about your content!"
```
### 3. Compliance Reporting
```basic
REM Monthly compliance report
lastMonth = KB DOCUMENTS ADDED SINCE 30
storageSize = KB STORAGE SIZE
report = "Monthly KB Report\n"
report = report + "Documents added: " + lastMonth + "\n"
report = report + "Total storage: " + FORMAT(storageSize, "#,##0.00") + " MB\n"
SEND MAIL "compliance@example.com", "Monthly KB Report", report
```
## Notes
- Statistics are fetched in real-time from Qdrant
- Large collections may have slight delays in statistics updates
- Document counts from the database may differ slightly from vector counts if indexing is in progress
- Collection names follow the pattern `kb_{bot_id}_{collection_name}`
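The naming pattern from the last note can be sketched as a pair of helpers (illustrative only, not the actual implementation; the parser assumes bot IDs contain no underscore):

```rust
// Build a qualified collection name: kb_{bot_id}_{collection_name}
fn collection_name(bot_id: &str, collection: &str) -> String {
    format!("kb_{}_{}", bot_id, collection)
}

/// Split a qualified name back into (bot_id, collection).
/// Assumption: bot IDs contain no underscore, so the first '_'
/// after the "kb_" prefix is the separator.
fn parse_collection_name(name: &str) -> Option<(&str, &str)> {
    let rest = name.strip_prefix("kb_")?;
    rest.split_once('_')
}

fn main() {
    let name = collection_name("bot-id", "main");
    assert_eq!(name, "kb_bot-id_main");
    assert_eq!(parse_collection_name(&name), Some(("bot-id", "main")));
    // Names without the prefix are rejected.
    assert_eq!(parse_collection_name("other_main"), None);
    println!("naming helpers ok");
}
```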
## See Also
- [USE KB](./keyword-use-kb.md) - Load knowledge base for queries
- [CLEAR KB](./keyword-clear-kb.md) - Clear knowledge base
- [Vector Collections](../chapter-03/vector-collections.md) - Understanding collections


@@ -0,0 +1,834 @@
# Infrastructure Design
This chapter covers the complete infrastructure design for General Bots, including scaling, security, secrets management, observability, and high availability.
## Architecture Overview
General Bots uses a modular architecture where each component runs in isolated LXC containers. This provides:
- **Isolation**: Each service has its own filesystem and process space
- **Scalability**: Add more containers to handle increased load
- **Security**: Compromised components cannot affect others
- **Portability**: Move containers between hosts easily
## Component Diagram
```
┌──────────────────────────────────────────────────────────────────────────────┐
│ Load Balancer (Caddy) │
│ Rate Limiting │ TLS Termination │
└─────────────────────────────────┬────────────────────────────────────────────┘
┌────────────────────────┼────────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ BotServer 1 │ │ BotServer 2 │ │ BotServer N │
│ (LXC/Auto) │ │ (LXC/Auto) │ │ (LXC/Auto) │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘
│ │ │
└──────────────────────┼──────────────────────┘
┌───────────────────────────┼───────────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────────┐ ┌──────────┐
│ Secrets │ │ Data Layer │ │ Services │
│ (Vault) │ │ │ │ │
└─────────┘ │ PostgreSQL │ │ Zitadel │
│ Redis │ │ LiveKit │
│ Qdrant │ │ Stalwart │
│ InfluxDB │ │ MinIO │
│ MinIO │ │ Forgejo │
└─────────────┘ └──────────┘
```
## Encryption at Rest
All data stored by General Bots is encrypted at rest using AES-256-GCM.
### Database Encryption
PostgreSQL uses Transparent Data Encryption (TDE):
```csv
# config.csv
encryption-at-rest,true
encryption-algorithm,aes-256-gcm
encryption-key-source,vault
```
Enable in PostgreSQL:
```sql
-- Enable pgcrypto extension
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- Encrypted columns use pgp_sym_encrypt
ALTER TABLE bot_memories
ADD COLUMN value_encrypted bytea;
UPDATE bot_memories
SET value_encrypted = pgp_sym_encrypt(value, current_setting('app.encryption_key'));
```
### File Storage Encryption
MinIO server-side encryption:
```bash
# Enable SSE-S3 encryption
mc encrypt set sse-s3 local/gbo-bucket
# Or use customer-managed keys (SSE-C)
mc encrypt set sse-c local/gbo-bucket
```
Configuration:
```csv
# config.csv
drive-encryption,true
drive-encryption-type,sse-s3
drive-encryption-key,vault:gbo/encryption/drive_key
```
### Redis Encryption
Redis with TLS for transport encryption; RDB persistence files are covered by filesystem-level encryption:
```conf
# redis.conf
tls-port 6379
port 0
tls-cert-file /opt/gbo/conf/certificates/redis/server.crt
tls-key-file /opt/gbo/conf/certificates/redis/server.key
tls-ca-cert-file /opt/gbo/conf/certificates/ca.crt
# Incremental fsync while writing RDB files (encryption of RDB data
# is handled at the filesystem level, e.g. LUKS below)
rdb-save-incremental-fsync yes
```
### Vector Database Encryption
Qdrant with encrypted storage:
```yaml
# qdrant/config.yaml
storage:
  storage_path: /opt/gbo/data/qdrant
  on_disk_payload: true
service:
  enable_tls: true
# Disk encryption handled at filesystem level
```
### Filesystem-Level Encryption
For comprehensive encryption, use LUKS on the data partition:
```bash
# Create encrypted partition for /opt/gbo/data
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 gbo-data
mkfs.ext4 /dev/mapper/gbo-data
mount /dev/mapper/gbo-data /opt/gbo/data
```
## Media Processing: LiveKit vs GStreamer
### Do You Need GStreamer with LiveKit?
**Short answer: No.** LiveKit handles most media processing needs.
| Feature | LiveKit | GStreamer | Need Both? |
|---------|---------|-----------|------------|
| WebRTC | Native | Plugin | No |
| Recording | Built-in | External | No |
| Transcoding | Egress service | Full control | Rarely |
| Streaming | Native | Full control | Rarely |
| Custom filters | Limited | Extensive | Sometimes |
| AI integration | Built-in | Manual | No |
**Use GStreamer only if you need:**
- Custom video/audio filters
- Unusual codec support
- Complex media pipelines
- Broadcast streaming (RTMP/HLS)
LiveKit's Egress service handles:
- Room recording
- Participant recording
- Livestreaming to YouTube/Twitch
- Track composition
### LiveKit Configuration
```csv
# config.csv
meet-provider,livekit
meet-server-url,wss://localhost:7880
meet-api-key,vault:gbo/meet/api_key
meet-api-secret,vault:gbo/meet/api_secret
meet-recording-enabled,true
meet-transcription-enabled,true
```
## Message Queues: Kafka vs RabbitMQ
### Do You Need Kafka or RabbitMQ?
**For most deployments: No.** Redis PubSub handles messaging needs.
| Scale | Recommendation |
|-------|----------------|
| < 1,000 concurrent users | Redis PubSub |
| 1,000 - 10,000 users | Redis Streams |
| 10,000 - 100,000 users | RabbitMQ |
| > 100,000 users | Kafka |
### When to Add Message Queues
**Add RabbitMQ when you need:**
- Message persistence/durability
- Complex routing patterns
- Multiple consumer groups
- Dead letter queues
**Add Kafka when you need:**
- Event sourcing
- Stream processing
- Multi-datacenter replication
- High throughput (millions/sec)
### Current Redis-Based Messaging
General Bots uses Redis for:
```rust
// Illustrative calls using the `redis` crate (connection setup omitted;
// `con` is an open redis::Connection)

// Session state
redis::cmd("SET").arg("session:123").arg(&state_json).query::<()>(&mut con)?;
// PubSub for real-time delivery
redis::cmd("PUBLISH").arg("channel:bot-1").arg(&message).query::<i64>(&mut con)?;
// Streams for persistence (optional); XADD returns the new entry ID
redis::cmd("XADD").arg("stream:events").arg("*").arg("event").arg(&data).query::<String>(&mut con)?;
```
Configuration:
```csv
# config.csv
messaging-provider,redis
messaging-persistence,streams
messaging-retention-hours,24
```
## Sharding Strategies
### Option 1: Tenant-Based Sharding (Recommended)
Each tenant/organization gets isolated databases:
```
┌─────────────────────────────────────────────────────────────────┐
│ Router │
│ (tenant_id → database mapping) │
└─────────────────────────────┬───────────────────────────────────┘
┌─────────────────────┼─────────────────────┐
│ │ │
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ tenant_001 │ │ tenant_002 │ │ tenant_003 │
│ PostgreSQL │ │ PostgreSQL │ │ PostgreSQL │
│ Redis │ │ Redis │ │ Redis │
│ Qdrant │ │ Qdrant │ │ Qdrant │
└───────────────┘ └───────────────┘ └───────────────┘
```
Configuration:
```csv
# config.csv
shard-strategy,tenant
shard-auto-provision,true
shard-isolation-level,database
```
**Advantages:**
- Complete data isolation (compliance friendly)
- Easy backup/restore per tenant
- Simple to understand
- No cross-tenant queries
**Disadvantages:**
- More resources per tenant
- Complex tenant migration
- Connection pool overhead
### Option 2: Hash-Based Sharding
Distribute by user/session ID hash:
```
user_id = 12345
shard = hash(12345) % num_shards = 2
→ Route to shard-2
```
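A sketch of the plain modulo variant shown above (the `consistent-hash` algorithm named in the config below is more involved; FNV-1a is used here only because it is a simple, stable hash):

```rust
// FNV-1a: a small, deterministic 64-bit hash, stable across runs
// (unlike std's DefaultHasher, which is randomly seeded).
fn fnv1a(key: &str) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for b in key.as_bytes() {
        h ^= *b as u64;
        h = h.wrapping_mul(0x0000_0100_0000_01b3);
    }
    h
}

// Route a shard key (e.g. a user ID) to one of num_shards shards.
fn shard_for(key: &str, num_shards: u64) -> u64 {
    fnv1a(key) % num_shards
}

fn main() {
    // Same key always routes to the same shard.
    assert_eq!(shard_for("12345", 4), shard_for("12345", 4));
    // Every result is a valid shard index.
    for key in ["user-1", "user-2", "user-3"] {
        assert!(shard_for(key, 4) < 4);
    }
    println!("shard routing ok");
}
```

The drawback noted below follows directly from the modulo: changing `num_shards` remaps almost every key, which is why consistent hashing is preferred when resharding must be cheap.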
Configuration:
```csv
# config.csv
shard-strategy,hash
shard-count,4
shard-key,user_id
shard-algorithm,consistent-hash
```
**Advantages:**
- Even distribution
- Predictable routing
- Good for high-volume single-tenant
**Disadvantages:**
- Resharding is complex
- Cross-shard queries difficult
- No tenant isolation
### Option 3: Time-Based Sharding
For time-series data (logs, analytics):
```csv
# config.csv
shard-strategy,time
shard-interval,monthly
shard-retention-months,12
shard-auto-archive,true
```
Automatically creates partitions:
```
messages_2024_01
messages_2024_02
messages_2024_03
...
```
### Option 4: Geographic Sharding
Route by user location:
```csv
# config.csv
shard-strategy,geo
shard-regions,us-east,eu-west,ap-south
shard-default,us-east
shard-detection,ip
```
```
┌─────────────────────────────────────────────────────────────┐
│ Global Router │
│ (GeoIP → Region mapping) │
└─────────────────────────────┬───────────────────────────────┘
┌─────────────────────┼─────────────────────┐
│ │ │
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ US-East │ │ EU-West │ │ AP-South │
│ Cluster │ │ Cluster │ │ Cluster │
└───────────────┘ └───────────────┘ └───────────────┘
```
## Auto-Scaling with LXC
### Configuration
```csv
# config.csv - Auto-scaling settings
scale-enabled,true
scale-min-instances,1
scale-max-instances,10
scale-cpu-threshold,70
scale-memory-threshold,80
scale-request-threshold,1000
scale-cooldown-seconds,300
scale-check-interval,30
```
### Scaling Rules
| Metric | Scale Up | Scale Down |
|--------|----------|------------|
| CPU | > 70% for 2 min | < 30% for 5 min |
| Memory | > 80% for 2 min | < 40% for 5 min |
| Requests/sec | > 1000 | < 200 |
| Response time | > 2000ms | < 500ms |
| Queue depth | > 100 | < 10 |
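The CPU and memory thresholds above can be sketched as a decision function (illustrative names; the sustained-duration windows and cooldown from the table and config are omitted for brevity):

```rust
#[derive(Debug, PartialEq)]
enum ScaleAction { Up, Down, Hold }

// Decide scaling from current utilization, bounded by the configured
// min/max instance counts (scale-min-instances / scale-max-instances).
fn decide(cpu: f64, mem: f64, instances: u32, min: u32, max: u32) -> ScaleAction {
    if (cpu > 70.0 || mem > 80.0) && instances < max {
        ScaleAction::Up
    } else if cpu < 30.0 && mem < 40.0 && instances > min {
        ScaleAction::Down
    } else {
        ScaleAction::Hold
    }
}

fn main() {
    assert_eq!(decide(85.0, 50.0, 2, 1, 10), ScaleAction::Up);
    assert_eq!(decide(20.0, 30.0, 3, 1, 10), ScaleAction::Down);
    assert_eq!(decide(50.0, 60.0, 2, 1, 10), ScaleAction::Hold);
    // Already at the maximum instance count: hold even under load.
    assert_eq!(decide(90.0, 90.0, 10, 1, 10), ScaleAction::Hold);
    println!("scale decision ok");
}
```

A real auto-scaler would also require the threshold to hold for the sustained window (2 or 5 minutes) and respect `scale-cooldown-seconds` between actions.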
### Auto-Scale Service
The auto-scaler runs as a systemd service:
```ini
# /etc/systemd/system/gbo-autoscale.service
[Unit]
Description=General Bots Auto-Scaler
After=network.target
[Service]
Type=simple
ExecStart=/opt/gbo/scripts/autoscale.sh
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
```
### Container Lifecycle
```
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Create │ ──▶ │Configure │ ──▶ │ Start │ ──▶ │ Ready │
│Container │ │Resources │ │BotServer │ │(In Pool) │
└──────────┘ └──────────┘ └──────────┘ └──────────┘
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Delete │ ◀── │ Stop │ ◀── │ Drain │ ◀── │ Active │
│Container │ │BotServer │ │ Conns │ │(Serving) │
└──────────┘ └──────────┘ └──────────┘ └──────────┘
```
## Load Balancing
### Caddy Configuration
```caddyfile
{
    admin off
    # auto_https is enabled by default; use "auto_https off" to disable it
}

bot.example.com {
    # Rate limiting (requires the caddy-ratelimit plugin)
    rate_limit {
        zone api {
            key {remote_host}
            events 100
            window 1m
        }
    }

    # WebSocket (sticky sessions)
    handle /ws* {
        reverse_proxy botserver-1:8080 botserver-2:8080 {
            lb_policy cookie
            health_uri /api/health
            health_interval 10s
        }
    }

    # API (round robin)
    handle /api/* {
        reverse_proxy botserver-1:8080 botserver-2:8080 {
            lb_policy round_robin
            fail_duration 30s
        }
    }
}
```
### Rate Limiting Configuration
```csv
# config.csv - Rate limiting
rate-limit-enabled,true
rate-limit-requests,100
rate-limit-window,60
rate-limit-burst,20
rate-limit-by,ip
# Per-endpoint limits
rate-limit-api-chat,30
rate-limit-api-files,50
rate-limit-api-auth,10
rate-limit-api-llm,20
```
## Failover Systems
### Health Checks
Every service exposes `/health`:
```json
{
  "status": "healthy",
  "version": "6.1.0",
  "checks": {
    "database": {"status": "ok", "latency_ms": 5},
    "cache": {"status": "ok", "latency_ms": 2},
    "vectordb": {"status": "ok", "latency_ms": 10},
    "llm": {"status": "ok", "latency_ms": 50}
  }
}
```
### Circuit Breaker
```csv
# config.csv
circuit-breaker-enabled,true
circuit-breaker-threshold,5
circuit-breaker-timeout,30
circuit-breaker-half-open-requests,3
```
States:
- **Closed**: Normal operation, counting failures
- **Open**: Failing fast, returning errors immediately
- **Half-Open**: Testing with limited requests
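The three states map to a small state machine. A minimal sketch, using the `circuit-breaker-threshold` and `circuit-breaker-timeout` values from the config above (illustrative, not the server's implementation):

```rust
// Circuit breaker sketch: Closed -> Open after `threshold` failures,
// Open -> HalfOpen after `timeout`, HalfOpen -> Closed on success.
use std::time::{Duration, Instant};

#[derive(Debug, PartialEq)]
enum State { Closed, Open, HalfOpen }

struct Breaker {
    state: State,
    failures: u32,
    threshold: u32,          // circuit-breaker-threshold
    timeout: Duration,       // circuit-breaker-timeout
    opened_at: Option<Instant>,
}

impl Breaker {
    fn new(threshold: u32, timeout: Duration) -> Self {
        Breaker { state: State::Closed, failures: 0, threshold, timeout, opened_at: None }
    }

    // Call before each request; may transition Open -> HalfOpen.
    fn allow(&mut self) -> bool {
        if self.state == State::Open {
            if self.opened_at.map_or(false, |t| t.elapsed() >= self.timeout) {
                self.state = State::HalfOpen; // probe with limited requests
            } else {
                return false; // fail fast
            }
        }
        true
    }

    fn on_success(&mut self) {
        self.state = State::Closed;
        self.failures = 0;
    }

    fn on_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.threshold || self.state == State::HalfOpen {
            self.state = State::Open;
            self.opened_at = Some(Instant::now());
        }
    }
}

fn main() {
    let mut b = Breaker::new(5, Duration::from_secs(30));
    for _ in 0..5 { b.on_failure(); }
    println!("allowed: {}", b.allow()); // allowed: false
}
```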
### Database Failover
PostgreSQL with streaming replication:
```
┌──────────────┐ ┌──────────────┐
│ Primary │ ──────▶ │ Replica │
│ PostgreSQL │ (sync) │ PostgreSQL │
└──────────────┘ └──────────────┘
│ │
└────────┬───────────────┘
┌──────┴──────┐
│ Patroni │
│ (Failover) │
└─────────────┘
```
### Graceful Degradation
```csv
# config.csv - Fallbacks
fallback-llm-enabled,true
fallback-llm-provider,local
fallback-llm-model,DeepSeek-R1-Distill-Qwen-1.5B
fallback-cache-enabled,true
fallback-cache-mode,memory
fallback-vectordb-enabled,true
fallback-vectordb-mode,keyword-search
```
## Secrets Management (Vault)
### Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ .env (minimal) │
│ VAULT_ADDR=https://localhost:8200 │
│ VAULT_TOKEN=hvs.xxxxxxxxxxxxx │
└─────────────────────────────┬───────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Vault Server │
│ │
│ gbo/drive → accesskey, secret │
│ gbo/tables → username, password │
│ gbo/cache → password │
│ gbo/directory → client_id, client_secret │
│ gbo/email → username, password │
│ gbo/llm → openai_key, anthropic_key, groq_key │
│ gbo/encryption → master_key, data_key │
│ gbo/meet → api_key, api_secret │
│ gbo/alm → admin_password, runner_token │
└─────────────────────────────────────────────────────────────────┘
```
### Zitadel vs Vault
| Purpose | Zitadel | Vault |
|---------|---------|-------|
| User authentication | Yes | No |
| Service credentials | No | Yes |
| API keys | No | Yes |
| Encryption keys | No | Yes |
| OAuth/OIDC | Yes | No |
| MFA | Yes | No |
**Use both:**
- Zitadel: User identity, SSO, MFA
- Vault: Service secrets, encryption keys
### Minimal .env with Vault
```bash
# .env - Only Vault and Directory needed
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.your-token-here
# Directory for user auth (Zitadel)
DIRECTORY_URL=https://localhost:8080
DIRECTORY_PROJECT_ID=your-project-id
# All other secrets fetched from Vault at runtime
```
## Observability
### Option 1: InfluxDB + Grafana (Current)
For time-series metrics:
```csv
# config.csv
observability-provider,influxdb
observability-url,http://localhost:8086
observability-org,pragmatismo
observability-bucket,metrics
```
### Option 2: Vector + InfluxDB (Recommended)
Vector as log/metric aggregator:
```
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ BotServer │ ──▶ │ Vector │ ──▶ │ InfluxDB │
│ Logs │ │ (Pipeline) │ │ (Metrics) │
└─────────────┘ └──────┬──────┘ └─────────────┘
┌─────────────┐
│ Grafana │
│ (Dashboard) │
└─────────────┘
```
Vector configuration:
```toml
# vector.toml
[sources.botserver_logs]
type = "file"
include = ["/opt/gbo/logs/*.log"]
[transforms.parse_logs]
type = "remap"
inputs = ["botserver_logs"]
source = '''
. = parse_json!(.message)
'''
[sinks.influxdb]
type = "influxdb_metrics"
inputs = ["parse_logs"]
endpoint = "http://localhost:8086"
org = "pragmatismo"
bucket = "metrics"
```
### Replacing log.* Calls with Vector
Instead of replacing all log calls, configure Vector to:
1. Collect logs from files
2. Parse and enrich
3. Route to appropriate sinks
```toml
# Route errors to alerts
[transforms.filter_errors]
type = "filter"
inputs = ["parse_logs"]
condition = '.level == "error"'
[sinks.alertmanager]
type = "http"
inputs = ["filter_errors"]
uri = "http://alertmanager:9093/api/v1/alerts"
```
## Full-Text Search: Tantivy vs Qdrant
### Comparison
| Feature | Tantivy | Qdrant |
|---------|---------|--------|
| Type | Full-text search | Vector search |
| Query | Keywords, boolean | Semantic similarity |
| Results | Exact matches | Similar meanings |
| Speed | Very fast | Fast |
| Use case | Known keywords | Natural language |
### Do You Need Tantivy?
**Usually no.** Qdrant handles both:
- Vector similarity search (semantic)
- Payload filtering (keyword-like)
Use Tantivy only if you need:
- BM25 ranking
- Complex boolean queries
- Phrase matching
- Faceted search
### Hybrid Search with Qdrant
Qdrant supports hybrid search:
```rust
// Combine vector similarity with a keyword filter.
// Illustrative sketch: exact struct fields vary across
// qdrant-client versions.
let search_request = SearchPoints {
    collection_name: "kb".to_string(),
    vector: query_embedding,
    limit: 10,
    filter: Some(Filter {
        must: vec![Condition::matches("content", "keyword".to_string())],
        ..Default::default()
    }),
    ..Default::default()
};
```
## Workflow Scheduling: Temporal
### When to Use Temporal
Temporal is useful for:
- Long-running workflows (hours/days)
- Retry logic with exponential backoff
- Workflow versioning
- Distributed transactions
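The retry-with-exponential-backoff pattern that motivates Temporal can be sketched in a few lines (base and cap durations are illustrative defaults, not Temporal's API):

```rust
// Exponential backoff with a cap: wait base * 2^attempt, saturating
// at `max`. Illustrative helper, not Temporal's retry policy API.
use std::time::Duration;

fn backoff(attempt: u32, base: Duration, max: Duration) -> Duration {
    let exp = base
        .checked_mul(2u32.saturating_pow(attempt))
        .unwrap_or(max); // overflow saturates at the cap
    exp.min(max)
}

fn main() {
    for attempt in 0..5 {
        let wait = backoff(attempt, Duration::from_secs(1), Duration::from_secs(30));
        println!("retry {attempt}: wait {:?}", wait);
    }
}
```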
### Current Alternative: SET SCHEDULE
For simple scheduling, General Bots uses:
```basic
REM Run every day at 9 AM
SET SCHEDULE "daily-report" TO "0 9 * * *"
TALK "Running daily report..."
result = GET "/api/reports/daily"
SEND MAIL "admin@example.com", "Daily Report", result
END SCHEDULE
```
### Adding Temporal
If you need complex workflows:
```csv
# config.csv
workflow-provider,temporal
workflow-server,localhost:7233
workflow-namespace,botserver
```
Example workflow:
```basic
REM Temporal workflow
START WORKFLOW "onboarding"
STEP "welcome"
SEND MAIL user_email, "Welcome!", "Welcome to our service"
WAIT 1, "day"
STEP "followup"
IF NOT user_completed_profile THEN
SEND MAIL user_email, "Complete Profile", "..."
WAIT 3, "days"
END IF
STEP "activation"
IF user_completed_profile THEN
CALL activate_user(user_id)
END IF
END WORKFLOW
```
## MFA with Zitadel
### Configuration
MFA is handled transparently by Zitadel:
```csv
# config.csv
auth-mfa-enabled,true
auth-mfa-methods,totp,sms,email,whatsapp
auth-mfa-required-for,admin,sensitive-operations
auth-mfa-grace-period-days,7
```
### Zitadel MFA Settings
In Zitadel console:
1. Go to Settings → Login Behavior
2. Enable "Multi-Factor Authentication"
3. Select allowed methods:
- TOTP (authenticator apps)
- SMS
- Email
- WebAuthn/FIDO2
### WhatsApp MFA Channel
```csv
# config.csv
auth-mfa-whatsapp-enabled,true
auth-mfa-whatsapp-provider,twilio
auth-mfa-whatsapp-template,mfa_code
```
Flow:
1. User logs in with password
2. Zitadel triggers MFA
3. Code sent via WhatsApp
4. User enters code
5. Session established
## Summary: What You Need
| Component | Required | Recommended | Optional |
|-----------|----------|-------------|----------|
| PostgreSQL | Yes | - | - |
| Redis | Yes | - | - |
| Qdrant | Yes | - | - |
| MinIO | Yes | - | - |
| Zitadel | Yes | - | - |
| Vault | - | Yes | - |
| InfluxDB | - | Yes | - |
| LiveKit | - | Yes | - |
| Vector | - | - | Yes |
| Kafka | - | - | Yes |
| RabbitMQ | - | - | Yes |
| Temporal | - | - | Yes |
| GStreamer | - | - | Yes |
| Tantivy | - | - | Yes |
## Next Steps
- [Scaling and Load Balancing](./scaling.md) - Detailed scaling guide
- [Container Deployment](./containers.md) - LXC setup
- [Security Features](../chapter-12-auth/security-features.md) - Security deep dive
- [LLM Providers](../appendix-external-services/llm-providers.md) - Model selection
# Observability
General Bots uses a comprehensive observability stack for monitoring, logging, and metrics collection. This chapter explains how logging works and how Vector integrates without requiring code changes.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ BotServer Application │
│ │
│ log::trace!() ──┐ │
│ log::debug!() ──┼──▶ Log Files (./botserver-stack/logs/) │
│ log::info!() ──┤ │
│ log::warn!() ──┤ │
│ log::error!() ──┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ Vector Agent │
│ (Collects from log files) │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Sources │ ──▶ │ Transforms │ ──▶ │ Sinks │ │
│ │ (Files) │ │ (Parse) │ │ (Outputs) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────┼─────────────────┐
│ │ │
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ InfluxDB │ │ Grafana │ │ Alerts │
│ (Metrics) │ │(Dashboard)│ │(Webhook) │
└───────────┘ └───────────┘ └───────────┘
```
## No Code Changes Required
**You do NOT need to replace `log::trace!()`, `log::info!()`, `log::error!()` calls.**
Vector works by:
1. **Tailing log files** - Reads from `./botserver-stack/logs/`
2. **Parsing log lines** - Extracts level, timestamp, message
3. **Routing by level** - Sends errors to alerts, metrics to InfluxDB
4. **Enriching data** - Adds hostname, service name, etc.
Log directory structure:
- `logs/system/` - BotServer application logs
- `logs/drive/` - MinIO logs
- `logs/tables/` - PostgreSQL logs
- `logs/cache/` - Redis logs
- `logs/llm/` - LLM server logs
- `logs/email/` - Stalwart logs
- `logs/directory/` - Zitadel logs
- `logs/vectordb/` - Qdrant logs
- `logs/meet/` - LiveKit logs
- `logs/alm/` - Forgejo logs
This approach:
- Requires zero code changes
- Works with existing logging
- Can be added/removed without recompilation
- Scales independently from the application
## Vector Configuration
### Installation
Vector is installed as the **observability** component:
```bash
./botserver install observability
```
### Configuration File
Configuration is at `./botserver-stack/conf/monitoring/vector.toml`:
```toml
# Vector Configuration for General Bots
# Collects logs without requiring code changes
# Component: observability (Vector)
# Config: ./botserver-stack/conf/monitoring/vector.toml
#
# SOURCES - Where logs come from
#
[sources.botserver_logs]
type = "file"
include = ["./botserver-stack/logs/system/*.log"]
read_from = "beginning"
[sources.drive_logs]
type = "file"
include = ["./botserver-stack/logs/drive/*.log"]
read_from = "beginning"
[sources.tables_logs]
type = "file"
include = ["./botserver-stack/logs/tables/*.log"]
read_from = "beginning"
[sources.cache_logs]
type = "file"
include = ["./botserver-stack/logs/cache/*.log"]
read_from = "beginning"
[sources.llm_logs]
type = "file"
include = ["./botserver-stack/logs/llm/*.log"]
read_from = "beginning"
[sources.service_logs]
type = "file"
include = [
"./botserver-stack/logs/email/*.log",
"./botserver-stack/logs/directory/*.log",
"./botserver-stack/logs/vectordb/*.log",
"./botserver-stack/logs/meet/*.log",
"./botserver-stack/logs/alm/*.log"
]
read_from = "beginning"
#
# TRANSFORMS - Parse and enrich logs
#
[transforms.parse_botserver]
type = "remap"
inputs = ["botserver_logs"]
source = '''
# Parse standard log format: [TIMESTAMP LEVEL target] message
. = parse_regex!(.message, r'^(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z?)\s+(?P<level>\w+)\s+(?P<target>\S+)\s+(?P<message>.*)$')
# Convert timestamp
.timestamp = parse_timestamp!(.timestamp, "%Y-%m-%dT%H:%M:%S%.fZ")
# Normalize level
.level = downcase!(.level)
# Add service name
.service = "botserver"
# Extract session_id if present
if contains(string!(.message), "session") {
session_match = parse_regex(.message, r'session[:\s]+(?P<session_id>[a-f0-9-]+)') ?? {}
if exists(session_match.session_id) {
.session_id = session_match.session_id
}
}
# Extract user_id if present
if contains(string!(.message), "user") {
user_match = parse_regex(.message, r'user[:\s]+(?P<user_id>[a-f0-9-]+)') ?? {}
if exists(user_match.user_id) {
.user_id = user_match.user_id
}
}
'''
[transforms.parse_service_logs]
type = "remap"
inputs = ["service_logs"]
source = '''
# Basic parsing for service logs
.timestamp = now()
.level = "info"
# Detect errors
if contains(string!(.message), "ERROR") || contains(string!(.message), "error") {
.level = "error"
}
if contains(string!(.message), "WARN") || contains(string!(.message), "warn") {
.level = "warn"
}
# Extract service name from file path
.service = replace(string!(.file), r'.*/(\w+)\.log$', "$1")
'''
#
# FILTERS - Route by log level
#
[transforms.filter_errors]
type = "filter"
inputs = ["parse_botserver", "parse_service_logs"]
condition = '.level == "error"'
[transforms.filter_warnings]
type = "filter"
inputs = ["parse_botserver", "parse_service_logs"]
condition = '.level == "warn"'
[transforms.filter_info]
type = "filter"
inputs = ["parse_botserver"]
condition = '.level == "info" || .level == "debug"'
#
# METRICS - Convert logs to metrics
#
[transforms.log_to_metrics]
type = "log_to_metric"
inputs = ["parse_botserver"]
[[transforms.log_to_metrics.metrics]]
type = "counter"
field = "level"
name = "log_events_total"
tags.level = "{{level}}"
tags.service = "{{service}}"
[[transforms.log_to_metrics.metrics]]
type = "counter"
field = "message"
name = "errors_total"
tags.service = "{{service}}"
increment_by_value = false
#
# SINKS - Where logs go
#
# All logs to file (backup)
[sinks.file_backup]
type = "file"
inputs = ["parse_botserver", "parse_service_logs"]
path = "./botserver-stack/logs/vector/all-%Y-%m-%d.log"
encoding.codec = "json"
# Metrics to InfluxDB
[sinks.influxdb]
type = "influxdb_metrics"
inputs = ["log_to_metrics"]
endpoint = "http://localhost:8086"
org = "pragmatismo"
bucket = "metrics"
token = "${INFLUXDB_TOKEN}"
# Errors to alerting (webhook)
[sinks.alert_webhook]
type = "http"
inputs = ["filter_errors"]
uri = "http://localhost:8080/api/admin/alerts"
method = "post"
encoding.codec = "json"
# Console output (for debugging)
[sinks.console]
type = "console"
inputs = ["filter_errors"]
encoding.codec = "text"
```
## Log Format
BotServer uses the standard Rust `log` crate format:
```
2024-01-15T10:30:45.123Z INFO botserver::core::bot Processing message for session: abc-123
2024-01-15T10:30:45.456Z DEBUG botserver::llm::cache Cache hit for prompt hash: xyz789
2024-01-15T10:30:45.789Z ERROR botserver::drive::upload Failed to upload file: permission denied
```
Vector parses this automatically without code changes.
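To illustrate which fields Vector's transform extracts from each line, the same layout can be split with the standard library alone (a sketch; the real pipeline uses the VRL `parse_regex` shown in vector.toml):

```rust
// Split the "TIMESTAMP LEVEL target message" layout into fields,
// mirroring what the Vector transform extracts. Stdlib-only sketch.
fn parse_line(line: &str) -> Option<(&str, String, &str, &str)> {
    let mut parts = line.splitn(4, ' ');
    let timestamp = parts.next()?;
    let level = parts.next()?.to_lowercase(); // normalized like downcase!()
    let target = parts.next()?;
    let message = parts.next()?;
    Some((timestamp, level, target, message))
}

fn main() {
    let line = "2024-01-15T10:30:45.789Z ERROR botserver::drive::upload Failed to upload file: permission denied";
    let (_, level, target, message) = parse_line(line).unwrap();
    println!("[{level}] {target}: {message}");
}
```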
## Metrics Collection
### Automatic Metrics
Vector converts log events to metrics:
| Metric | Description |
|--------|-------------|
| `log_events_total` | Total log events by level |
| `errors_total` | Error count by service |
| `warnings_total` | Warning count by service |
### Application Metrics
BotServer also exposes metrics via `/api/metrics` (Prometheus format):
```
# HELP botserver_sessions_active Current active sessions
# TYPE botserver_sessions_active gauge
botserver_sessions_active 42
# HELP botserver_messages_total Total messages processed
# TYPE botserver_messages_total counter
botserver_messages_total{channel="web"} 1234
botserver_messages_total{channel="whatsapp"} 567
# HELP botserver_llm_latency_seconds LLM response latency
# TYPE botserver_llm_latency_seconds histogram
botserver_llm_latency_seconds_bucket{le="0.5"} 100
botserver_llm_latency_seconds_bucket{le="1.0"} 150
botserver_llm_latency_seconds_bucket{le="2.0"} 180
```
Vector can scrape these directly:
```toml
[sources.prometheus_metrics]
type = "prometheus_scrape"
endpoints = ["http://localhost:8080/api/metrics"]
scrape_interval_secs = 15
```
## Alerting
### Error Alerts
Vector sends errors to a webhook for alerting:
```toml
[sinks.alert_webhook]
type = "http"
inputs = ["filter_errors"]
uri = "http://localhost:8080/api/admin/alerts"
method = "post"
encoding.codec = "json"
```
### Slack Integration
```toml
[sinks.slack_alerts]
type = "http"
inputs = ["filter_errors"]
uri = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
method = "post"
encoding.codec = "json"
[sinks.slack_alerts.request]
headers.content-type = "application/json"
[sinks.slack_alerts.encoding]
codec = "json"
```
### Email Alerts
Use with an SMTP relay or webhook-to-email service:
```toml
[sinks.email_alerts]
type = "http"
inputs = ["filter_errors"]
uri = "http://localhost:8025/api/send"
method = "post"
encoding.codec = "json"
```
## Grafana Dashboards
### Pre-built Dashboard
Import the General Bots dashboard from `templates/grafana-dashboard.json`:
1. Open Grafana at `http://localhost:3000`
2. Go to Dashboards → Import
3. Upload `grafana-dashboard.json`
4. Select InfluxDB data source
### Key Panels
| Panel | Query |
|-------|-------|
| Active Sessions | `from(bucket:"metrics") \|> filter(fn: (r) => r._measurement == "sessions_active")` |
| Messages/Minute | `from(bucket:"metrics") \|> filter(fn: (r) => r._measurement == "messages_total") \|> derivative()` |
| Error Rate | `from(bucket:"metrics") \|> filter(fn: (r) => r.level == "error") \|> count()` |
| LLM Latency P95 | `from(bucket:"metrics") \|> filter(fn: (r) => r._measurement == "llm_latency") \|> quantile(q: 0.95)` |
## Configuration Options
### config.csv Settings
```csv
# Observability settings
observability-enabled,true
observability-log-level,info
observability-metrics-endpoint,/api/metrics
observability-vector-enabled,true
```
### Log Levels
| Level | When to Use |
|-------|-------------|
| `error` | Something failed, requires attention |
| `warn` | Unexpected but handled, worth noting |
| `info` | Normal operations, key events |
| `debug` | Detailed flow, development |
| `trace` | Very detailed, performance impact |
Set in config.csv:
```csv
log-level,info
```
Or environment:
```bash
RUST_LOG=info ./botserver
```
## Troubleshooting
### Vector Not Collecting Logs
```bash
# Check Vector status
systemctl status gbo-observability
# View Vector logs
journalctl -u gbo-observability -f
# Test configuration
vector validate ./botserver-stack/conf/monitoring/vector.toml
```
### Missing Metrics in InfluxDB
```bash
# Check InfluxDB connection
curl http://localhost:8086/health
# Verify bucket exists
influx bucket list
# Check Vector sink status
vector top
```
### High Log Volume
If logs are too verbose:
1. Raise the minimum log level in `config.csv` (e.g. `info` instead of `debug`)
2. Add filters in Vector to drop debug logs
3. Set retention policies in InfluxDB
```toml
# Drop debug logs before sending to InfluxDB
[transforms.drop_debug]
type = "filter"
inputs = ["parse_botserver"]
condition = '.level != "debug" && .level != "trace"'
```
## Best Practices
### 1. Don't Log Sensitive Data
```rust
// Bad
log::info!("User password: {}", password);
// Good
log::info!("User {} authenticated successfully", user_id);
```
### 2. Use Structured Context
```rust
// Better for parsing
log::info!("session={} user={} action=message_sent", session_id, user_id);
```
### 3. Set Appropriate Levels
```rust
// Errors: things that failed
log::error!("Database connection failed: {}", err);
// Warnings: unusual but handled
log::warn!("Retrying LLM request after timeout");
// Info: normal operations
log::info!("Session {} started", session_id);
// Debug: development details
log::debug!("Cache lookup for key: {}", key);
// Trace: very detailed
log::trace!("Entering function process_message");
```
### 4. Keep Vector Config Simple
Start with basic collection, add transforms as needed.
## Summary
- **No code changes needed** - Vector collects from log files
- **Keep using log macros** - `log::info!()`, `log::error!()`, etc.
- **Vector handles routing** - Errors to alerts, all to storage
- **InfluxDB for metrics** - Time-series storage and queries
- **Grafana for dashboards** - Visualize everything
## Next Steps
- [Scaling and Load Balancing](./scaling.md) - Scale observability with your cluster
- [Infrastructure Design](./infrastructure.md) - Full architecture overview
- [Monitoring Dashboard](../chapter-04-gbui/monitoring.md) - Built-in monitoring UI
# Scaling and Load Balancing
General Bots is designed to scale from a single instance to a distributed cluster using LXC containers. This chapter covers auto-scaling, load balancing, sharding strategies, and failover systems.
## Scaling Architecture
General Bots uses a **horizontal scaling** approach with LXC containers:
```
┌─────────────────┐
│ Caddy Proxy │
│ (Load Balancer)│
└────────┬────────┘
┌───────────────────┼───────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ LXC Container │ │ LXC Container │ │ LXC Container │
│ botserver-1 │ │ botserver-2 │ │ botserver-3 │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘
│ │ │
└───────────────────┼───────────────────┘
┌───────────────────┼───────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ PostgreSQL │ │ Redis │ │ Qdrant │
│ (Primary) │ │ (Cluster) │ │ (Cluster) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
## Auto-Scaling Configuration
### config.csv Parameters
Configure auto-scaling behavior in your bot's `config.csv`:
```csv
# Auto-scaling settings
scale-enabled,true
scale-min-instances,1
scale-max-instances,10
scale-cpu-threshold,70
scale-memory-threshold,80
scale-request-threshold,1000
scale-cooldown-seconds,300
scale-check-interval,30
```
| Parameter | Description | Default |
|-----------|-------------|---------|
| `scale-enabled` | Enable auto-scaling | `false` |
| `scale-min-instances` | Minimum container count | `1` |
| `scale-max-instances` | Maximum container count | `10` |
| `scale-cpu-threshold` | CPU % to trigger scale-up | `70` |
| `scale-memory-threshold` | Memory % to trigger scale-up | `80` |
| `scale-request-threshold` | Requests/min to trigger scale-up | `1000` |
| `scale-cooldown-seconds` | Wait time between scaling events | `300` |
| `scale-check-interval` | Seconds between metric checks | `30` |
### Scaling Rules
Define custom scaling rules:
```csv
# Scale up when average response time exceeds 2 seconds
scale-rule-response-time,2000
scale-rule-response-action,up
# Scale down when CPU drops below 30%
scale-rule-cpu-low,30
scale-rule-cpu-low-action,down
# Scale up on queue depth
scale-rule-queue-depth,100
scale-rule-queue-action,up
```
## LXC Container Management
### Creating Scaled Instances
```bash
# Create additional botserver containers
for i in {2..5}; do
lxc launch images:debian/12 botserver-$i
lxc config device add botserver-$i port-$((8080+i)) proxy \
listen=tcp:0.0.0.0:$((8080+i)) connect=tcp:127.0.0.1:8080
done
```
### Container Resource Limits
Set resource limits per container:
```bash
# CPU limits (number of cores)
lxc config set botserver-1 limits.cpu 4
# Memory limits
lxc config set botserver-1 limits.memory 8GB
# Disk I/O priority (0-10)
lxc config set botserver-1 limits.disk.priority 5
# Network bandwidth (ingress/egress)
lxc config device set botserver-1 eth0 limits.ingress 100Mbit
lxc config device set botserver-1 eth0 limits.egress 100Mbit
```
### Auto-Scaling Script
Create `/opt/gbo/scripts/autoscale.sh`:
```bash
#!/bin/bash
# Configuration
MIN_INSTANCES=1
MAX_INSTANCES=10
CPU_THRESHOLD=70
SCALE_COOLDOWN=300
LAST_SCALE_FILE="/tmp/last_scale_time"
get_avg_cpu() {
    # 1-minute load average is used as a rough CPU proxy
    local total=0
    local count=0
    for container in $(lxc list -c n --format csv | grep "^botserver-"); do
        cpu=$(lxc exec "$container" -- cat /proc/loadavg | awk '{print $1}')
        total=$(echo "$total + $cpu" | bc)
        count=$((count + 1))
    done
    # Guard against division by zero when no containers are running
    if [ "$count" -eq 0 ]; then
        echo 0
        return
    fi
    echo "scale=2; $total / $count * 100" | bc
}
get_instance_count() {
lxc list -c n --format csv | grep -c "^botserver-"
}
can_scale() {
if [ ! -f "$LAST_SCALE_FILE" ]; then
return 0
fi
last_scale=$(cat "$LAST_SCALE_FILE")
now=$(date +%s)
diff=$((now - last_scale))
[ $diff -gt $SCALE_COOLDOWN ]
}
scale_up() {
current=$(get_instance_count)
if [ $current -ge $MAX_INSTANCES ]; then
echo "Already at max instances ($MAX_INSTANCES)"
return 1
fi
new_id=$((current + 1))
echo "Scaling up: creating botserver-$new_id"
lxc launch images:debian/12 botserver-$new_id
lxc config set botserver-$new_id limits.cpu 4
lxc config set botserver-$new_id limits.memory 8GB
# Copy configuration
lxc file push /opt/gbo/conf/botserver.env botserver-$new_id/opt/gbo/conf/
# Start botserver
lxc exec botserver-$new_id -- /opt/gbo/bin/botserver &
# Update load balancer
update_load_balancer
date +%s > "$LAST_SCALE_FILE"
echo "Scale up complete"
}
scale_down() {
current=$(get_instance_count)
if [ $current -le $MIN_INSTANCES ]; then
echo "Already at min instances ($MIN_INSTANCES)"
return 1
fi
# Remove highest numbered instance
target="botserver-$current"
echo "Scaling down: removing $target"
# Drain connections
lxc exec $target -- /opt/gbo/bin/botserver drain
sleep 30
# Stop and delete
lxc stop $target
lxc delete $target
# Update load balancer
update_load_balancer
date +%s > "$LAST_SCALE_FILE"
echo "Scale down complete"
}
update_load_balancer() {
    # Build a space-separated upstream list from running containers
    upstreams=""
    for container in $(lxc list -c n --format csv | grep "^botserver-"); do
        ip=$(lxc list "$container" -c 4 --format csv | cut -d' ' -f1)
        upstreams="$upstreams $ip:8080"
    done
    # Write a Caddy snippet that the main Caddyfile imports
    # (Caddy has no nginx-style "upstream" blocks)
    cat > /opt/gbo/conf/caddy/upstream.conf << EOF
(botserver_upstreams) {
    reverse_proxy$upstreams {
        lb_policy round_robin
        health_uri /api/health
        health_interval 10s
    }
}
EOF
    # Reload Caddy inside the proxy container
    lxc exec proxy-1 -- caddy reload --config /etc/caddy/Caddyfile
}
# Main loop
while true; do
avg_cpu=$(get_avg_cpu)
echo "Average CPU: $avg_cpu%"
if can_scale; then
if (( $(echo "$avg_cpu > $CPU_THRESHOLD" | bc -l) )); then
scale_up
elif (( $(echo "$avg_cpu < 30" | bc -l) )); then
scale_down
fi
fi
sleep 30
done
```
## Load Balancing
### Caddy Configuration
Primary load balancer configuration (`/opt/gbo/conf/caddy/Caddyfile`):
```caddyfile
{
admin off
# HTTPS is automatic by default; use "auto_https off" to disable
}
(common) {
encode gzip zstd
header {
-Server
X-Content-Type-Options "nosniff"
X-Frame-Options "DENY"
Referrer-Policy "strict-origin-when-cross-origin"
}
}
bot.example.com {
import common
# Health check endpoint (no load balancing)
handle /api/health {
reverse_proxy localhost:8080
}
# WebSocket connections (sticky sessions)
handle /ws* {
reverse_proxy botserver-1:8080 botserver-2:8080 botserver-3:8080 {
lb_policy cookie
lb_try_duration 5s
health_uri /api/health
health_interval 10s
health_timeout 5s
}
}
# API requests (round robin)
handle /api/* {
reverse_proxy botserver-1:8080 botserver-2:8080 botserver-3:8080 {
lb_policy round_robin
lb_try_duration 5s
health_uri /api/health
health_interval 10s
fail_duration 30s
}
}
# Static files (any instance)
handle {
reverse_proxy botserver-1:8080 botserver-2:8080 botserver-3:8080 {
lb_policy first
}
}
}
```
### Load Balancing Policies
| Policy | Description | Use Case |
|--------|-------------|----------|
| `round_robin` | Rotate through backends | General API requests |
| `first` | Use first available | Static content |
| `least_conn` | Fewest active connections | Long-running requests |
| `ip_hash` | Consistent by client IP | Session affinity |
| `cookie` | Sticky sessions via cookie | WebSocket, stateful |
| `random` | Random selection | Testing |
### Rate Limiting
Configure rate limits in `config.csv`:
```csv
# Rate limiting
rate-limit-enabled,true
rate-limit-requests,100
rate-limit-window,60
rate-limit-burst,20
rate-limit-by,ip
# Per-endpoint limits
rate-limit-api-chat,30
rate-limit-api-files,50
rate-limit-api-auth,10
```
Rate limiting in Caddy (requires the third-party caddy-ratelimit plugin):
```caddyfile
bot.example.com {
# Global rate limit
rate_limit {
zone global {
key {remote_host}
events 100
window 1m
}
}
# Stricter limit for auth endpoints
handle /api/auth/* {
rate_limit {
zone auth {
key {remote_host}
events 10
window 1m
}
}
reverse_proxy botserver:8080
}
}
```
## Sharding Strategies
### Database Sharding Options
#### Option 1: Tenant-Based Sharding
Each tenant gets their own database:
```
┌─────────────────┐
│ Router/Proxy │
└────────┬────────┘
┌────┴────┬──────────┐
│ │ │
▼ ▼ ▼
┌───────┐ ┌───────┐ ┌───────┐
│Tenant1│ │Tenant2│ │Tenant3│
│ DB │ │ DB │ │ DB │
└───────┘ └───────┘ └───────┘
```
Configuration:
```csv
# Tenant sharding
shard-strategy,tenant
shard-tenant-db-prefix,gb_tenant_
shard-auto-create,true
```
#### Option 2: Hash-Based Sharding
Distribute data by hash of primary key:
```
User ID: 12345
Hash: 12345 % 4 = 1
Shard: shard-1
```
Configuration:
```csv
# Hash sharding
shard-strategy,hash
shard-count,4
shard-key,user_id
shard-algorithm,modulo
```
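With `shard-strategy,hash` and `shard-count,4`, shard selection reduces to a modulo over the shard key. A minimal sketch of the example above:

```rust
// Modulo shard selection, matching shard-algorithm,modulo with
// shard-count,4. Illustrative only.
fn shard_for(user_id: u64, shard_count: u64) -> u64 {
    user_id % shard_count
}

fn main() {
    // User ID 12345 with 4 shards lands on shard 1, as in the example
    println!("shard-{}", shard_for(12345, 4)); // shard-1
}
```

Note that plain modulo reshuffles most keys when `shard-count` changes, which is why rebalancing procedures matter (see Best Practices).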
#### Option 3: Range-Based Sharding
Partition by ID ranges:
```csv
# Range sharding
shard-strategy,range
shard-ranges,0-999999:shard1,1000000-1999999:shard2,2000000-:shard3
```
#### Option 4: Geographic Sharding
Route by user location:
```csv
# Geographic sharding
shard-strategy,geo
shard-geo-us,postgres-us.example.com
shard-geo-eu,postgres-eu.example.com
shard-geo-asia,postgres-asia.example.com
shard-default,postgres-us.example.com
```
### Vector Database Sharding (Qdrant)
Qdrant supports automatic sharding:
```csv
# Qdrant sharding
qdrant-shard-count,4
qdrant-replication-factor,2
qdrant-write-consistency,majority
```
Collection creation with sharding:
```rust
// Create a sharded collection. Illustrative sketch: exact struct
// fields vary across qdrant-client versions.
let collection_config = CreateCollection {
    collection_name: format!("kb_{}", bot_id),
    vectors_config: Some(VectorsConfig {
        config: Some(vectors_config::Config::Params(VectorParams {
            size: 384,
            distance: Distance::Cosine.into(),
            ..Default::default()
        })),
    }),
    shard_number: Some(4),
    replication_factor: Some(2),
    write_consistency_factor: Some(1),
    ..Default::default()
};
```
### Redis Cluster
For high-availability caching:
```csv
# Redis cluster
cache-mode,cluster
cache-nodes,redis-1:6379,redis-2:6379,redis-3:6379
cache-replicas,1
```
## Failover Systems
### Health Checks
Configure health check endpoints:
```csv
# Health check configuration
health-enabled,true
health-endpoint,/api/health
health-interval,10
health-timeout,5
health-retries,3
```
Health check response:
```json
{
"status": "healthy",
"version": "6.1.0",
"uptime": 86400,
"checks": {
"database": "ok",
"cache": "ok",
"vectordb": "ok",
"llm": "ok"
},
"metrics": {
"cpu": 45.2,
"memory": 62.1,
"connections": 150
}
}
```
### Automatic Failover
#### Database Failover (PostgreSQL)
Using Patroni for PostgreSQL HA:
```yaml
# patroni.yml
scope: botserver-cluster
name: postgres-1
restapi:
listen: 0.0.0.0:8008
connect_address: postgres-1:8008
etcd:
hosts: etcd-1:2379,etcd-2:2379,etcd-3:2379
bootstrap:
dcs:
ttl: 30
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
postgresql:
use_pg_rewind: true
parameters:
max_connections: 200
shared_buffers: 2GB
postgresql:
listen: 0.0.0.0:5432
connect_address: postgres-1:5432
data_dir: /var/lib/postgresql/data
authentication:
superuser:
username: postgres
password: ${POSTGRES_PASSWORD}
replication:
username: replicator
password: ${REPLICATION_PASSWORD}
```
#### Cache Failover (Redis Sentinel)
```csv
# Redis Sentinel configuration
cache-mode,sentinel
cache-sentinel-master,mymaster
cache-sentinel-nodes,sentinel-1:26379,sentinel-2:26379,sentinel-3:26379
```
### Circuit Breaker
Prevent cascade failures:
```csv
# Circuit breaker settings
circuit-breaker-enabled,true
circuit-breaker-threshold,5
circuit-breaker-timeout,30
circuit-breaker-half-open-requests,3
```
States:
- **Closed**: Normal operation
- **Open**: Failing, reject requests immediately
- **Half-Open**: Testing if service recovered
### Graceful Degradation
Configure fallback behavior:
```csv
# Fallback configuration
fallback-llm-enabled,true
fallback-llm-provider,local
fallback-llm-model,DeepSeek-R1-Distill-Qwen-1.5B
fallback-cache-enabled,true
fallback-cache-mode,memory
fallback-vectordb-enabled,true
fallback-vectordb-mode,keyword-search
```
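The fallback chain amounts to trying providers in order until one answers, mirroring `fallback-llm-provider,local` above. A minimal sketch with a hypothetical trait (our own names, not the server's actual API):

```rust
// Graceful degradation: walk a provider chain until one succeeds.
// The trait and Flaky type are illustrative stand-ins.
trait LlmProvider {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct Flaky { fail: bool }

impl LlmProvider for Flaky {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        if self.fail {
            Err("upstream timeout".into())
        } else {
            Ok(format!("echo: {prompt}"))
        }
    }
}

fn complete_with_fallback(chain: &[&dyn LlmProvider], prompt: &str) -> Result<String, String> {
    let mut last_err = String::from("no providers configured");
    for p in chain {
        match p.complete(prompt) {
            Ok(out) => return Ok(out),
            Err(e) => last_err = e, // degrade to the next provider
        }
    }
    Err(last_err)
}

fn main() {
    let primary = Flaky { fail: true };  // e.g. hosted API down
    let local = Flaky { fail: false };   // e.g. local fallback model
    let chain: [&dyn LlmProvider; 2] = [&primary, &local];
    println!("{}", complete_with_fallback(&chain, "hi").unwrap()); // echo: hi
}
```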
## Monitoring Scaling
### Metrics Collection
Key metrics to monitor:
```csv
# Scaling metrics
metrics-scaling-enabled,true
metrics-container-count,true
metrics-scaling-events,true
metrics-load-distribution,true
```
### Alerting Rules
Configure alerts for scaling issues:
```yaml
# alerting-rules.yml
groups:
- name: scaling
rules:
- alert: HighCPUUsage
expr: avg(cpu_usage) > 80
for: 5m
labels:
severity: warning
annotations:
summary: "High CPU usage detected"
- alert: MaxInstancesReached
expr: container_count >= max_instances
for: 1m
labels:
severity: critical
annotations:
summary: "Maximum instances reached, cannot scale up"
- alert: ScalingFailed
expr: scaling_errors > 0
for: 1m
labels:
severity: critical
annotations:
summary: "Scaling operation failed"
```
## Best Practices
### Scaling
1. **Start small** - Begin with auto-scaling disabled, monitor patterns first
2. **Set appropriate thresholds** - Too low causes thrashing, too high causes poor performance
3. **Use cooldown periods** - Prevent rapid scale up/down cycles
4. **Test failover** - Regularly test your failover procedures
5. **Monitor costs** - More instances = higher infrastructure costs
### Load Balancing
1. **Use sticky sessions for WebSockets** - Required for real-time features
2. **Enable health checks** - Remove unhealthy instances automatically
3. **Configure timeouts** - Prevent hanging connections
4. **Use connection pooling** - Reduce connection overhead
### Sharding
1. **Choose the right strategy** - Tenant-based is simplest for SaaS
2. **Plan for rebalancing** - Have procedures to move data between shards
3. **Avoid cross-shard queries** - Design to minimize these
4. **Monitor shard balance** - Uneven distribution causes hotspots
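For tenant-based sharding, routing is a stable hash of the tenant ID, which is exactly what keeps each tenant's queries on a single shard. A sketch (botserver's actual routing may differ; a production system should also pin the hash function so shard assignments never change across versions):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Map a tenant to a shard. Every query for the same tenant lands on
/// the same shard, avoiding cross-shard queries.
fn shard_for_tenant(tenant_id: &str, shard_count: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    tenant_id.hash(&mut hasher);
    hasher.finish() % shard_count
}
```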
## Next Steps
- [Container Deployment](./containers.md) - LXC container basics
- [Architecture Overview](./architecture.md) - System design
- [Monitoring Dashboard](../chapter-04-gbui/monitoring.md) - Observe your cluster


@ -0,0 +1,543 @@
# Secrets Management
General Bots uses a layered approach to configuration and secrets management. The goal is to keep `.env` **minimal** - containing only Vault connection info - while all sensitive data is stored securely in Vault.
## Configuration Layers
```
┌─────────────────────────────────────────────────────────────────────────────┐
│                           Configuration Hierarchy                           │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐   ┌───────────┐        │
│  │    .env     │   │   Zitadel   │   │    Vault    │   │config.csv │        │
│  │(Vault ONLY) │   │ (Identity)  │   │  (Secrets)  │   │(BotConfig)│        │
│  └──────┬──────┘   └──────┬──────┘   └──────┬──────┘   └─────┬─────┘        │
│         │                 │                 │                │              │
│         ▼                 ▼                 ▼                ▼              │
│  • VAULT_ADDR      • User accounts   • Directory URL     • Bot params      │
│  • VAULT_TOKEN     • Organizations   • Database creds    • LLM config      │
│                    • Projects        • API keys          • Features        │
│                    • Applications    • Drive credentials • Behavior        │
│                    • MFA settings    • Encryption keys                     │
│                    • SSO/OAuth       • ALL service secrets                 │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```
## What Goes Where?
### .env (Vault Connection ONLY)
The `.env` file should contain **ONLY** Vault connection info:
```bash
# .env - ONLY Vault connection
# Everything else comes from Vault!
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.your-root-token
```
That's it. **Two variables only.**
**Why so minimal?**
- `.env` files can be accidentally committed to git
- Environment variables may appear in logs
- Reduces attack surface if server is compromised
- Single point of secret management (Vault)
- Easy rotation - change in Vault, not in files
### Zitadel (Identity & Access)
Zitadel manages **user-facing** identity:
| What | Example |
|------|---------|
| User accounts | john@example.com |
| Organizations | Acme Corp |
| Projects | Production Bot |
| Applications | Web UI, Mobile App |
| MFA settings | TOTP, SMS, WebAuthn |
| SSO providers | Google, Microsoft |
| User metadata | Department, Role |
**Not stored in Zitadel:**
- Service passwords
- API keys
- Encryption keys
### Vault (Service Secrets)
Vault manages **machine-to-machine** secrets:
| Path | Contents |
|------|----------|
| `gbo/drive` | MinIO access key and secret |
| `gbo/tables` | PostgreSQL username and password |
| `gbo/cache` | Redis password |
| `gbo/llm` | OpenAI, Anthropic, Groq API keys |
| `gbo/encryption` | Master encryption key, data keys |
| `gbo/email` | SMTP credentials |
| `gbo/meet` | LiveKit API key and secret |
| `gbo/alm` | Forgejo admin password, runner token |
### config.csv (Bot Configuration)
The bot's `config.csv` contains **non-sensitive** configuration:
```csv
# Bot behavior - NOT secrets
llm-provider,openai
llm-model,gpt-4o
llm-temperature,0.7
llm-max-tokens,4096
# Feature flags
feature-voice-enabled,true
feature-file-upload,true
# Vault references for sensitive values
llm-api-key,vault:gbo/llm/openai_key
```
Note: Most service credentials (database, drive, cache) are fetched automatically from Vault at startup. You only need `vault:` references in config.csv for bot-specific secrets like LLM API keys.
## How Secrets Flow
### At Startup
```
1. BotServer starts
2. Reads .env for VAULT_ADDR and VAULT_TOKEN (only 2 variables)
3. Connects to Vault
4. Fetches ALL service credentials:
   - gbo/directory  → Zitadel URL, client_id, client_secret
   - gbo/tables     → Database host, port, username, password
   - gbo/drive      → MinIO endpoint, accesskey, secret
   - gbo/cache      → Redis host, port, password
   - gbo/llm        → API keys for all providers
   - gbo/encryption → Master encryption keys
5. Connects to all services using Vault credentials
6. Reads config.csv for bot configuration
7. For keys referencing Vault (vault:path/key):
   - Fetches from Vault automatically
8. System ready
```
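Each fetch in step 4 is a single KV read. Assuming the paths above live in a KV v2 engine mounted at `gbo`, the HTTP read URL follows Vault's `/v1/<mount>/data/<path>` convention (a sketch of URL construction only; botserver's client code may differ):

```rust
/// Build the HTTP URL for reading a KV v2 secret such as "llm" from
/// the "gbo" mount. KV v2 inserts "data" between mount and path.
fn kv2_read_url(vault_addr: &str, mount: &str, path: &str) -> String {
    format!(
        "{}/v1/{}/data/{}",
        vault_addr.trim_end_matches('/'),
        mount,
        path
    )
}
```

The KV v2 response nests the key/value pairs under `data.data`, so a key like `openai_key` is read from that inner object.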
### At Runtime
```
1. User sends message
2. Bot processes, needs LLM
3. Reads config.csv: llm-api-key = vault:gbo/llm/openai_key
4. Fetches from Vault (cached for performance)
5. Calls OpenAI API
6. Returns response
```
## Setting Up Vault
### Initial Setup
When you run `./botserver install secrets`, it:
1. Downloads and installs Vault
2. Initializes with a single unseal key
3. Creates initial secret paths
4. Outputs root token to `conf/vault/init.json`
```bash
# Check Vault status
./botserver status secrets
# View init credentials (protect this file!)
cat botserver-stack/conf/vault/init.json
```
### Storing Secrets
Use the Vault CLI or API:
```bash
# Directory (Zitadel) - includes URL, no longer in .env
vault kv put gbo/directory \
url=https://localhost:8080 \
project_id=your-project-id \
client_id=your-client-id \
client_secret=your-client-secret
# Database - includes host/port, no longer in .env
vault kv put gbo/tables \
host=localhost \
port=5432 \
database=botserver \
username=gbuser \
password=secure-password
# Drive (MinIO)
vault kv put gbo/drive \
endpoint=https://localhost:9000 \
accesskey=minioadmin \
secret=minioadmin123
# Cache (Redis)
vault kv put gbo/cache \
host=localhost \
port=6379 \
password=redis-secret
# LLM API keys
vault kv put gbo/llm \
openai_key=sk-xxxxx \
anthropic_key=sk-ant-xxxxx \
groq_key=gsk_xxxxx \
deepseek_key=sk-xxxxx
# Encryption keys
vault kv put gbo/encryption \
master_key=your-32-byte-key
# Vector database (Qdrant)
vault kv put gbo/vectordb \
url=https://localhost:6334 \
api_key=optional-api-key
# Observability (InfluxDB)
vault kv put gbo/observability \
url=http://localhost:8086 \
org=pragmatismo \
bucket=metrics \
token=your-influx-token
```
### Automatic Management
**Secrets are managed automatically** - you don't need a UI for day-to-day operations:
| Action | How It Works |
|--------|--------------|
| Service startup | Fetches credentials from Vault |
| Key rotation | Update in Vault, services reload |
| New bot deployment | Inherits organization secrets |
| LLM provider change | Update config.csv, key fetched automatically |
### Emergency Access
For emergency situations (lost credentials, key rotation), admins can:
1. **Access Vault UI**: `https://localhost:8200/ui`
2. **Use Vault CLI**: `vault kv get gbo/llm`
3. **Check init.json**: Contains unseal key and root token
```bash
# Emergency: unseal Vault after restart
UNSEAL_KEY=$(cat botserver-stack/conf/vault/init.json | jq -r '.unseal_keys_b64[0]')
vault operator unseal $UNSEAL_KEY
```
## Migrating from Environment Variables
If you're currently using environment variables:
### Before (Old Way)
```bash
# .env - TOO MANY SECRETS!
DATABASE_URL=postgres://user:password@localhost/db
DIRECTORY_URL=https://localhost:8080
DIRECTORY_PROJECT_ID=12345
REDIS_PASSWORD=redis-secret
OPENAI_API_KEY=sk-xxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxx
DRIVE_ACCESSKEY=minio
DRIVE_SECRET=minio123
ENCRYPTION_KEY=super-secret-key
```
### After (With Vault)
```bash
# .env - ONLY VAULT CONNECTION
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.xxxxx
```
```bash
# EVERYTHING in Vault
vault kv put gbo/directory \
url=https://localhost:8080 \
project_id=12345 \
client_id=xxx \
client_secret=xxx
vault kv put gbo/tables \
host=localhost \
port=5432 \
database=botserver \
username=user \
password=password
vault kv put gbo/cache \
host=localhost \
port=6379 \
password=redis-secret
vault kv put gbo/llm \
openai_key=sk-xxxxx \
anthropic_key=sk-ant-xxxxx
vault kv put gbo/drive \
endpoint=https://localhost:9000 \
accesskey=minio \
secret=minio123
vault kv put gbo/encryption \
master_key=super-secret-key
```
### Migration Script
```bash
#!/bin/bash
# migrate-to-vault.sh
# Read existing .env
source .env
# Parse DATABASE_URL if present
if [ -n "$DATABASE_URL" ]; then
# postgres://user:pass@host:port/db
DB_USER=$(echo $DATABASE_URL | sed -n 's|postgres://\([^:]*\):.*|\1|p')
DB_PASS=$(echo $DATABASE_URL | sed -n 's|postgres://[^:]*:\([^@]*\)@.*|\1|p')
DB_HOST=$(echo $DATABASE_URL | sed -n 's|.*@\([^:]*\):.*|\1|p')
DB_PORT=$(echo $DATABASE_URL | sed -n 's|.*:\([0-9]*\)/.*|\1|p')
DB_NAME=$(echo $DATABASE_URL | sed -n 's|.*/\(.*\)|\1|p')
fi
# Store everything in Vault
vault kv put gbo/directory \
url="${DIRECTORY_URL:-https://localhost:8080}" \
project_id="${DIRECTORY_PROJECT_ID:-}" \
client_id="${ZITADEL_CLIENT_ID:-}" \
client_secret="${ZITADEL_CLIENT_SECRET:-}"
vault kv put gbo/tables \
host="${DB_HOST:-localhost}" \
port="${DB_PORT:-5432}" \
database="${DB_NAME:-botserver}" \
username="${DB_USER:-gbuser}" \
password="${DB_PASS:-}"
vault kv put gbo/cache \
host="${REDIS_HOST:-localhost}" \
port="${REDIS_PORT:-6379}" \
password="${REDIS_PASSWORD:-}"
vault kv put gbo/llm \
openai_key="${OPENAI_API_KEY:-}" \
anthropic_key="${ANTHROPIC_API_KEY:-}" \
groq_key="${GROQ_API_KEY:-}" \
deepseek_key="${DEEPSEEK_API_KEY:-}"
vault kv put gbo/drive \
endpoint="${DRIVE_ENDPOINT:-https://localhost:9000}" \
accesskey="${DRIVE_ACCESSKEY:-}" \
secret="${DRIVE_SECRET:-}"
vault kv put gbo/encryption \
master_key="${ENCRYPTION_KEY:-}"
# Clean up .env - ONLY Vault connection
cat > .env << EOF
# General Bots - Vault Connection Only
# All other secrets are stored in Vault
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=$VAULT_TOKEN
EOF
echo "Migration complete!"
echo ".env now contains only Vault connection."
echo "All secrets moved to Vault."
```
## Using Vault References in config.csv
Reference Vault secrets in your bot's config.csv:
```csv
# Direct value (non-sensitive)
llm-provider,openai
llm-model,gpt-4o
llm-temperature,0.7
# Vault reference (sensitive)
llm-api-key,vault:gbo/llm/openai_key
# Multiple keys from same path
drive-accesskey,vault:gbo/drive/accesskey
drive-secret,vault:gbo/drive/secret
# Per-bot secrets (for multi-tenant)
custom-api-key,vault:gbo/bots/mybot/api_key
```
### Syntax
```
vault:<path>/<key>
```
- `path`: Vault KV path (e.g., `gbo/llm`)
- `key`: Specific key within the secret (e.g., `openai_key`)
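Resolving a reference is a matter of splitting on the last `/`: everything before it is the KV path, the final segment is the key. A sketch of that split (illustrative; not the actual botserver parser):

```rust
/// Split "vault:gbo/llm/openai_key" into ("gbo/llm", "openai_key").
/// Plain values (no "vault:" prefix) return None and are used as-is.
fn parse_vault_ref(value: &str) -> Option<(&str, &str)> {
    let rest = value.strip_prefix("vault:")?;
    rest.rsplit_once('/')
}
```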
## Security Best Practices
### 1. Protect init.json
```bash
# Set restrictive permissions
chmod 600 botserver-stack/conf/vault/init.json
# Consider encrypting or moving off-server
gpg -c init.json
scp init.json.gpg secure-backup-server:
rm init.json
```
### 2. Use Token Policies
Create limited tokens for applications:
```hcl
# gbo-readonly.hcl
path "gbo/*" {
capabilities = ["read", "list"]
}
```
```bash
vault policy write gbo-readonly gbo-readonly.hcl
vault token create -policy=gbo-readonly -ttl=24h
```
### 3. Enable Audit Logging
```bash
vault audit enable file file_path=/opt/gbo/logs/vault-audit.log
```
### 4. Rotate Secrets Regularly
```bash
# Rotate LLM keys
vault kv put gbo/llm \
openai_key=sk-new-key \
anthropic_key=sk-ant-new-key
# BotServer will pick up new keys automatically (cache TTL)
```
### 5. Backup Vault Data
```bash
# Snapshot Vault data
vault operator raft snapshot save backup.snap
# Or backup the data directory
tar -czf vault-backup.tar.gz botserver-stack/data/vault/
```
## No UI Needed
**You don't need to expose a UI for secrets management** because:
1. **Automatic at runtime**: Secrets are fetched automatically
2. **config.csv for changes**: Update bot config, not secrets
3. **Vault UI for emergencies**: Available at `https://localhost:8200/ui`
4. **CLI for automation**: Scripts can manage secrets
### When Admins Need Access
| Situation | Solution |
|-----------|----------|
| Add new LLM provider | `vault kv put gbo/llm new_key=xxx` |
| Rotate compromised key | Update in Vault, services auto-reload |
| Check what's stored | `vault kv get gbo/llm` or Vault UI |
| Debug connection issues | Check Vault logs and service logs |
| Disaster recovery | Use init.json to unseal and recover |
## Relationship Summary
```
┌─────────────────────────────────────────────────────────────────┐
│                              .env                               │
│                VAULT_ADDR + VAULT_TOKEN (only!)                 │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                              Vault                              │
│      "Give me all service credentials and connection info"      │
│                                                                 │
│   gbo/directory  → Zitadel URL, credentials                     │
│   gbo/tables     → Database connection + credentials            │
│   gbo/drive      → MinIO endpoint + credentials                 │
│   gbo/cache      → Redis connection + password                  │
│   gbo/llm        → All LLM API keys                             │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                            BotServer                            │
│          Connects to all services using Vault secrets           │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                          User Request                           │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                             Zitadel                             │
│              "Who is this user? Are they allowed?"              │
│               (Credentials from Vault at startup)               │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                           config.csv                            │
│              "What LLM should I use? What model?"               │
│                (Non-sensitive bot configuration)                │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                          LLM Provider                           │
│                 (API key from Vault at startup)                 │
└─────────────────────────────────────────────────────────────────┘
```
## Vault Paths Reference
| Path | Contents |
|------|----------|
| `gbo/directory` | url, project_id, client_id, client_secret |
| `gbo/tables` | host, port, database, username, password |
| `gbo/drive` | endpoint, accesskey, secret |
| `gbo/cache` | host, port, password |
| `gbo/llm` | openai_key, anthropic_key, groq_key, deepseek_key, mistral_key |
| `gbo/encryption` | master_key, data_key |
| `gbo/email` | host, username, password |
| `gbo/meet` | url, api_key, api_secret |
| `gbo/alm` | url, admin_password, runner_token |
| `gbo/vectordb` | url, api_key |
| `gbo/observability` | url, org, bucket, token |
## Next Steps
- [config.csv Format](./config-csv.md) - Bot configuration reference
- [LLM Configuration](./llm-config.md) - LLM-specific settings
- [Infrastructure Design](../chapter-07-gbapp/infrastructure.md) - Full architecture


@ -1,6 +1,85 @@
# Introduction to General Bots
**Build conversational AI bots in minutes, not months.** General Bots lets you create intelligent chatbots by writing simple [BASIC scripts](./chapter-06-gbdialog/basics.md) and dropping in your [documents](./chapter-02/gbkb.md). No complex frameworks, no cloud dependencies, no AI expertise required.
## The No Forms Movement
<img src="./assets/general-bots-2017.svg" alt="General Bots in 2017" style="max-height: 300px; width: 100%; object-fit: contain;">
Since 2017, Pragmatismo has championed the **No Forms** philosophy. The idea is simple but revolutionary:
> **People should converse, not fill forms.**
Traditional software forces users into rigid forms with dropdowns, checkboxes, and validation errors. But humans don't communicate that way. We talk. We explain. We ask questions.
General Bots was born from this vision: **replace forms with conversations**.
### Before: The Form Experience
```
┌─────────────────────────────────────────┐
│          Customer Support Form          │
├─────────────────────────────────────────┤
│  Name:        [_______________]         │
│  Email:       [_______________]         │
│  Department:  [Select ▼]                │
│  Priority:    ○ Low  ○ Medium  ○ High   │
│  Subject:     [_______________]         │
│  Description:                           │
│  [                                   ]  │
│  [                                   ]  │
│                                         │
│  Attachments: [Choose File]             │
│                                         │
│  [Submit]                               │
│                                         │
│  ⚠️ Error: Email format invalid          │
│  ⚠️ Error: Description required          │
└─────────────────────────────────────────┘
```
**Problems:**
- Intimidating for users
- Requires learning the interface
- Validation errors frustrate
- No guidance or context
- One-size-fits-all approach
### After: The Conversation Experience
```
User: I need help with my order
Bot: I'd be happy to help! What's your order number?
User: It's 12345
Bot: Found it - your laptop order from last week. What's the issue?
User: It arrived damaged
Bot: I'm sorry to hear that. I'll create a return label for you.
Should I send it to your email on file?
User: Yes please
Bot: Done! Check your inbox. Is there anything else?
```
**Benefits:**
- Natural and intuitive
- Guides users step by step
- Adapts to each situation
- No errors, just clarifications
- Feels like talking to a human
### Projections, Not Screens
The No Forms philosophy extends beyond chat. In General Bots:
- **Visualizations replace dashboards** - Data is projected contextually, not displayed in static grids
- **Conversations replace menus** - Ask for what you need, don't hunt through options
- **AI handles complexity** - The system adapts, users don't configure
- **Voice-first design** - Everything works without a screen
This is why General Bots focuses on:
1. **Conversational interfaces** - Chat, voice, messaging
2. **Contextual projections** - Show relevant info when needed
3. **Minimal UI** - The less interface, the better
4. **AI interpretation** - Understand intent, not just input
## Quick Example
@ -29,22 +108,18 @@ User: Sarah Chen
Bot: Welcome to Computer Science, Sarah!
```
### The Flow
<img src="./assets/conversation-flow.svg" alt="Conversation Flow" style="max-height: 400px; width: 100%; object-fit: contain;">
The AI handles everything else - understanding intent, collecting information, executing tools, answering from documents. Zero configuration.
No form. No UI. Just conversation.
## What Makes General Bots Different
### Just Run It
```bash
./botserver
```
That's it. No Kubernetes, no cloud accounts. The [bootstrap process](./chapter-01/installation.md) installs everything locally in 2-5 minutes. PostgreSQL, vector database, object storage, cache - all configured automatically with secure credentials stored in Vault.
### Real BASIC, Real Simple
We brought BASIC back for conversational AI. See our [complete keyword reference](./chapter-06-gbdialog/keywords.md):
```basic
' save-note.bas - A simple tool
PARAM topic, content
@ -64,9 +139,9 @@ Create `.bas` files that the AI discovers and calls automatically. Need to save
General Bots is a single binary that includes everything:
<img src="./assets/architecture-overview.svg" alt="General Bots Architecture" style="max-height: 400px; width: 100%; object-fit: contain;">
One process, one port, one command to run. Deploy anywhere - laptop, server, LXC container.
## Real-World Use Cases
@ -113,11 +188,7 @@ my-bot.gbai/ # Package root
└── config.csv # Bot settings
```
### How It Works
Copy the folder to deploy. That's it. No XML, no JSON schemas, no build process.
## Getting Started in 3 Steps
@ -138,28 +209,42 @@ The default bot is ready. Ask it anything. Modify `templates/default.gbai/` to c
## Core Philosophy
1. **No Forms** - Conversations replace forms everywhere
2. **Simplicity First** - If it needs documentation, it's too complex
3. **Everything Included** - No external dependencies to manage
4. **Production Ready** - Secure, scalable, enterprise-grade from day one
5. **AI Does the Work** - Don't write logic the LLM can handle
6. **Projections Over Screens** - Show data contextually, not in dashboards
## Technical Highlights
- **Language**: Written in Rust for performance and safety
- **Database**: PostgreSQL with Diesel ORM
- **Cache**: Valkey (Redis-compatible) for sessions
- **Storage**: S3-compatible object store (MinIO)
- **Vectors**: Qdrant for semantic search
- **Security**: Vault for secrets, Argon2 passwords, AES encryption
- **Identity**: Zitadel for authentication and MFA
- **LLM**: OpenAI API, Anthropic, Groq, or local models
- **Scripting**: Rhai-powered BASIC interpreter
## A Brief History
**2017** - Pragmatismo launches General Bots with the No Forms manifesto. The vision: conversational interfaces should replace traditional forms in enterprise software.
**2018-2020** - Node.js implementation gains traction. Hundreds of bots deployed across banking, healthcare, education, and government sectors in Brazil and beyond.
**2021-2023** - Major enterprises adopt General Bots for customer service automation. The platform handles millions of conversations.
**2024** - Complete rewrite in Rust for performance, security, and reliability. Version 6.0 introduces the new architecture with integrated services.
**Today** - General Bots powers conversational AI for organizations worldwide, staying true to the original vision: **people should converse, not fill forms**.
## What's Next?
- **[Chapter 01](./chapter-01/README.md)** - Install and run your first bot
- **[Chapter 02](./chapter-02/README.md)** - Understanding packages
- **[Chapter 06](./chapter-06-gbdialog/README.md)** - Writing BASIC dialogs
- **[Templates](./chapter-02/templates.md)** - Explore example bots
## Community
@ -167,12 +252,15 @@ The default bot is ready. Ask it anything. Modify `templates/default.gbai/` to c
General Bots is open source (AGPL-3.0) developed by Pragmatismo.com.br and contributors worldwide.
- **GitHub**: https://github.com/GeneralBots/botserver
- **Version**: 6.1.0
- **Status**: Production Ready
Ready to build your bot? Turn to [Chapter 01](./chapter-01/README.md) and let's go!
---
<div align="center">
<img src="./assets/general-bots-logo.svg" alt="General Bots" width="200">
<br>
<em>Built with ❤️ from Brazil since 2017</em>
</div>


@ -1,3 +1,16 @@
//! ADD SUGGESTION Keyword
//!
//! Provides suggestions/quick replies in conversations.
//! Suggestions can:
//! - Point to KB contexts (existing behavior)
//! - Start tools with optional parameters
//! - When clicked, tools without params will prompt for params first
//!
//! Syntax:
//! - ADD SUGGESTION "context" AS "button text" - Points to KB context
//! - ADD SUGGESTION TOOL "tool_name" AS "button text" - Starts a tool
//! - ADD SUGGESTION TOOL "tool_name" WITH param1, param2 AS "button text" - Tool with params
use crate::shared::models::UserSession;
use crate::shared::state::AppState;
use log::{error, trace};
@ -5,6 +18,19 @@ use rhai::{Dynamic, Engine};
use serde_json::json;
use std::sync::Arc;
/// Suggestion types
#[derive(Debug, Clone)]
pub enum SuggestionType {
/// Points to a KB context - when clicked, selects that context
Context(String),
/// Starts a tool - when clicked, invokes the tool
/// If tool has required params and none provided, will prompt user first
Tool {
name: String,
params: Option<Vec<String>>,
},
}
pub fn clear_suggestions_keyword(
state: Arc<AppState>,
user_session: UserSession,
@ -53,8 +79,13 @@ pub fn add_suggestion_keyword(
engine: &mut Engine,
) {
let cache = state.cache.clone();
let cache2 = state.cache.clone();
let cache3 = state.cache.clone();
let user_session2 = user_session.clone();
let user_session3 = user_session.clone();
// Register: ADD SUGGESTION "context" AS "text"
// Points to KB context - when clicked, selects that context for queries
engine
.register_custom_syntax(
&["ADD", "SUGGESTION", "$expr$", "AS", "$expr$"],
@ -63,58 +94,259 @@ pub fn add_suggestion_keyword(
let context_name = context.eval_expression_tree(&inputs[0])?.to_string();
let button_text = context.eval_expression_tree(&inputs[1])?.to_string();
add_context_suggestion(&cache, &user_session, &context_name, &button_text)?;
Ok(Dynamic::UNIT)
},
)
.unwrap();
// Register: ADD SUGGESTION TOOL "tool_name" AS "text"
// Starts a tool - if tool requires params, will prompt user first
engine
.register_custom_syntax(
&["ADD", "SUGGESTION", "TOOL", "$expr$", "AS", "$expr$"],
true,
move |context, inputs| {
let tool_name = context.eval_expression_tree(&inputs[0])?.to_string();
let button_text = context.eval_expression_tree(&inputs[1])?.to_string();
add_tool_suggestion(&cache2, &user_session2, &tool_name, None, &button_text)?;
Ok(Dynamic::UNIT)
},
)
.unwrap();
// Register: ADD SUGGESTION TOOL "tool_name" WITH params AS "text"
// Starts a tool with pre-filled parameters
engine
.register_custom_syntax(
&[
"ADD",
"SUGGESTION",
"TOOL",
"$expr$",
"WITH",
"$expr$",
"AS",
"$expr$",
],
true,
move |context, inputs| {
let tool_name = context.eval_expression_tree(&inputs[0])?.to_string();
let params_value = context.eval_expression_tree(&inputs[1])?;
let button_text = context.eval_expression_tree(&inputs[2])?.to_string();
// Parse params - can be array or comma-separated string
let params = if params_value.is_array() {
params_value
.cast::<rhai::Array>()
.iter()
.map(|v| v.to_string())
.collect()
} else {
params_value
.to_string()
.split(',')
.map(|s| s.trim().to_string())
.collect()
};
add_tool_suggestion(
&cache3,
&user_session3,
&tool_name,
Some(params),
&button_text,
)?;
Ok(Dynamic::UNIT)
},
)
.unwrap();
}
/// Add a context-based suggestion (points to KB)
fn add_context_suggestion(
cache: &Option<redis::Client>,
user_session: &UserSession,
context_name: &str,
button_text: &str,
) -> Result<(), Box<rhai::EvalAltResult>> {
if let Some(cache_client) = cache {
let redis_key = format!("suggestions:{}:{}", user_session.user_id, user_session.id);
// Suggestion JSON includes type for client to handle appropriately
let suggestion = json!({
"type": "context",
"context": context_name,
"text": button_text,
"action": {
"type": "select_context",
"context": context_name
}
});
let mut conn = match cache_client.get_connection() {
Ok(conn) => conn,
Err(e) => {
error!("Failed to connect to cache: {}", e);
return Ok(());
}
};
let result: Result<i64, redis::RedisError> = redis::cmd("RPUSH")
.arg(&redis_key)
.arg(suggestion.to_string())
.query(&mut conn);
match result {
Ok(length) => {
trace!(
"Added context suggestion '{}' to session {}, total: {}",
context_name,
user_session.id,
length
);
// Set context state
let active_key = format!(
"active_context:{}:{}",
user_session.user_id, user_session.id
);
let _: Result<i64, redis::RedisError> = redis::cmd("HSET")
.arg(&active_key)
.arg(context_name)
.arg("inactive")
.query(&mut conn);
}
Err(e) => error!("Failed to add suggestion to Redis: {}", e),
}
} else {
trace!("No cache configured, suggestion not added");
}
Ok(())
}
/// Add a tool-based suggestion
/// When clicked:
/// - If params provided, executes tool immediately with those params
/// - If no params and tool has required params, prompts user for them first
fn add_tool_suggestion(
cache: &Option<redis::Client>,
user_session: &UserSession,
tool_name: &str,
params: Option<Vec<String>>,
button_text: &str,
) -> Result<(), Box<rhai::EvalAltResult>> {
if let Some(cache_client) = cache {
let redis_key = format!("suggestions:{}:{}", user_session.user_id, user_session.id);
// Suggestion JSON for tool invocation
let suggestion = json!({
"type": "tool",
"tool": tool_name,
"text": button_text,
"action": {
"type": "invoke_tool",
"tool": tool_name,
"params": params,
// If params is None, client should check tool schema
// and prompt for required params before invoking
"prompt_for_params": params.is_none()
}
});
let mut conn = match cache_client.get_connection() {
Ok(conn) => conn,
Err(e) => {
error!("Failed to connect to cache: {}", e);
return Ok(());
}
};
let result: Result<i64, redis::RedisError> = redis::cmd("RPUSH")
.arg(&redis_key)
.arg(suggestion.to_string())
.query(&mut conn);
match result {
Ok(length) => {
trace!(
"Added tool suggestion '{}' to session {}, total: {}, has_params: {}",
tool_name,
user_session.id,
length,
params.is_some()
);
}
Err(e) => error!("Failed to add tool suggestion to Redis: {}", e),
}
} else {
trace!("No cache configured, tool suggestion not added");
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_suggestion_json_context() {
let suggestion = json!({
"type": "context",
"context": "products",
"text": "View Products",
"action": {
"type": "select_context",
"context": "products"
}
});
assert_eq!(suggestion["type"], "context");
assert_eq!(suggestion["action"]["type"], "select_context");
}
#[test]
fn test_suggestion_json_tool_no_params() {
let suggestion = json!({
"type": "tool",
"tool": "search_kb",
"text": "Search Knowledge Base",
"action": {
"type": "invoke_tool",
"tool": "search_kb",
"params": Option::<Vec<String>>::None,
"prompt_for_params": true
}
});
assert_eq!(suggestion["type"], "tool");
assert_eq!(suggestion["action"]["prompt_for_params"], true);
}
#[test]
fn test_suggestion_json_tool_with_params() {
let params = vec!["query".to_string(), "products".to_string()];
let suggestion = json!({
"type": "tool",
"tool": "search_kb",
"text": "Search Products",
"action": {
"type": "invoke_tool",
"tool": "search_kb",
"params": params,
"prompt_for_params": false
}
});
assert_eq!(suggestion["type"], "tool");
assert_eq!(suggestion["action"]["prompt_for_params"], false);
assert!(suggestion["action"]["params"].is_array());
}
}
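The tool-suggestion payload built by `add_tool_suggestion` can be reproduced outside Redis with plain std Rust. A minimal sketch — the `tool_suggestion_json` helper is hypothetical, mirroring the JSON shape pushed above; `prompt_for_params` is true exactly when no params are pre-bound:

```rust
/// Hypothetical helper mirroring the JSON shape add_tool_suggestion builds.
/// prompt_for_params is true exactly when no params are pre-bound.
fn tool_suggestion_json(tool: &str, text: &str, params: Option<&[&str]>) -> String {
    let params_json = match params {
        Some(ps) => format!(
            "[{}]",
            ps.iter().map(|p| format!("\"{}\"", p)).collect::<Vec<_>>().join(",")
        ),
        None => "null".to_string(),
    };
    format!(
        "{{\"type\":\"tool\",\"tool\":\"{t}\",\"text\":\"{x}\",\"action\":{{\"type\":\"invoke_tool\",\"tool\":\"{t}\",\"params\":{p},\"prompt_for_params\":{b}}}}}",
        t = tool,
        x = text,
        p = params_json,
        b = params.is_none()
    )
}

fn main() {
    // No params bound: the client should prompt before invoking.
    let s = tool_suggestion_json("search_kb", "Search Knowledge Base", None);
    assert!(s.contains("\"prompt_for_params\":true"));
    // Params bound: the client can invoke directly.
    let s = tool_suggestion_json("search_kb", "Search Products", Some(&["query", "products"][..]));
    assert!(s.contains("\"prompt_for_params\":false"));
    println!("{}", s);
}
```

This is a sketch of the payload contract only; the real code serializes with `serde_json::json!` and RPUSHes the string onto `suggestions:{user_id}:{session_id}`.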


@ -0,0 +1,467 @@
//! KB Statistics Keywords
//!
//! Provides keywords for querying Qdrant vector database statistics.
//! Used for monitoring and managing knowledge base collections.
use crate::shared::models::UserSession;
use crate::shared::state::AppState;
use log::{error, info, trace};
use rhai::{Dynamic, Engine, EvalAltResult};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
/// Statistics for a single collection
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CollectionStats {
pub name: String,
pub vectors_count: u64,
pub points_count: u64,
pub segments_count: u64,
pub disk_data_size: u64,
pub ram_data_size: u64,
pub indexed_vectors_count: u64,
pub status: String,
}
/// Aggregated statistics across collections
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct KBStatistics {
pub total_collections: u64,
pub total_documents: u64,
pub total_vectors: u64,
pub total_disk_size_mb: f64,
pub total_ram_size_mb: f64,
pub documents_added_last_week: u64,
pub documents_added_last_month: u64,
pub collections: Vec<CollectionStats>,
}
/// Register KB STATISTICS keyword
pub fn kb_statistics_keyword(state: Arc<AppState>, user: UserSession, engine: &mut Engine) {
let state_clone = Arc::clone(&state);
let user_clone = user.clone();
// KB STATISTICS - Get overall statistics
engine.register_fn("KB STATISTICS", move || -> Dynamic {
let state = Arc::clone(&state_clone);
let user = user_clone.clone();
trace!(
"KB STATISTICS called for bot {} by user {}",
user.bot_id,
user.user_id
);
let rt = tokio::runtime::Handle::try_current();
if rt.is_err() {
error!("KB STATISTICS: No tokio runtime available");
return Dynamic::UNIT;
}
let result = rt.unwrap().block_on(async {
get_kb_statistics(&state, &user).await
});
match result {
Ok(stats) => {
match serde_json::to_value(&stats) {
Ok(json) => Dynamic::from(json.to_string()),
Err(e) => {
error!("Failed to serialize KB statistics: {}", e);
Dynamic::UNIT
}
}
}
Err(e) => {
error!("Failed to get KB statistics: {}", e);
Dynamic::UNIT
}
}
});
// KB COLLECTION STATS collection_name - Get stats for specific collection
let state_clone2 = Arc::clone(&state);
let user_clone2 = user.clone();
engine.register_fn("KB COLLECTION STATS", move |collection_name: &str| -> Dynamic {
let state = Arc::clone(&state_clone2);
let user = user_clone2.clone();
trace!(
"KB COLLECTION STATS called for collection '{}' bot {} by user {}",
collection_name,
user.bot_id,
user.user_id
);
let rt = tokio::runtime::Handle::try_current();
if rt.is_err() {
error!("KB COLLECTION STATS: No tokio runtime available");
return Dynamic::UNIT;
}
let collection = collection_name.to_string();
let result = rt.unwrap().block_on(async {
get_collection_statistics(&state, &collection).await
});
match result {
Ok(stats) => {
match serde_json::to_value(&stats) {
Ok(json) => Dynamic::from(json.to_string()),
Err(e) => {
error!("Failed to serialize collection statistics: {}", e);
Dynamic::UNIT
}
}
}
Err(e) => {
error!("Failed to get collection statistics: {}", e);
Dynamic::UNIT
}
}
});
// KB DOCUMENTS COUNT - Get total document count for bot
let state_clone3 = Arc::clone(&state);
let user_clone3 = user.clone();
engine.register_fn("KB DOCUMENTS COUNT", move || -> i64 {
let state = Arc::clone(&state_clone3);
let user = user_clone3.clone();
trace!(
"KB DOCUMENTS COUNT called for bot {} by user {}",
user.bot_id,
user.user_id
);
let rt = tokio::runtime::Handle::try_current();
if rt.is_err() {
error!("KB DOCUMENTS COUNT: No tokio runtime available");
return 0;
}
let result = rt.unwrap().block_on(async {
get_documents_count(&state, &user).await
});
result.unwrap_or(0)
});
// KB DOCUMENTS ADDED SINCE days - Get count of documents added since N days ago
let state_clone4 = Arc::clone(&state);
let user_clone4 = user.clone();
engine.register_fn("KB DOCUMENTS ADDED SINCE", move |days: i64| -> i64 {
let state = Arc::clone(&state_clone4);
let user = user_clone4.clone();
trace!(
"KB DOCUMENTS ADDED SINCE {} days called for bot {} by user {}",
days,
user.bot_id,
user.user_id
);
let rt = tokio::runtime::Handle::try_current();
if rt.is_err() {
error!("KB DOCUMENTS ADDED SINCE: No tokio runtime available");
return 0;
}
let result = rt.unwrap().block_on(async {
get_documents_added_since(&state, &user, days).await
});
result.unwrap_or(0)
});
// KB LIST COLLECTIONS - List all collections for bot
let state_clone5 = Arc::clone(&state);
let user_clone5 = user.clone();
engine.register_fn("KB LIST COLLECTIONS", move || -> Dynamic {
let state = Arc::clone(&state_clone5);
let user = user_clone5.clone();
trace!(
"KB LIST COLLECTIONS called for bot {} by user {}",
user.bot_id,
user.user_id
);
let rt = tokio::runtime::Handle::try_current();
if rt.is_err() {
error!("KB LIST COLLECTIONS: No tokio runtime available");
return Dynamic::UNIT;
}
let result = rt.unwrap().block_on(async {
list_collections(&state, &user).await
});
match result {
Ok(collections) => {
let arr: Vec<Dynamic> = collections
.into_iter()
.map(Dynamic::from)
.collect();
Dynamic::from(arr)
}
Err(e) => {
error!("Failed to list collections: {}", e);
Dynamic::UNIT
}
}
});
// KB STORAGE SIZE - Get total storage size in MB
let state_clone6 = Arc::clone(&state);
let user_clone6 = user.clone();
engine.register_fn("KB STORAGE SIZE", move || -> f64 {
let state = Arc::clone(&state_clone6);
let user = user_clone6.clone();
trace!(
"KB STORAGE SIZE called for bot {} by user {}",
user.bot_id,
user.user_id
);
let rt = tokio::runtime::Handle::try_current();
if rt.is_err() {
error!("KB STORAGE SIZE: No tokio runtime available");
return 0.0;
}
let result = rt.unwrap().block_on(async {
get_storage_size(&state, &user).await
});
result.unwrap_or(0.0)
});
}
/// Get comprehensive KB statistics
async fn get_kb_statistics(
state: &AppState,
user: &UserSession,
) -> Result<KBStatistics, Box<dyn std::error::Error + Send + Sync>> {
let qdrant_url = state.qdrant_url.clone().unwrap_or_else(|| "https://localhost:6334".to_string());
let client = reqwest::Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
// Get list of collections
let collections_response = client
.get(format!("{}/collections", qdrant_url))
.send()
.await?;
let collections_json: serde_json::Value = collections_response.json().await?;
let collection_names: Vec<String> = collections_json["result"]["collections"]
.as_array()
.unwrap_or(&vec![])
.iter()
.filter_map(|c| c["name"].as_str().map(|s| s.to_string()))
.filter(|name| name.starts_with(&format!("kb_{}", user.bot_id)))
.collect();
let mut total_documents = 0u64;
let mut total_vectors = 0u64;
let mut total_disk_size = 0u64;
let mut total_ram_size = 0u64;
let mut collections = Vec::new();
for collection_name in &collection_names {
if let Ok(stats) = get_collection_statistics(state, collection_name).await {
total_documents += stats.points_count;
total_vectors += stats.vectors_count;
total_disk_size += stats.disk_data_size;
total_ram_size += stats.ram_data_size;
collections.push(stats);
}
}
// Get documents added in last week and month from database
let documents_added_last_week = get_documents_added_since(state, user, 7).await.unwrap_or(0) as u64;
let documents_added_last_month = get_documents_added_since(state, user, 30).await.unwrap_or(0) as u64;
Ok(KBStatistics {
total_collections: collection_names.len() as u64,
total_documents,
total_vectors,
total_disk_size_mb: total_disk_size as f64 / (1024.0 * 1024.0),
total_ram_size_mb: total_ram_size as f64 / (1024.0 * 1024.0),
documents_added_last_week,
documents_added_last_month,
collections,
})
}
/// Get statistics for a specific collection
async fn get_collection_statistics(
state: &AppState,
collection_name: &str,
) -> Result<CollectionStats, Box<dyn std::error::Error + Send + Sync>> {
let qdrant_url = state.qdrant_url.clone().unwrap_or_else(|| "https://localhost:6334".to_string());
let client = reqwest::Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
let response = client
.get(format!("{}/collections/{}", qdrant_url, collection_name))
.send()
.await?;
let json: serde_json::Value = response.json().await?;
let result = &json["result"];
Ok(CollectionStats {
name: collection_name.to_string(),
vectors_count: result["vectors_count"].as_u64().unwrap_or(0),
points_count: result["points_count"].as_u64().unwrap_or(0),
segments_count: result["segments_count"].as_u64().unwrap_or(0),
disk_data_size: result["disk_data_size"].as_u64().unwrap_or(0),
ram_data_size: result["ram_data_size"].as_u64().unwrap_or(0),
indexed_vectors_count: result["indexed_vectors_count"].as_u64().unwrap_or(0),
status: result["status"].as_str().unwrap_or("unknown").to_string(),
})
}
/// Get total document count for a bot
async fn get_documents_count(
state: &AppState,
user: &UserSession,
) -> Result<i64, Box<dyn std::error::Error + Send + Sync>> {
use diesel::prelude::*;
use diesel::sql_query;
use diesel::sql_types::BigInt;
#[derive(QueryableByName)]
struct CountResult {
#[diesel(sql_type = BigInt)]
count: i64,
}
let mut conn = state.conn.get()?;
let bot_id = user.bot_id.to_string();
let result: CountResult = sql_query(
"SELECT COUNT(*) as count FROM kb_documents WHERE bot_id = $1"
)
.bind::<diesel::sql_types::Text, _>(&bot_id)
.get_result(&mut *conn)?;
Ok(result.count)
}
/// Get count of documents added since N days ago
async fn get_documents_added_since(
state: &AppState,
user: &UserSession,
days: i64,
) -> Result<i64, Box<dyn std::error::Error + Send + Sync>> {
use diesel::prelude::*;
use diesel::sql_query;
use diesel::sql_types::{BigInt, Text, Integer};
#[derive(QueryableByName)]
struct CountResult {
#[diesel(sql_type = BigInt)]
count: i64,
}
let mut conn = state.conn.get()?;
let bot_id = user.bot_id.to_string();
let result: CountResult = sql_query(
"SELECT COUNT(*) as count FROM kb_documents
WHERE bot_id = $1
AND created_at >= NOW() - INTERVAL '1 day' * $2"
)
.bind::<Text, _>(&bot_id)
.bind::<Integer, _>(days as i32)
.get_result(&mut *conn)?;
Ok(result.count)
}
/// List all collections for a bot
async fn list_collections(
state: &AppState,
user: &UserSession,
) -> Result<Vec<String>, Box<dyn std::error::Error + Send + Sync>> {
let qdrant_url = state.qdrant_url.clone().unwrap_or_else(|| "https://localhost:6334".to_string());
let client = reqwest::Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
let response = client
.get(format!("{}/collections", qdrant_url))
.send()
.await?;
let json: serde_json::Value = response.json().await?;
let collections: Vec<String> = json["result"]["collections"]
.as_array()
.unwrap_or(&vec![])
.iter()
.filter_map(|c| c["name"].as_str().map(|s| s.to_string()))
.filter(|name| name.starts_with(&format!("kb_{}", user.bot_id)))
.collect();
Ok(collections)
}
/// Get total storage size in MB for a bot's collections
async fn get_storage_size(
state: &AppState,
user: &UserSession,
) -> Result<f64, Box<dyn std::error::Error + Send + Sync>> {
let stats = get_kb_statistics(state, user).await?;
Ok(stats.total_disk_size_mb)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_collection_stats_serialization() {
let stats = CollectionStats {
name: "test_collection".to_string(),
vectors_count: 1000,
points_count: 1000,
segments_count: 2,
disk_data_size: 1024 * 1024,
ram_data_size: 512 * 1024,
indexed_vectors_count: 1000,
status: "green".to_string(),
};
let json = serde_json::to_string(&stats).unwrap();
assert!(json.contains("test_collection"));
assert!(json.contains("1000"));
}
#[test]
fn test_kb_statistics_serialization() {
let stats = KBStatistics {
total_collections: 3,
total_documents: 5000,
total_vectors: 5000,
total_disk_size_mb: 10.5,
total_ram_size_mb: 5.2,
documents_added_last_week: 100,
documents_added_last_month: 500,
collections: vec![],
};
let json = serde_json::to_string(&stats).unwrap();
assert!(json.contains("5000"));
assert!(json.contains("10.5"));
}
}
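The aggregation performed by `get_kb_statistics` reduces to summing per-collection counters and converting byte totals to MB (1 MB = 1024 * 1024 bytes). A pure-function sketch, with illustrative struct and field names rather than the real `CollectionStats`:

```rust
// Pure-function sketch of the aggregation in get_kb_statistics
// (struct and field names here are illustrative, not the real CollectionStats).
struct Coll {
    points_count: u64,
    vectors_count: u64,
    disk_data_size: u64, // bytes
    ram_data_size: u64,  // bytes
}

fn aggregate(cols: &[Coll]) -> (u64, u64, f64, f64) {
    let docs = cols.iter().map(|c| c.points_count).sum();
    let vecs = cols.iter().map(|c| c.vectors_count).sum();
    let disk_mb = cols.iter().map(|c| c.disk_data_size).sum::<u64>() as f64 / (1024.0 * 1024.0);
    let ram_mb = cols.iter().map(|c| c.ram_data_size).sum::<u64>() as f64 / (1024.0 * 1024.0);
    (docs, vecs, disk_mb, ram_mb)
}

fn main() {
    let cols = [
        Coll { points_count: 1000, vectors_count: 1000, disk_data_size: 1024 * 1024, ram_data_size: 512 * 1024 },
        Coll { points_count: 4000, vectors_count: 4000, disk_data_size: 9 * 1024 * 1024, ram_data_size: 512 * 1024 },
    ];
    let (docs, vecs, disk_mb, ram_mb) = aggregate(&cols);
    assert_eq!(docs, 5000);
    assert_eq!(vecs, 5000);
    assert!((disk_mb - 10.0).abs() < 1e-9);
    assert!((ram_mb - 1.0).abs() < 1e-9);
}
```

Note the real implementation skips collections whose stats request fails, so totals are a lower bound when Qdrant is partially unavailable.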


@ -23,6 +23,7 @@ pub mod get;
pub mod hear_talk;
pub mod http_operations;
pub mod import_export;
pub mod kb_statistics;
pub mod last;
pub mod lead_scoring;
pub mod llm_keyword;


@ -6,6 +6,7 @@ pub mod directory;
pub mod dns;
pub mod kb;
pub mod package_manager;
pub mod secrets;
pub mod session;
pub mod shared;
pub mod ui_server;


@ -52,6 +52,8 @@ impl PackageManager {
self.register_devtools();
self.register_vector_db();
self.register_timeseries_db();
self.register_secrets();
self.register_observability();
self.register_host();
}
@ -188,8 +190,12 @@ impl PackageManager {
post_install_cmds_windows: vec![],
env_vars: HashMap::new(),
data_download_list: vec![
// Default small model for CPU or minimal GPU (4GB VRAM)
"https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf".to_string(),
// Embedding model for vector search
"https://huggingface.co/CompendiumLabs/bge-small-en-v1.5-gguf/resolve/main/bge-small-en-v1.5-f32.gguf".to_string(),
// GPT-OSS 20B F16 - Recommended for small GPU (16GB VRAM), no CPU
// Uncomment to download: "https://huggingface.co/unsloth/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-F16.gguf".to_string(),
],
exec_cmd: "nohup {{BIN_PATH}}/llama-server --port 8081 --ssl-key-file {{CONF_PATH}}/system/certificates/llm/server.key --ssl-cert-file {{CONF_PATH}}/system/certificates/llm/server.crt -m {{DATA_PATH}}/DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf > {{LOGS_PATH}}/llm.log 2>&1 & nohup {{BIN_PATH}}/llama-server --port 8082 --ssl-key-file {{CONF_PATH}}/system/certificates/embedding/server.key --ssl-cert-file {{CONF_PATH}}/system/certificates/embedding/server.crt -m {{DATA_PATH}}/bge-small-en-v1.5-f32.gguf --embedding > {{LOGS_PATH}}/embedding.log 2>&1 &".to_string(),
check_cmd: "curl -f -k https://localhost:8081/health >/dev/null 2>&1 && curl -f -k https://localhost:8082/health >/dev/null 2>&1".to_string(),
@ -333,8 +339,7 @@ impl PackageManager {
ports: vec![],
dependencies: vec!["alm".to_string()],
linux_packages: vec![],
macos_packages: vec!["git".to_string(), "node".to_string()],
windows_packages: vec![],
download_url: Some(
@ -342,15 +347,27 @@ impl PackageManager {
),
binary_name: Some("forgejo-runner".to_string()),
pre_install_cmds_linux: vec![
"mkdir -p {{CONF_PATH}}/alm-ci".to_string(),
],
post_install_cmds_linux: vec![
// Register runner with Forgejo instance
// Token must be obtained from Forgejo admin panel: Site Administration > Actions > Runners
"echo 'To register the runner, run:'".to_string(),
"echo '{{BIN_PATH}}/forgejo-runner register --instance $ALM_URL --token $ALM_RUNNER_TOKEN --name gbo --labels ubuntu-latest:docker://node:20-bookworm'".to_string(),
"echo 'Then start with: {{BIN_PATH}}/forgejo-runner daemon --config {{CONF_PATH}}/alm-ci/config.yaml'".to_string(),
],
pre_install_cmds_macos: vec![],
post_install_cmds_macos: vec![],
pre_install_cmds_windows: vec![],
post_install_cmds_windows: vec![],
env_vars: {
let mut env = HashMap::new();
env.insert("ALM_URL".to_string(), "$ALM_URL".to_string());
env.insert("ALM_RUNNER_TOKEN".to_string(), "$ALM_RUNNER_TOKEN".to_string());
env
},
data_download_list: Vec::new(),
exec_cmd: "{{BIN_PATH}}/forgejo-runner daemon --config {{CONF_PATH}}/alm-ci/config.yaml".to_string(),
check_cmd: "ps -ef | grep forgejo-runner | grep -v grep | grep {{BIN_PATH}}".to_string(),
},
);
@ -653,6 +670,109 @@ impl PackageManager {
);
}
/// Register HashiCorp Vault for secrets management.
/// Vault stores service credentials (drive, email, etc.) securely.
/// Only VAULT_ADDR and VAULT_TOKEN are needed in .env; all other secrets are fetched from Vault.
fn register_secrets(&mut self) {
self.components.insert(
"secrets".to_string(),
ComponentConfig {
name: "secrets".to_string(),
ports: vec![8200],
dependencies: vec![],
linux_packages: vec![],
macos_packages: vec![],
windows_packages: vec![],
download_url: Some(
"https://releases.hashicorp.com/vault/1.15.4/vault_1.15.4_linux_amd64.zip".to_string(),
),
binary_name: Some("vault".to_string()),
pre_install_cmds_linux: vec![
"mkdir -p {{DATA_PATH}}/vault".to_string(),
"mkdir -p {{CONF_PATH}}/vault".to_string(),
],
post_install_cmds_linux: vec![
// Initialize Vault and store root token
"{{BIN_PATH}}/vault operator init -key-shares=1 -key-threshold=1 -format=json > {{CONF_PATH}}/vault/init.json".to_string(),
// Extract and store unseal key and root token
"VAULT_UNSEAL_KEY=$(cat {{CONF_PATH}}/vault/init.json | grep -o '\"unseal_keys_b64\":\\[\"[^\"]*\"' | cut -d'\"' -f4)".to_string(),
"VAULT_ROOT_TOKEN=$(cat {{CONF_PATH}}/vault/init.json | grep -o '\"root_token\":\"[^\"]*\"' | cut -d'\"' -f4)".to_string(),
// Unseal vault
"{{BIN_PATH}}/vault operator unseal $VAULT_UNSEAL_KEY".to_string(),
// Enable KV secrets engine
"VAULT_TOKEN=$VAULT_ROOT_TOKEN {{BIN_PATH}}/vault secrets enable -path=gbo kv-v2".to_string(),
// Store initial secrets paths
"VAULT_TOKEN=$VAULT_ROOT_TOKEN {{BIN_PATH}}/vault kv put gbo/drive accesskey={{GENERATED_PASSWORD}} secret={{GENERATED_PASSWORD}}".to_string(),
"VAULT_TOKEN=$VAULT_ROOT_TOKEN {{BIN_PATH}}/vault kv put gbo/tables username=gbuser password={{GENERATED_PASSWORD}}".to_string(),
"VAULT_TOKEN=$VAULT_ROOT_TOKEN {{BIN_PATH}}/vault kv put gbo/cache password={{GENERATED_PASSWORD}}".to_string(),
"VAULT_TOKEN=$VAULT_ROOT_TOKEN {{BIN_PATH}}/vault kv put gbo/directory client_id= client_secret=".to_string(),
"echo 'Vault initialized. Add VAULT_ADDR=https://localhost:8200 and VAULT_TOKEN to .env'".to_string(),
"chmod 600 {{CONF_PATH}}/vault/init.json".to_string(),
],
pre_install_cmds_macos: vec![
"mkdir -p {{DATA_PATH}}/vault".to_string(),
"mkdir -p {{CONF_PATH}}/vault".to_string(),
],
post_install_cmds_macos: vec![],
pre_install_cmds_windows: vec![],
post_install_cmds_windows: vec![],
env_vars: {
let mut env = HashMap::new();
env.insert("VAULT_ADDR".to_string(), "https://localhost:8200".to_string());
env.insert("VAULT_SKIP_VERIFY".to_string(), "true".to_string());
env
},
data_download_list: Vec::new(),
exec_cmd: "{{BIN_PATH}}/vault server -config={{CONF_PATH}}/vault/config.hcl".to_string(),
check_cmd: "curl -f -k https://localhost:8200/v1/sys/health >/dev/null 2>&1".to_string(),
},
);
}
/// Register Vector for observability (log aggregation and metrics).
/// Component name: observability (analogous to `drive` for MinIO).
/// Config path: ./botserver-stack/conf/monitoring/vector.toml
/// Logs path: ./botserver-stack/logs/ (monitors all component logs)
fn register_observability(&mut self) {
self.components.insert(
"observability".to_string(),
ComponentConfig {
name: "observability".to_string(),
ports: vec![8686], // Vector API port
dependencies: vec!["timeseries_db".to_string()],
linux_packages: vec![],
macos_packages: vec![],
windows_packages: vec![],
download_url: Some(
"https://packages.timber.io/vector/0.35.0/vector-0.35.0-x86_64-unknown-linux-gnu.tar.gz".to_string(),
),
binary_name: Some("vector".to_string()),
pre_install_cmds_linux: vec![
"mkdir -p {{CONF_PATH}}/monitoring".to_string(),
"mkdir -p {{DATA_PATH}}/vector".to_string(),
],
post_install_cmds_linux: vec![],
pre_install_cmds_macos: vec![
"mkdir -p {{CONF_PATH}}/monitoring".to_string(),
"mkdir -p {{DATA_PATH}}/vector".to_string(),
],
post_install_cmds_macos: vec![],
pre_install_cmds_windows: vec![],
post_install_cmds_windows: vec![],
env_vars: HashMap::new(),
data_download_list: Vec::new(),
// Vector monitors all logs in botserver-stack/logs/
// - logs/system/ for botserver logs
// - logs/drive/ for minio logs
// - logs/tables/ for postgres logs
// - logs/cache/ for redis logs
// - etc.
exec_cmd: "{{BIN_PATH}}/vector --config {{CONF_PATH}}/monitoring/vector.toml".to_string(),
check_cmd: "curl -f http://localhost:8686/health >/dev/null 2>&1".to_string(),
},
);
}
fn register_host(&mut self) {
self.components.insert(
"host".to_string(),

src/core/secrets/mod.rs (new file, 745 lines)

@ -0,0 +1,745 @@
//! Secrets Management Module
//!
//! Provides integration with HashiCorp Vault for secure secrets management.
//! Secrets are fetched from Vault at runtime, keeping .env minimal with only
//! VAULT_ADDR and VAULT_TOKEN.
//!
//! With Vault, .env contains ONLY:
//! - VAULT_ADDR - Vault server address
//! - VAULT_TOKEN - Vault authentication token
//!
//! Everything else is stored in Vault:
//!
//! Vault paths:
//! - gbo/directory - Zitadel connection (url, project_id, client_id, client_secret)
//! - gbo/tables - PostgreSQL credentials (host, port, database, username, password)
//! - gbo/drive - MinIO/S3 credentials (endpoint, accesskey, secret)
//! - gbo/cache - Redis credentials (host, port, password)
//! - gbo/email - Stalwart credentials (host, username, password)
//! - gbo/llm - LLM API keys (openai_key, anthropic_key, groq_key, deepseek_key)
//! - gbo/encryption - Encryption keys (master_key, data_key)
//! - gbo/meet - LiveKit credentials (url, api_key, api_secret)
//! - gbo/alm - Forgejo credentials (url, admin_password, runner_token)
//! - gbo/vectordb - Qdrant credentials (url, api_key)
//! - gbo/observability - InfluxDB credentials (url, org, token)
use anyhow::{anyhow, Context, Result};
use log::{debug, error, info, trace, warn};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::env;
use std::sync::Arc;
use tokio::sync::RwLock;
/// Secret paths in Vault
pub struct SecretPaths;
impl SecretPaths {
/// Directory service (Zitadel) - url, project_id, client_id, client_secret
pub const DIRECTORY: &'static str = "gbo/directory";
/// Database (PostgreSQL) - host, port, database, username, password
pub const TABLES: &'static str = "gbo/tables";
/// Object storage (MinIO) - endpoint, accesskey, secret
pub const DRIVE: &'static str = "gbo/drive";
/// Cache (Redis) - host, port, password
pub const CACHE: &'static str = "gbo/cache";
/// Email (Stalwart) - host, username, password
pub const EMAIL: &'static str = "gbo/email";
/// LLM providers - openai_key, anthropic_key, groq_key, deepseek_key, mistral_key
pub const LLM: &'static str = "gbo/llm";
/// Encryption - master_key, data_key
pub const ENCRYPTION: &'static str = "gbo/encryption";
/// Video meetings (LiveKit) - url, api_key, api_secret
pub const MEET: &'static str = "gbo/meet";
/// ALM (Forgejo) - url, admin_password, runner_token
pub const ALM: &'static str = "gbo/alm";
/// Vector database (Qdrant) - url, api_key
pub const VECTORDB: &'static str = "gbo/vectordb";
/// Observability (InfluxDB) - url, org, bucket, token
pub const OBSERVABILITY: &'static str = "gbo/observability";
}
/// Vault configuration
///
/// .env should contain ONLY these two variables:
/// - VAULT_ADDR=https://localhost:8200
/// - VAULT_TOKEN=hvs.xxxxxxxxxxxxx
///
/// All other configuration is fetched from Vault.
#[derive(Debug, Clone)]
pub struct VaultConfig {
/// Vault server address (e.g., https://localhost:8200)
pub addr: String,
/// Vault authentication token
pub token: String,
/// Skip TLS verification (for self-signed certs)
pub skip_verify: bool,
/// Cache TTL in seconds (0 = no caching)
pub cache_ttl: u64,
/// Namespace (for Vault Enterprise)
pub namespace: Option<String>,
}
impl Default for VaultConfig {
fn default() -> Self {
Self {
addr: env::var("VAULT_ADDR").unwrap_or_else(|_| "https://localhost:8200".to_string()),
token: env::var("VAULT_TOKEN").unwrap_or_default(),
skip_verify: env::var("VAULT_SKIP_VERIFY")
.map(|v| v == "true" || v == "1")
.unwrap_or(true),
cache_ttl: env::var("VAULT_CACHE_TTL")
.ok()
.and_then(|v| v.parse().ok())
.unwrap_or(300),
namespace: env::var("VAULT_NAMESPACE").ok(),
}
}
}
/// Cached secret with expiry
#[derive(Debug, Clone)]
struct CachedSecret {
data: HashMap<String, String>,
expires_at: std::time::Instant,
}
/// Vault response structures
#[derive(Debug, Deserialize)]
struct VaultResponse {
data: VaultData,
}
#[derive(Debug, Deserialize)]
struct VaultData {
data: HashMap<String, serde_json::Value>,
}
/// Secrets manager service
#[derive(Clone)]
pub struct SecretsManager {
config: VaultConfig,
client: reqwest::Client,
cache: Arc<RwLock<HashMap<String, CachedSecret>>>,
enabled: bool,
}
impl SecretsManager {
/// Create a new secrets manager
pub fn new(config: VaultConfig) -> Result<Self> {
let enabled = !config.token.is_empty() && !config.addr.is_empty();
if !enabled {
warn!("Vault not configured (VAULT_ADDR or VAULT_TOKEN missing). Using environment variables directly.");
}
let client = reqwest::Client::builder()
.danger_accept_invalid_certs(config.skip_verify)
.timeout(std::time::Duration::from_secs(10))
.build()
.context("Failed to create HTTP client")?;
Ok(Self {
config,
client,
cache: Arc::new(RwLock::new(HashMap::new())),
enabled,
})
}
/// Create with default configuration from environment
pub fn from_env() -> Result<Self> {
Self::new(VaultConfig::default())
}
/// Check if Vault is enabled
pub fn is_enabled(&self) -> bool {
self.enabled
}
/// Get a secret from Vault
pub async fn get_secret(&self, path: &str) -> Result<HashMap<String, String>> {
if !self.enabled {
return self.get_from_env(path);
}
// Check cache first
if let Some(cached) = self.get_cached(path).await {
trace!("Secret '{}' found in cache", path);
return Ok(cached);
}
// Fetch from Vault
let secret = self.fetch_from_vault(path).await?;
// Cache the result
if self.config.cache_ttl > 0 {
self.cache_secret(path, secret.clone()).await;
}
Ok(secret)
}
/// Get a single value from a secret path
pub async fn get_value(&self, path: &str, key: &str) -> Result<String> {
let secret = self.get_secret(path).await?;
secret
.get(key)
.cloned()
.ok_or_else(|| anyhow!("Key '{}' not found in secret '{}'", key, path))
}
/// Get drive credentials
pub async fn get_drive_credentials(&self) -> Result<(String, String)> {
let secret = self.get_secret(SecretPaths::DRIVE).await?;
Ok((
secret.get("accesskey").cloned().unwrap_or_default(),
secret.get("secret").cloned().unwrap_or_default(),
))
}
/// Get database credentials
pub async fn get_database_credentials(&self) -> Result<(String, String)> {
let secret = self.get_secret(SecretPaths::TABLES).await?;
Ok((
secret
.get("username")
.cloned()
.unwrap_or_else(|| "gbuser".to_string()),
secret.get("password").cloned().unwrap_or_default(),
))
}
/// Get cache (Redis) password
pub async fn get_cache_password(&self) -> Result<Option<String>> {
let secret = self.get_secret(SecretPaths::CACHE).await?;
Ok(secret.get("password").cloned())
}
/// Get directory (Zitadel) full configuration
/// Returns (url, project_id, client_id, client_secret)
pub async fn get_directory_config(&self) -> Result<(String, String, String, String)> {
let secret = self.get_secret(SecretPaths::DIRECTORY).await?;
Ok((
secret
.get("url")
.cloned()
.unwrap_or_else(|| "https://localhost:8080".to_string()),
secret.get("project_id").cloned().unwrap_or_default(),
secret.get("client_id").cloned().unwrap_or_default(),
secret.get("client_secret").cloned().unwrap_or_default(),
))
}
/// Get directory (Zitadel) credentials only
pub async fn get_directory_credentials(&self) -> Result<(String, String)> {
let secret = self.get_secret(SecretPaths::DIRECTORY).await?;
Ok((
secret.get("client_id").cloned().unwrap_or_default(),
secret.get("client_secret").cloned().unwrap_or_default(),
))
}
/// Get database full configuration
/// Returns (host, port, database, username, password)
pub async fn get_database_config(&self) -> Result<(String, u16, String, String, String)> {
let secret = self.get_secret(SecretPaths::TABLES).await?;
Ok((
secret
.get("host")
.cloned()
.unwrap_or_else(|| "localhost".to_string()),
secret
.get("port")
.and_then(|p| p.parse().ok())
.unwrap_or(5432),
secret
.get("database")
.cloned()
.unwrap_or_else(|| "botserver".to_string()),
secret
.get("username")
.cloned()
.unwrap_or_else(|| "gbuser".to_string()),
secret.get("password").cloned().unwrap_or_default(),
))
}
/// Get database connection URL
pub async fn get_database_url(&self) -> Result<String> {
let (host, port, database, username, password) = self.get_database_config().await?;
Ok(format!(
"postgres://{}:{}@{}:{}/{}",
username, password, host, port, database
))
}
/// Get vector database (Qdrant) configuration
pub async fn get_vectordb_config(&self) -> Result<(String, Option<String>)> {
let secret = self.get_secret(SecretPaths::VECTORDB).await?;
Ok((
secret
.get("url")
.cloned()
.unwrap_or_else(|| "https://localhost:6334".to_string()),
secret.get("api_key").cloned(),
))
}
/// Get observability (InfluxDB) configuration
pub async fn get_observability_config(&self) -> Result<(String, String, String, String)> {
let secret = self.get_secret(SecretPaths::OBSERVABILITY).await?;
Ok((
secret
.get("url")
.cloned()
.unwrap_or_else(|| "http://localhost:8086".to_string()),
secret
.get("org")
.cloned()
.unwrap_or_else(|| "pragmatismo".to_string()),
secret
.get("bucket")
.cloned()
.unwrap_or_else(|| "metrics".to_string()),
secret.get("token").cloned().unwrap_or_default(),
))
}
/// Get LLM API keys
pub async fn get_llm_api_key(&self, provider: &str) -> Result<Option<String>> {
let secret = self.get_secret(SecretPaths::LLM).await?;
let key = format!("{}_key", provider.to_lowercase());
Ok(secret.get(&key).cloned())
}
/// Get encryption key
pub async fn get_encryption_key(&self) -> Result<String> {
let secret = self.get_secret(SecretPaths::ENCRYPTION).await?;
secret
.get("master_key")
.cloned()
.ok_or_else(|| anyhow!("Encryption master key not found"))
}
/// Store a secret in Vault
pub async fn put_secret(&self, path: &str, data: HashMap<String, String>) -> Result<()> {
if !self.enabled {
warn!("Vault not enabled, cannot store secret at '{}'", path);
return Ok(());
}
let url = format!("{}/v1/secret/data/{}", self.config.addr, path);
let body = serde_json::json!({
"data": data
});
let response = self
.client
.post(&url)
.header("X-Vault-Token", &self.config.token)
.json(&body)
.send()
.await
.context("Failed to connect to Vault")?;
if !response.status().is_success() {
let status = response.status();
let error_text = response.text().await.unwrap_or_default();
return Err(anyhow!("Vault write failed ({}): {}", status, error_text));
}
// Invalidate cache
self.invalidate_cache(path).await;
info!("Secret stored at '{}'", path);
Ok(())
}
/// Delete a secret from Vault
pub async fn delete_secret(&self, path: &str) -> Result<()> {
if !self.enabled {
warn!("Vault not enabled, cannot delete secret at '{}'", path);
return Ok(());
}
let url = format!("{}/v1/secret/data/{}", self.config.addr, path);
let response = self
.client
.delete(&url)
.header("X-Vault-Token", &self.config.token)
.send()
.await
.context("Failed to connect to Vault")?;
if !response.status().is_success() {
let status = response.status();
let error_text = response.text().await.unwrap_or_default();
return Err(anyhow!("Vault delete failed ({}): {}", status, error_text));
}
// Invalidate cache
self.invalidate_cache(path).await;
info!("Secret deleted at '{}'", path);
Ok(())
}
/// Check Vault health
pub async fn health_check(&self) -> Result<bool> {
if !self.enabled {
return Ok(false);
}
let url = format!("{}/v1/sys/health", self.config.addr);
let response = self
.client
.get(&url)
.send()
.await
.context("Failed to connect to Vault")?;
// Vault returns 200 for initialized, unsealed, active
// 429 for unsealed, standby
// 472 for disaster recovery replication secondary
// 473 for performance standby
// 501 for not initialized
// 503 for sealed
Ok(response.status().as_u16() == 200 || response.status().as_u16() == 429)
}
/// Fetch secret from Vault API
async fn fetch_from_vault(&self, path: &str) -> Result<HashMap<String, String>> {
let url = format!("{}/v1/secret/data/{}", self.config.addr, path);
debug!("Fetching secret from Vault: {}", path);
let mut request = self
.client
.get(&url)
.header("X-Vault-Token", &self.config.token);
if let Some(ref namespace) = self.config.namespace {
request = request.header("X-Vault-Namespace", namespace);
}
let response = request.send().await.context("Failed to connect to Vault")?;
if response.status() == reqwest::StatusCode::NOT_FOUND {
debug!("Secret not found in Vault: {}", path);
return Ok(HashMap::new());
}
if !response.status().is_success() {
let status = response.status();
let error_text = response.text().await.unwrap_or_default();
return Err(anyhow!("Vault read failed ({}): {}", status, error_text));
}
let vault_response: VaultResponse = response
.json()
.await
.context("Failed to parse Vault response")?;
// Convert JSON values to strings
let data: HashMap<String, String> = vault_response
.data
.data
.into_iter()
.map(|(k, v)| {
let value = match v {
serde_json::Value::String(s) => s,
other => other.to_string().trim_matches('"').to_string(),
};
(k, value)
})
.collect();
debug!("Secret '{}' fetched from Vault ({} keys)", path, data.len());
Ok(data)
}
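The nested `.data.data` access above mirrors the KV v2 read envelope: a GET on `/v1/secret/data/<path>` returns the secret fields one level deeper than the version metadata, roughly (illustrative values):

```json
{
  "data": {
    "data": {
      "accesskey": "AKIA-EXAMPLE",
      "secret": "example-secret"
    },
    "metadata": {
      "version": 3,
      "created_time": "2025-11-30T19:25:51Z"
    }
  }
}
```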
/// Get cached secret if not expired
async fn get_cached(&self, path: &str) -> Option<HashMap<String, String>> {
let cache = self.cache.read().await;
if let Some(cached) = cache.get(path) {
if cached.expires_at > std::time::Instant::now() {
return Some(cached.data.clone());
}
}
None
}
/// Cache a secret
async fn cache_secret(&self, path: &str, data: HashMap<String, String>) {
let mut cache = self.cache.write().await;
cache.insert(
path.to_string(),
CachedSecret {
data,
expires_at: std::time::Instant::now()
+ std::time::Duration::from_secs(self.config.cache_ttl),
},
);
}
/// Invalidate cached secret
async fn invalidate_cache(&self, path: &str) {
let mut cache = self.cache.write().await;
cache.remove(path);
}
/// Clear all cached secrets
pub async fn clear_cache(&self) {
let mut cache = self.cache.write().await;
cache.clear();
}
/// Fallback: get secrets from environment variables
fn get_from_env(&self, path: &str) -> Result<HashMap<String, String>> {
let mut data = HashMap::new();
match path {
SecretPaths::DRIVE => {
if let Ok(v) = env::var("DRIVE_ACCESSKEY") {
data.insert("accesskey".to_string(), v);
}
if let Ok(v) = env::var("DRIVE_SECRET") {
data.insert("secret".to_string(), v);
}
}
SecretPaths::CACHE => {
if let Ok(v) = env::var("REDIS_PASSWORD") {
data.insert("password".to_string(), v);
}
}
SecretPaths::DIRECTORY => {
if let Ok(v) = env::var("DIRECTORY_URL") {
data.insert("url".to_string(), v);
}
if let Ok(v) = env::var("DIRECTORY_PROJECT_ID") {
data.insert("project_id".to_string(), v);
}
if let Ok(v) = env::var("ZITADEL_CLIENT_ID") {
data.insert("client_id".to_string(), v);
}
if let Ok(v) = env::var("ZITADEL_CLIENT_SECRET") {
data.insert("client_secret".to_string(), v);
}
}
SecretPaths::TABLES => {
if let Ok(v) = env::var("DB_HOST") {
data.insert("host".to_string(), v);
}
if let Ok(v) = env::var("DB_PORT") {
data.insert("port".to_string(), v);
}
if let Ok(v) = env::var("DB_NAME") {
data.insert("database".to_string(), v);
}
if let Ok(v) = env::var("DB_USER") {
data.insert("username".to_string(), v);
}
if let Ok(v) = env::var("DB_PASSWORD") {
data.insert("password".to_string(), v);
}
// Also support DATABASE_URL for backwards compatibility
if let Ok(url) = env::var("DATABASE_URL") {
// Parse postgres://user:pass@host:port/db
if let Some(parsed) = parse_database_url(&url) {
data.extend(parsed);
}
}
}
SecretPaths::VECTORDB => {
if let Ok(v) = env::var("QDRANT_URL") {
data.insert("url".to_string(), v);
}
if let Ok(v) = env::var("QDRANT_API_KEY") {
data.insert("api_key".to_string(), v);
}
}
SecretPaths::OBSERVABILITY => {
if let Ok(v) = env::var("INFLUXDB_URL") {
data.insert("url".to_string(), v);
}
if let Ok(v) = env::var("INFLUXDB_ORG") {
data.insert("org".to_string(), v);
}
if let Ok(v) = env::var("INFLUXDB_BUCKET") {
data.insert("bucket".to_string(), v);
}
if let Ok(v) = env::var("INFLUXDB_TOKEN") {
data.insert("token".to_string(), v);
}
}
SecretPaths::EMAIL => {
if let Ok(v) = env::var("EMAIL_USER") {
data.insert("username".to_string(), v);
}
if let Ok(v) = env::var("EMAIL_PASSWORD") {
data.insert("password".to_string(), v);
}
}
SecretPaths::LLM => {
if let Ok(v) = env::var("OPENAI_API_KEY") {
data.insert("openai_key".to_string(), v);
}
if let Ok(v) = env::var("ANTHROPIC_API_KEY") {
data.insert("anthropic_key".to_string(), v);
}
if let Ok(v) = env::var("GROQ_API_KEY") {
data.insert("groq_key".to_string(), v);
}
}
SecretPaths::ENCRYPTION => {
if let Ok(v) = env::var("ENCRYPTION_KEY") {
data.insert("master_key".to_string(), v);
}
}
SecretPaths::MEET => {
if let Ok(v) = env::var("LIVEKIT_API_KEY") {
data.insert("api_key".to_string(), v);
}
if let Ok(v) = env::var("LIVEKIT_API_SECRET") {
data.insert("api_secret".to_string(), v);
}
}
SecretPaths::ALM => {
if let Ok(v) = env::var("ALM_URL") {
data.insert("url".to_string(), v);
}
if let Ok(v) = env::var("ALM_ADMIN_PASSWORD") {
data.insert("admin_password".to_string(), v);
}
if let Ok(v) = env::var("ALM_RUNNER_TOKEN") {
data.insert("runner_token".to_string(), v);
}
}
_ => {
warn!("Unknown secret path: {}", path);
}
}
Ok(data)
}
}
/// Parse a DATABASE_URL into individual components
fn parse_database_url(url: &str) -> Option<HashMap<String, String>> {
// postgres://user:pass@host:port/database
let url = url.strip_prefix("postgres://")?;
let mut data = HashMap::new();
// Split user:pass@host:port/database
let (auth, rest) = url.split_once('@')?;
let (user, pass) = auth.split_once(':').unwrap_or((auth, ""));
data.insert("username".to_string(), user.to_string());
data.insert("password".to_string(), pass.to_string());
// Split host:port/database
let (host_port, database) = rest.split_once('/').unwrap_or((rest, "botserver"));
let (host, port) = host_port.split_once(':').unwrap_or((host_port, "5432"));
data.insert("host".to_string(), host.to_string());
data.insert("port".to_string(), port.to_string());
data.insert("database".to_string(), database.to_string());
Some(data)
}
/// Initialize secrets manager from environment
///
/// .env should contain ONLY:
/// ```text
/// VAULT_ADDR=https://localhost:8200
/// VAULT_TOKEN=hvs.xxxxxxxxxxxxx
/// ```
///
/// All other configuration is fetched from Vault at runtime.
pub fn init_secrets_manager() -> Result<SecretsManager> {
SecretsManager::from_env()
}
/// Bootstrap configuration structure
/// Used when Vault is not yet available (initial setup)
#[derive(Debug, Clone)]
pub struct BootstrapConfig {
pub vault_addr: String,
pub vault_token: String,
}
impl BootstrapConfig {
/// Load from .env file
pub fn from_env() -> Result<Self> {
Ok(Self {
vault_addr: env::var("VAULT_ADDR").context("VAULT_ADDR not set in .env")?,
vault_token: env::var("VAULT_TOKEN").context("VAULT_TOKEN not set in .env")?,
})
}
/// Check if .env is properly configured
pub fn is_configured() -> bool {
env::var("VAULT_ADDR").is_ok() && env::var("VAULT_TOKEN").is_ok()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_vault_config_default() {
// Temporarily set environment variables
std::env::set_var("VAULT_ADDR", "https://test:8200");
std::env::set_var("VAULT_TOKEN", "test-token");
let config = VaultConfig::default();
assert_eq!(config.addr, "https://test:8200");
assert_eq!(config.token, "test-token");
assert!(config.skip_verify);
// Clean up
std::env::remove_var("VAULT_ADDR");
std::env::remove_var("VAULT_TOKEN");
}
#[test]
fn test_secrets_manager_disabled_without_token() {
std::env::remove_var("VAULT_TOKEN");
std::env::set_var("VAULT_ADDR", "https://localhost:8200");
let manager = SecretsManager::from_env().unwrap();
assert!(!manager.is_enabled());
std::env::remove_var("VAULT_ADDR");
}
#[tokio::test]
async fn test_get_from_env_fallback() {
std::env::set_var("DRIVE_ACCESSKEY", "test-access");
std::env::set_var("DRIVE_SECRET", "test-secret");
std::env::remove_var("VAULT_TOKEN");
let manager = SecretsManager::from_env().unwrap();
let secret = manager.get_secret(SecretPaths::DRIVE).await.unwrap();
assert_eq!(secret.get("accesskey"), Some(&"test-access".to_string()));
assert_eq!(secret.get("secret"), Some(&"test-secret".to_string()));
std::env::remove_var("DRIVE_ACCESSKEY");
std::env::remove_var("DRIVE_SECRET");
}
}

@@ -1,20 +1,13 @@
TALK "Please, take a photo of the QR Code and send to me."
HEAR doc as QRCODE
text = null
PARAM doc AS QRCODE LIKE "photo of QR code" DESCRIPTION "QR Code image to scan and load document"
IF doc THEN
TALK "Reading document " + doc + "..."
text = GET doc
END IF
DESCRIPTION "Scan a QR Code to load and query a document"
text = GET doc
IF text THEN
text = "Based on this document, answer the person's questions:\n\n" + text
SET CONTEXT text
SET CONTEXT "Based on this document, answer the person's questions:\n\n" + text
TALK "Document ${doc} loaded. You can ask me anything about it."
TALK "Please, wait while I convert pages to images..."
SEND FILE doc
ELSE
TALK "Document was not found, please try again."
TALK "Document not found, please try again."
END IF

@@ -1,22 +1,27 @@
ADD TOOL "qr"
CLEAR SUGGESTIONS
ADD SUGGESTION "scan" AS "Scan a QR Code"
ADD SUGGESTION "find" AS "Find a procedure"
ADD SUGGESTION "help" AS "How to search documents"
BEGIN TALK
General Bots AI Search
General Bots AI Search
Comprehensive Document Search with AI summaries and EDM integration.
Comprehensive Document Search
Supports all document types, displays PDF pages with AI summaries, and integrates seamlessly with EDM systems.
**Options:**
Scan a QR Code - Send a photo to scan
Find a Procedure - Ask about any process
We are here to assist you! To get started, please choose one of the options below:
1 - Scan a QR Code: Send a photo of the QR Code you would like to scan by typing 'qr'.
2 - Find a Procedure: If you need information about a specific procedure, just let me know what it is, and I will help you!
Examples:
How to send a fax?
How to clean the machine?
How to find a contact?
Let's get started!
**Examples:**
- How to send a fax?
- How to clean the machine?
- How to find a contact?
END TALK
BEGIN SYSTEM PROMPT
You are a document search assistant. Help users find procedures and information from documents.
When users want to scan QR codes, use the qr tool.
Provide clear, concise answers based on document content.
END SYSTEM PROMPT

@@ -0,0 +1,81 @@
REM Analytics Dashboard Start Dialog
REM Displays pre-computed statistics from update-stats.bas
REM No heavy computation at conversation start
DESCRIPTION "View knowledge base analytics and statistics"
REM Load pre-computed values from BOT MEMORY
totalDocs = GET BOT MEMORY("analytics_total_docs")
totalVectors = GET BOT MEMORY("analytics_total_vectors")
storageMB = GET BOT MEMORY("analytics_storage_mb")
collections = GET BOT MEMORY("analytics_collections")
docsWeek = GET BOT MEMORY("analytics_docs_week")
docsMonth = GET BOT MEMORY("analytics_docs_month")
growthRate = GET BOT MEMORY("analytics_growth_rate")
healthPercent = GET BOT MEMORY("analytics_health_percent")
lastUpdate = GET BOT MEMORY("analytics_last_update")
summary = GET BOT MEMORY("analytics_summary")
REM Set contexts for different report types
SET CONTEXT "overview" AS "Total documents: " + totalDocs + ", Storage: " + storageMB + " MB, Collections: " + collections
SET CONTEXT "activity" AS "Documents added this week: " + docsWeek + ", This month: " + docsMonth + ", Growth rate: " + growthRate + "%"
SET CONTEXT "health" AS "System health: " + healthPercent + "%, Last updated: " + lastUpdate
REM Clear and set up suggestions
CLEAR SUGGESTIONS
ADD SUGGESTION "overview" AS "Show overview"
ADD SUGGESTION "overview" AS "Storage usage"
ADD SUGGESTION "activity" AS "Recent activity"
ADD SUGGESTION "activity" AS "Growth trends"
ADD SUGGESTION "health" AS "System health"
ADD SUGGESTION "health" AS "Collection status"
REM Add tools for detailed reports
ADD TOOL "detailed-report"
ADD TOOL "export-stats"
REM Welcome message with pre-computed summary
IF summary <> "" THEN
TALK summary
TALK ""
END IF
TALK "📊 **Analytics Dashboard**"
TALK ""
IF totalDocs <> "" THEN
TALK "**Knowledge Base Overview**"
TALK "• Documents: " + FORMAT(totalDocs, "#,##0")
TALK "• Vectors: " + FORMAT(totalVectors, "#,##0")
TALK "• Storage: " + FORMAT(storageMB, "#,##0.00") + " MB"
TALK "• Collections: " + collections
TALK ""
TALK "**Recent Activity**"
TALK "• This week: +" + FORMAT(docsWeek, "#,##0") + " documents"
TALK "• This month: +" + FORMAT(docsMonth, "#,##0") + " documents"
IF growthRate <> "" THEN
IF growthRate > 0 THEN
TALK "• Trend: 📈 +" + FORMAT(growthRate, "#,##0.0") + "% vs average"
ELSE
TALK "• Trend: 📉 " + FORMAT(growthRate, "#,##0.0") + "% vs average"
END IF
END IF
TALK ""
IF healthPercent <> "" THEN
IF healthPercent = 100 THEN
TALK "✅ All systems healthy"
ELSE
TALK "⚠️ System health: " + FORMAT(healthPercent, "#,##0") + "%"
END IF
END IF
ELSE
TALK "Statistics are being computed. Please check back in a few minutes."
TALK "Run the update-stats schedule to refresh data."
END IF
TALK ""
TALK "Ask me about any metric or select a topic above."

@@ -0,0 +1,52 @@
REM Analytics Statistics Update
REM Runs hourly to pre-compute dashboard statistics
REM Similar pattern to announcements/update-summary.bas
SET SCHEDULE "0 * * * *"
REM Fetch KB statistics
stats = KB STATISTICS
statsObj = JSON PARSE stats
REM Store document counts
SET BOT MEMORY "analytics_total_docs", statsObj.total_documents
SET BOT MEMORY "analytics_total_vectors", statsObj.total_vectors
SET BOT MEMORY "analytics_storage_mb", statsObj.total_disk_size_mb
SET BOT MEMORY "analytics_collections", statsObj.total_collections
REM Store activity metrics
SET BOT MEMORY "analytics_docs_week", statsObj.documents_added_last_week
SET BOT MEMORY "analytics_docs_month", statsObj.documents_added_last_month
REM Calculate growth rate
IF statsObj.documents_added_last_month > 0 THEN
weeklyAvg = statsObj.documents_added_last_month / 4
IF weeklyAvg > 0 THEN
growthRate = ((statsObj.documents_added_last_week - weeklyAvg) / weeklyAvg) * 100
SET BOT MEMORY "analytics_growth_rate", growthRate
END IF
END IF
REM Check collection health
healthyCount = 0
totalCount = 0
FOR EACH coll IN statsObj.collections
totalCount = totalCount + 1
IF coll.status = "green" THEN
healthyCount = healthyCount + 1
END IF
NEXT
IF totalCount > 0 THEN
healthPercent = (healthyCount / totalCount) * 100
SET BOT MEMORY "analytics_health_percent", healthPercent
END IF
REM Store last update timestamp
SET BOT MEMORY "analytics_last_update", NOW()
REM Generate summary for quick display
summary = "📊 " + FORMAT(statsObj.total_documents, "#,##0") + " docs"
summary = summary + " | " + FORMAT(statsObj.total_disk_size_mb, "#,##0.0") + " MB"
summary = summary + " | +" + FORMAT(statsObj.documents_added_last_week, "#,##0") + " this week"
SET BOT MEMORY "analytics_summary", summary

@@ -1,9 +1,8 @@
PARAM subject as string
DESCRIPTION "Called when someone wants to change the conversation subject."
PARAM subject AS STRING LIKE "circular" DESCRIPTION "Subject to switch conversation to: circular, comunicado, or geral"
kbname = LLM "Return a single word, circular, comunicado or geral, according to this sentence: " + subject
DESCRIPTION "Switch conversation subject when user wants to change topic"
kbname = LLM "Return single word: circular, comunicado or geral based on: " + subject
ADD_KB kbname
TALK "You have chosen to change the subject to " + subject + "."
TALK "Subject changed to " + subject

@@ -1,18 +1,21 @@
let resume1 = GET BOT MEMORY("resume")
let resume2 = GET BOT MEMORY("auxiliom")
let resume3 = GET BOT MEMORY("toolbix")
resume1 = GET BOT MEMORY("resume")
resume2 = GET BOT MEMORY("auxiliom")
resume3 = GET BOT MEMORY("toolbix")
SET CONTEXT "general" AS resume1
SET CONTEXT "auxiliom" AS resume2
SET CONTEXT "toolbix" AS resume3
SET CONTEXT "general" AS resume1
SET CONTEXT "auxiliom" AS resume2
SET CONTEXT "toolbix" AS resume3
CLEAR SUGGESTIONS
ADD SUGGESTION "general" AS "Show me the weekly announcements"
ADD SUGGESTION "auxiliom" AS "Explain Auxiliom to me"
ADD SUGGESTION "auxiliom" AS "What does Auxiliom provide?"
ADD SUGGESTION "toolbix" AS "Show me Toolbix features"
ADD SUGGESTION "toolbix" AS "How can Toolbix help my business?"
ADD SUGGESTION "general" AS "Weekly announcements"
ADD SUGGESTION "general" AS "Latest circulars"
ADD SUGGESTION "auxiliom" AS "What is Auxiliom?"
ADD SUGGESTION "auxiliom" AS "Auxiliom services"
ADD SUGGESTION "toolbix" AS "Toolbix features"
ADD SUGGESTION "toolbix" AS "Toolbix for business"
ADD TOOL "change-subject"
TALK resume1
TALK "You can ask me about any of the announcements or circulars."
TALK "Ask me about any announcement or circular."

@@ -1,12 +1,11 @@
SET SCHEDULE "59 * * * *"
let text = GET "announcements.gbkb/news/news.pdf"
let resume = LLM "In a few words, resume this: " + text
text = GET "announcements.gbkb/news/news.pdf"
resume = LLM "In a few words, resume this: " + text
SET BOT MEMORY "resume", resume
let text1 = GET "announcements.gbkb/auxiliom/auxiliom.pdf"
text1 = GET "announcements.gbkb/auxiliom/auxiliom.pdf"
SET BOT MEMORY "auxiliom", text1
let text2 = GET "announcements.gbkb/toolbix/toolbix.pdf"
text2 = GET "announcements.gbkb/toolbix/toolbix.pdf"
SET BOT MEMORY "toolbix", text2

@@ -1,18 +1,42 @@
list = DIR "default.gbdrive"
PARAM folder AS STRING LIKE "default.gbdrive" DESCRIPTION "Folder to backup files from" OPTIONAL
PARAM days_old AS INTEGER LIKE 3 DESCRIPTION "Archive files older than this many days" OPTIONAL
DESCRIPTION "Backup and archive expired files to server storage"
IF NOT folder THEN
folder = "default.gbdrive"
END IF
IF NOT days_old THEN
days_old = 3
END IF
list = DIR folder
archived = 0
FOR EACH item IN list
TALK "Checking: " + item.name
oldDays = DATEDIFF date, item.modified, "day"
oldDays = DATEDIFF today, item.modified, "day"
IF oldDays > 3 THEN
TALK "The file ${item.name} will be archived as it is expired."
IF oldDays > days_old THEN
blob = UPLOAD item
TALK "Upload to server completed."
SAVE "log.xlsx", "archived", today, now, item.path, item.name, item.size, item.modified, blob.md5
WITH logEntry
action = "archived"
date = today
time = now
path = item.path
name = item.name
size = item.size
modified = item.modified
md5 = blob.md5
END WITH
SAVE "log.xlsx", logEntry
DELETE item
TALK "File removed from storage."
ELSE
TALK "The file ${item.name} does not need to be archived."
archived = archived + 1
END IF
NEXT
TALK "Backup complete. " + archived + " files archived."
RETURN archived

@@ -0,0 +1,34 @@
ADD TOOL "backup-to-server"
ADD TOOL "restore-file"
ADD TOOL "list-archived"
ADD TOOL "cleanup-old"
CLEAR SUGGESTIONS
ADD SUGGESTION "backup" AS "Run backup now"
ADD SUGGESTION "list" AS "View archived files"
ADD SUGGESTION "restore" AS "Restore a file"
ADD SUGGESTION "status" AS "Backup status"
SET CONTEXT "backup" AS "You are a backup management assistant. Help users archive files to server storage, restore archived files, and manage backup schedules."
BEGIN TALK
**Backup Manager**
I can help you with:
Archive files to server storage
Restore archived files
View backup history
Manage backup schedules
Select an option or tell me what you need.
END TALK
BEGIN SYSTEM PROMPT
You are a backup management assistant.
Archive files older than specified days to server storage.
Track all backup operations in log.xlsx.
Support restore operations from archived files.
Maintain MD5 checksums for integrity verification.
END SYSTEM PROMPT

@@ -1 +0,0 @@
This is to be backed up.

@@ -1,685 +1,53 @@
' General Bots Conversational Banking
' Enterprise-grade banking through natural conversation
' Uses TOOLS (not SUBs) and HEAR AS validation
ADD TOOL "check-balance"
ADD TOOL "transfer-money"
ADD TOOL "pay-bill"
ADD TOOL "card-services"
ADD TOOL "loan-inquiry"
ADD TOOL "investment-info"
ADD TOOL "transaction-history"
ADD TOOL "open-account"
' ============================================================================
' CONFIGURATION
' ============================================================================
SET CONTEXT "You are a professional banking assistant for General Bank.
Help customers with accounts, transfers, payments, cards, loans, and investments.
Always verify identity before sensitive operations. Be helpful and secure.
Use the available tools to perform banking operations.
Never ask for full card numbers or passwords in chat."
USE KB "banking-faq"
' Add specialized bots for complex operations
ADD BOT "fraud-detector" WITH TRIGGER "suspicious, fraud, unauthorized, stolen, hack"
ADD BOT "investment-advisor" WITH TRIGGER "invest, stocks, funds, portfolio, returns, CDB, LCI"
ADD BOT "loan-specialist" WITH TRIGGER "loan, financing, credit, mortgage, empréstimo"
ADD BOT "card-services" WITH TRIGGER "card, limit, block, virtual card, cartão"
' ============================================================================
' BANKING TOOLS - Dynamic tools added to conversation
' ============================================================================
' Account Tools
USE TOOL "check_balance"
USE TOOL "get_statement"
USE TOOL "get_transactions"
' Transfer Tools
USE TOOL "pix_transfer"
USE TOOL "ted_transfer"
USE TOOL "schedule_transfer"
' Payment Tools
USE TOOL "pay_boleto"
USE TOOL "pay_utility"
USE TOOL "list_scheduled_payments"
' Card Tools
USE TOOL "list_cards"
USE TOOL "block_card"
USE TOOL "unblock_card"
USE TOOL "create_virtual_card"
USE TOOL "request_limit_increase"
' Loan Tools
USE TOOL "simulate_loan"
USE TOOL "apply_loan"
USE TOOL "list_loans"
' Investment Tools
USE TOOL "get_portfolio"
USE TOOL "list_investments"
USE TOOL "buy_investment"
USE TOOL "redeem_investment"
' ============================================================================
' AUTHENTICATION FLOW
' ============================================================================
authenticated = GET user_authenticated
IF NOT authenticated THEN
TALK "Welcome to General Bank! 🏦"
TALK "For your security, I need to verify your identity."
TALK ""
TALK "Please enter your CPF:"
HEAR cpf AS CPF
' Look up customer
customer = FIND "customers.csv" WHERE cpf = cpf
IF LEN(customer) = 0 THEN
TALK "I couldn't find an account with this CPF."
TALK "Please check the number or visit a branch to open an account."
ELSE
' Send verification code
phone_masked = MID(FIRST(customer).phone, 1, 4) + "****" + RIGHT(FIRST(customer).phone, 2)
TALK "I'll send a verification code to your phone ending in " + phone_masked
' Generate and store code
code = STR(INT(RND() * 900000) + 100000)
SET BOT MEMORY "verification_code", code
SET BOT MEMORY "verification_cpf", cpf
' In production: SEND SMS FIRST(customer).phone, "Your General Bank code is: " + code
TALK "Please enter the 6-digit code:"
HEAR entered_code AS INTEGER
stored_code = GET BOT MEMORY "verification_code"
IF STR(entered_code) = stored_code THEN
SET user_authenticated, TRUE
SET user_id, FIRST(customer).id
SET user_name, FIRST(customer).name
SET user_cpf, cpf
TALK "✅ Welcome, " + FIRST(customer).name + "!"
ELSE
TALK "❌ Invalid code. Please try again."
END IF
END IF
END IF
' ============================================================================
' MAIN CONVERSATION - LLM handles intent naturally
' ============================================================================
IF GET user_authenticated THEN
user_name = GET user_name
TALK ""
TALK "How can I help you today, " + user_name + "?"
TALK ""
TALK "You can ask me things like:"
TALK "• What's my balance?"
TALK "• Send R$ 100 via PIX to 11999998888"
TALK "• Pay this boleto: 23793.38128..."
TALK "• Block my credit card"
TALK "• Simulate a loan of R$ 10,000"
ADD SUGGESTION "Check balance"
ADD SUGGESTION "Make a transfer"
ADD SUGGESTION "Pay a bill"
ADD SUGGESTION "My cards"
END IF
' ============================================================================
' TOOL: check_balance
' Returns account balances for the authenticated user
' ============================================================================
' @tool check_balance
' @description Get account balances for the current user
' @param account_type string optional Filter by account type (checking, savings, all)
' @returns Account balances with available amounts
' ============================================================================
' TOOL: pix_transfer
' Performs a PIX transfer
' ============================================================================
' @tool pix_transfer
' @description Send money via PIX instant transfer
' @param pix_key string required The recipient's PIX key (CPF, phone, email, or random key)
' @param amount number required Amount to transfer in BRL
' @param description string optional Transfer description
' @returns Transfer confirmation with transaction ID
ON TOOL "pix_transfer"
pix_key = GET TOOL PARAM "pix_key"
amount = GET TOOL PARAM "amount"
description = GET TOOL PARAM "description"
' Validate PIX key format
TALK "🔍 Validating PIX key..."
' Get recipient info (simulated API call)
recipient_name = LLM "Given PIX key " + pix_key + ", return a realistic Brazilian name. Just the name, nothing else."
recipient_bank = "Banco Example"
TALK ""
TALK "📤 **Transfer Details**"
TALK "To: **" + recipient_name + "**"
TALK "Bank: " + recipient_bank
TALK "Amount: **R$ " + FORMAT(amount, "#,##0.00") + "**"
TALK ""
TALK "Confirm this PIX transfer?"
ADD SUGGESTION "Yes, confirm"
ADD SUGGESTION "No, cancel"
HEAR confirmation AS BOOLEAN
IF confirmation THEN
TALK "🔐 Enter your 4-digit PIN:"
HEAR pin AS INTEGER
' Validate PIN (in production, verify against stored hash)
IF LEN(STR(pin)) = 4 THEN
' Execute transfer
transaction_id = "PIX" + FORMAT(NOW(), "yyyyMMddHHmmss") + STR(INT(RND() * 1000))
' Get current balance
user_id = GET user_id
account = FIRST(FIND "accounts.csv" WHERE user_id = user_id)
new_balance = account.balance - amount
' Save transaction
TABLE transaction
ROW transaction_id, account.account_number, "pix_out", -amount, new_balance, NOW(), pix_key, recipient_name, "completed"
END TABLE
SAVE "transactions.csv", transaction
' Update balance
UPDATE "accounts.csv" SET balance = new_balance WHERE id = account.id
TALK ""
TALK "✅ **PIX Transfer Completed!**"
TALK ""
TALK "Transaction ID: " + transaction_id
TALK "Amount: R$ " + FORMAT(amount, "#,##0.00")
TALK "New Balance: R$ " + FORMAT(new_balance, "#,##0.00")
TALK "Date: " + FORMAT(NOW(), "dd/MM/yyyy HH:mm")
RETURN transaction_id
ELSE
TALK "❌ Invalid PIN format."
RETURN "CANCELLED"
END IF
ELSE
TALK "Transfer cancelled."
RETURN "CANCELLED"
END IF
END ON
' ============================================================================
' TOOL: pay_boleto
' Pays a Brazilian bank slip (boleto)
' ============================================================================
' @tool pay_boleto
' @description Pay a boleto (bank slip) using the barcode
' @param barcode string required The boleto barcode (47 or 48 digits)
' @returns Payment confirmation
ON TOOL "pay_boleto"
barcode = GET TOOL PARAM "barcode"
' Clean barcode
barcode = REPLACE(REPLACE(REPLACE(barcode, ".", ""), " ", ""), "-", "")
IF LEN(barcode) <> 47 AND LEN(barcode) <> 48 THEN
TALK "❌ Invalid barcode. Please enter all 47 or 48 digits."
RETURN "INVALID_BARCODE"
END IF
' Parse boleto (simplified - in production use banking API)
beneficiary = "Company " + LEFT(barcode, 3)
amount = VAL(MID(barcode, 38, 10)) / 100
due_date = DATEADD(NOW(), INT(RND() * 30), "day")
TALK ""
TALK "📄 **Bill Details**"
TALK "Beneficiary: **" + beneficiary + "**"
TALK "Amount: **R$ " + FORMAT(amount, "#,##0.00") + "**"
TALK "Due Date: " + FORMAT(due_date, "dd/MM/yyyy")
TALK ""
TALK "Pay this bill now?"
ADD SUGGESTION "Yes, pay now"
ADD SUGGESTION "Schedule for due date"
ADD SUGGESTION "Cancel"
HEAR choice AS "Pay now", "Schedule", "Cancel"
IF choice = "Pay now" THEN
TALK "🔐 Enter your PIN:"
HEAR pin AS INTEGER
IF LEN(STR(pin)) = 4 THEN
transaction_id = "BOL" + FORMAT(NOW(), "yyyyMMddHHmmss")
auth_code = FORMAT(INT(RND() * 100000000), "00000000")
TALK ""
TALK "✅ **Payment Completed!**"
TALK ""
TALK "Transaction ID: " + transaction_id
TALK "Authentication: " + auth_code
TALK "Amount: R$ " + FORMAT(amount, "#,##0.00")
RETURN transaction_id
ELSE
TALK "❌ Invalid PIN."
RETURN "INVALID_PIN"
END IF
ELSEIF choice = "Schedule" THEN
TABLE scheduled
ROW NOW(), GET user_id, "boleto", barcode, amount, due_date, "pending"
END TABLE
SAVE "scheduled_payments.csv", scheduled
TALK "✅ Payment scheduled for " + FORMAT(due_date, "dd/MM/yyyy")
RETURN "SCHEDULED"
ELSE
TALK "Payment cancelled."
RETURN "CANCELLED"
END IF
END ON
' ============================================================================
' TOOL: block_card
' Blocks a card for security
' ============================================================================
' @tool block_card
' @description Block a credit or debit card
' @param card_type string optional Type of card to block (credit, debit, all)
' @param reason string optional Reason for blocking (lost, stolen, suspicious, temporary)
' @returns Block confirmation
ON TOOL "block_card"
card_type = GET TOOL PARAM "card_type"
reason = GET TOOL PARAM "reason"
user_id = GET user_id
cards = FIND "cards.csv" WHERE user_id = user_id AND status = "active"
IF LEN(cards) = 0 THEN
TALK "You don't have any active cards to block."
RETURN "NO_CARDS"
END IF
IF card_type = "" OR card_type = "all" THEN
TALK "Which card do you want to block?"
FOR i = 1 TO LEN(cards)
card = cards[i]
masked = "**** " + RIGHT(card.card_number, 4)
TALK STR(i) + ". " + UPPER(card.card_type) + " - " + masked
ADD SUGGESTION card.card_type + " " + RIGHT(card.card_number, 4)
NEXT
HEAR selection AS INTEGER
IF selection < 1 OR selection > LEN(cards) THEN
TALK "Invalid selection."
RETURN "INVALID_SELECTION"
END IF
selected_card = cards[selection]
ELSE
selected_card = FIRST(FILTER cards WHERE card_type = card_type)
END IF
IF reason = "" THEN
TALK "Why are you blocking this card?"
ADD SUGGESTION "Lost"
ADD SUGGESTION "Stolen"
ADD SUGGESTION "Suspicious activity"
ADD SUGGESTION "Temporary block"
HEAR reason AS "Lost", "Stolen", "Suspicious activity", "Temporary block"
END IF
' Block the card
UPDATE "cards.csv" SET status = "blocked", blocked_reason = reason, blocked_at = NOW() WHERE id = selected_card.id
masked = "**** " + RIGHT(selected_card.card_number, 4)
TALK ""
TALK "🔒 **Card Blocked**"
TALK ""
TALK "Card: " + UPPER(selected_card.card_type) + " " + masked
TALK "Reason: " + reason
TALK "Blocked at: " + FORMAT(NOW(), "dd/MM/yyyy HH:mm")
IF reason = "Stolen" OR reason = "Lost" THEN
TALK ""
TALK "⚠️ For your security, we recommend requesting a replacement card."
TALK "Would you like me to request a new card?"
ADD SUGGESTION "Yes, request new card"
ADD SUGGESTION "No, not now"
HEAR request_new AS BOOLEAN
IF request_new THEN
TALK "✅ New card requested! It will arrive in 5-7 business days."
END IF
END IF
RETURN "BLOCKED"
END ON
' ============================================================================
' TOOL: simulate_loan
' Simulates loan options
' ============================================================================
' @tool simulate_loan
' @description Simulate a personal loan with different terms
' @param amount number required Loan amount in BRL
' @param months integer optional Number of months (12, 24, 36, 48, 60)
' @param loan_type string optional Type of loan (personal, payroll, home_equity)
' @returns Loan simulation with monthly payments
ON TOOL "simulate_loan"
amount = GET TOOL PARAM "amount"
months = GET TOOL PARAM "months"
loan_type = GET TOOL PARAM "loan_type"
IF amount < 500 THEN
TALK "Minimum loan amount is R$ 500.00"
RETURN "AMOUNT_TOO_LOW"
END IF
IF amount > 100000 THEN
TALK "For amounts above R$ 100,000, please visit a branch."
RETURN "AMOUNT_TOO_HIGH"
END IF
IF months = 0 THEN
TALK "In how many months would you like to pay?"
ADD SUGGESTION "12 months"
ADD SUGGESTION "24 months"
ADD SUGGESTION "36 months"
ADD SUGGESTION "48 months"
ADD SUGGESTION "60 months"
HEAR months_input AS INTEGER
months = months_input
END IF
IF loan_type = "" THEN
loan_type = "personal"
END IF
' Calculate rates based on type
IF loan_type = "payroll" THEN
monthly_rate = 0.0149
rate_label = "1.49%"
ELSEIF loan_type = "home_equity" THEN
monthly_rate = 0.0099
rate_label = "0.99%"
ELSE
monthly_rate = 0.0199
rate_label = "1.99%"
END IF
' PMT calculation
pmt = amount * (monthly_rate * POWER(1 + monthly_rate, months)) / (POWER(1 + monthly_rate, months) - 1)
total = pmt * months
interest_total = total - amount
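' Worked example of the PMT formula above (illustrative, approximate figures):
'   amount = 10000, loan_type = "personal" -> monthly_rate = 0.0199
'   months = 24, so POWER(1.0199, 24) is roughly 1.6046
'   pmt = 10000 * (0.0199 * 1.6046) / 0.6046, about R$ 528 per month
'   total is then about R$ 12,675 and interest_total about R$ 2,675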
TALK ""
TALK "💰 **Loan Simulation**"
TALK ""
TALK "📊 **" + UPPER(loan_type) + " LOAN**"
TALK ""
TALK "Amount: R$ " + FORMAT(amount, "#,##0.00")
TALK "Term: " + STR(months) + " months"
TALK "Interest Rate: " + rate_label + " per month"
TALK ""
TALK "📅 **Monthly Payment: R$ " + FORMAT(pmt, "#,##0.00") + "**"
TALK ""
TALK "Total to pay: R$ " + FORMAT(total, "#,##0.00")
TALK "Total interest: R$ " + FORMAT(interest_total, "#,##0.00")
TALK ""
TALK "Would you like to apply for this loan?"
ADD SUGGESTION "Yes, apply now"
ADD SUGGESTION "Try different values"
ADD SUGGESTION "Not now"
HEAR decision AS "Apply", "Try again", "No"
IF decision = "Apply" THEN
TALK "Great! Let me collect some additional information."
TALK "What is your monthly income?"
HEAR income AS MONEY
TALK "What is your profession?"
HEAR profession AS NAME
' Check debt-to-income ratio
IF pmt > income * 0.35 THEN
TALK "⚠️ The monthly payment exceeds 35% of your income."
TALK "We recommend a smaller amount or longer term."
RETURN "HIGH_DTI"
END IF
application_id = "LOAN" + FORMAT(NOW(), "yyyyMMddHHmmss")
TABLE loan_application
ROW application_id, GET user_id, loan_type, amount, months, monthly_rate, income, profession, NOW(), "pending"
END TABLE
SAVE "loan_applications.csv", loan_application
TALK ""
TALK "🎉 **Application Submitted!**"
TALK ""
TALK "Application ID: " + application_id
TALK "Status: Under Analysis"
TALK ""
TALK "We'll analyze your application within 24 hours."
TALK "You'll receive updates via app notifications."
RETURN application_id
ELSEIF decision = "Try again" THEN
TALK "No problem! What values would you like to try?"
RETURN "RETRY"
ELSE
TALK "No problem! I'm here whenever you need."
RETURN "DECLINED"
END IF
END ON
' ============================================================================
' TOOL: create_virtual_card
' Creates a virtual card for online purchases
' ============================================================================
' @tool create_virtual_card
' @description Create a virtual credit card for online shopping
' @param limit number optional Maximum limit for the virtual card
' @returns Virtual card details
ON TOOL "create_virtual_card"
limit = GET TOOL PARAM "limit"
user_id = GET user_id
credit_cards = FIND "cards.csv" WHERE user_id = user_id AND card_type = "credit" AND status = "active"
IF LEN(credit_cards) = 0 THEN
TALK "You need an active credit card to create virtual cards."
RETURN "NO_CREDIT_CARD"
END IF
main_card = FIRST(credit_cards)
IF limit = 0 THEN
TALK "What limit would you like for this virtual card?"
TALK "Available credit: R$ " + FORMAT(main_card.available_limit, "#,##0.00")
ADD SUGGESTION "R$ 100"
ADD SUGGESTION "R$ 500"
ADD SUGGESTION "R$ 1000"
ADD SUGGESTION "Custom amount"
HEAR limit AS MONEY
END IF
IF limit > main_card.available_limit THEN
TALK "❌ Limit exceeds available credit."
TALK "Maximum available: R$ " + FORMAT(main_card.available_limit, "#,##0.00")
RETURN "LIMIT_EXCEEDED"
END IF
' Generate virtual card
virtual_number = "4" + FORMAT(INT(RND() * 1000000000000000), "000000000000000")
virtual_cvv = FORMAT(INT(RND() * 1000), "000")
virtual_expiry = FORMAT(DATEADD(NOW(), 1, "year"), "MM/yy")
virtual_id = "VC" + FORMAT(NOW(), "yyyyMMddHHmmss")
TABLE virtual_card
ROW virtual_id, user_id, main_card.id, "virtual", virtual_number, virtual_cvv, virtual_expiry, limit, limit, "active", NOW()
END TABLE
SAVE "cards.csv", virtual_card
' Format card number for display
formatted_number = LEFT(virtual_number, 4) + " " + MID(virtual_number, 5, 4) + " " + MID(virtual_number, 9, 4) + " " + RIGHT(virtual_number, 4)
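' Example: virtual_number "4123456789012345" is displayed as "4123 4567 8901 2345"
' (LEFT takes the first 4 digits, the two MID calls take the middle groups, RIGHT the last 4)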
TALK ""
TALK "✅ **Virtual Card Created!**"
TALK ""
TALK "🔢 Number: " + formatted_number
TALK "📅 Expiry: " + virtual_expiry
TALK "🔐 CVV: " + virtual_cvv
TALK "💰 Limit: R$ " + FORMAT(limit, "#,##0.00")
TALK ""
TALK "⚠️ **Save these details now!**"
TALK "The CVV will not be shown again for security."
TALK ""
TALK "This virtual card is linked to your main credit card."
TALK "You can delete it anytime."
RETURN virtual_id
END ON
' ============================================================================
' TOOL: get_statement
' Gets account statement
' ============================================================================
' @tool get_statement
' @description Get account statement for a period
' @param period string optional Period: "30days", "90days", "month", or custom dates
' @param format string optional Output format: "chat", "pdf", "email"
' @returns Statement data or download link
ON TOOL "get_statement"
period = GET TOOL PARAM "period"
format = GET TOOL PARAM "format"
user_id = GET user_id
account = FIRST(FIND "accounts.csv" WHERE user_id = user_id)
IF period = "" THEN
TALK "Select the period for your statement:"
ADD SUGGESTION "Last 30 days"
ADD SUGGESTION "Last 90 days"
ADD SUGGESTION "This month"
ADD SUGGESTION "Custom dates"
HEAR period_choice AS "30 days", "90 days", "This month", "Custom"
IF period_choice = "Custom" THEN
TALK "Enter start date:"
HEAR start_date AS DATE
TALK "Enter end date:"
HEAR end_date AS DATE
ELSEIF period_choice = "30 days" THEN
start_date = DATEADD(NOW(), -30, "day")
end_date = NOW()
ELSEIF period_choice = "90 days" THEN
start_date = DATEADD(NOW(), -90, "day")
end_date = NOW()
ELSE
start_date = DATEADD(NOW(), -DAY(NOW()) + 1, "day")
end_date = NOW()
END IF
END IF
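' Note on the "This month" branch above: DATEADD(NOW(), -DAY(NOW()) + 1, "day")
' walks back to the 1st of the current month. For example, on the 15th the
' offset is -15 + 1 = -14 days, which lands on day 1.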
' Get transactions
transactions = FIND "transactions.csv" WHERE account_number = account.account_number AND date >= start_date AND date <= end_date ORDER BY date DESC
IF LEN(transactions) = 0 THEN
TALK "No transactions found for this period."
RETURN "NO_TRANSACTIONS"
END IF
TALK ""
TALK "📋 **Account Statement**"
TALK "Period: " + FORMAT(start_date, "dd/MM/yyyy") + " to " + FORMAT(end_date, "dd/MM/yyyy")
TALK "Account: " + account.account_number
TALK ""
total_in = 0
total_out = 0
FOR EACH tx IN transactions
IF tx.amount > 0 THEN
icon = "💵 +"
total_in = total_in + tx.amount
ELSE
icon = "💸 "
total_out = total_out + ABS(tx.amount)
END IF
TALK icon + "R$ " + FORMAT(ABS(tx.amount), "#,##0.00") + " | " + FORMAT(tx.date, "dd/MM")
TALK " " + tx.description
NEXT
TALK ""
TALK "📊 **Summary**"
TALK "Total In: R$ " + FORMAT(total_in, "#,##0.00")
TALK "Total Out: R$ " + FORMAT(total_out, "#,##0.00")
TALK "Net: R$ " + FORMAT(total_in - total_out, "#,##0.00")
IF format = "pdf" OR format = "email" THEN
TALK ""
TALK "Would you like me to send this statement to your email?"
ADD SUGGESTION "Yes, send email"
ADD SUGGESTION "No, thanks"
HEAR send_email AS BOOLEAN
IF send_email THEN
customer = FIRST(FIND "customers.csv" WHERE id = user_id)
SEND MAIL customer.email, "Your General Bank Statement", "Please find attached your account statement.", "statement.pdf"
TALK "📧 Statement sent to your email!"
END IF
END IF
RETURN "SUCCESS"
END ON
' ============================================================================
' FALLBACK - Let LLM handle anything not covered by tools
' ============================================================================
' The LLM will use the available tools based on user intent
' No need for rigid menu systems - natural conversation flow
ADD BOT "card-services" WITH TRIGGER "card, credit card, debit card, block card, limit"
USE KB "banking-faq"
CLEAR SUGGESTIONS
ADD SUGGESTION "balance" AS "Check my balance"
ADD SUGGESTION "transfer" AS "Make a transfer"
ADD SUGGESTION "pix" AS "Send PIX"
ADD SUGGESTION "bills" AS "Pay a bill"
ADD SUGGESTION "card" AS "Card services"
ADD SUGGESTION "history" AS "Transaction history"
ADD SUGGESTION "invest" AS "Investment options"
ADD SUGGESTION "loan" AS "Loan information"
SET CONTEXT "You are a professional banking assistant for General Bank. Help customers with accounts, transfers, payments, cards, loans, and investments. Always verify identity before sensitive operations. Be helpful and secure. Never ask for full card numbers or passwords in chat."
BEGIN TALK
**General Bank** - Digital Banking Assistant
Welcome! I can help you with:
- Account balance and statements
- Transfers and PIX
- Bill payments
- Card services
- Investments
- Loans and financing
Select an option below or tell me what you need.
END TALK
BEGIN SYSTEM PROMPT
You are a secure banking assistant.
Security rules:
- Never display full account numbers
- Mask card numbers showing only last 4 digits
- Require confirmation for transactions over $1000
- Log all sensitive operations
- Escalate fraud concerns immediately
END SYSTEM PROMPT

View file

@ -1,16 +1,13 @@
REM SET SCHEDULE "1 * * * * *"
SET SCHEDULE "1 * * * * *"
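' Assuming the common six-field, seconds-first cron layout (sec min hour day month weekday),
' "1 * * * * *" fires at second 1 of every minute.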
billing = FIND "Orders"
REM Monthly consumption of bars.
' Monthly consumption
data = SELECT SUM(UnitPrice * Quantity) AS Value, MONTH(OrderDate)+'/'+YEAR(OrderDate) FROM billing GROUP BY MONTH(OrderDate), YEAR(OrderDate)
img = CHART "timseries", data
img = CHART "timeseries", data
SEND FILE img, "Monthly Consumption"
REM Product Category
' Product Category
data = SELECT SUM(UnitPrice * Quantity) AS Value, CategoryName FROM billing JOIN Products ON billing.ProductID = Products.ProductID JOIN Categories ON Products.CategoryID = Categories.CategoryID GROUP BY CategoryName
img = CHART "donut", data
SEND FILE img, "Product Category"

View file

@ -1,11 +1,13 @@
REM Monthly consumption of bars (Individual sending to each customer)
' Individual customer report generation
customers = FIND "Customers"
FOR EACH c IN customers
data = SELECT SUM(UnitPrice * Quantity) as Value, MONTH(OrderDate)+'/'+YEAR(OrderDate) from billing
JOIN Customers ON billing.CustomerID = Customers.CustomerID
GROUP BY MONTH(OrderDate), YEAR(OrderDate)
WHERE Customers.CustomerID = c.CustomerID
img = CHART "timseries", data
data = SELECT SUM(UnitPrice * Quantity) AS Value, MONTH(OrderDate)+'/'+YEAR(OrderDate) FROM billing
JOIN Customers ON billing.CustomerID = Customers.CustomerID
GROUP BY MONTH(OrderDate), YEAR(OrderDate)
WHERE Customers.CustomerID = c.CustomerID
img = CHART "timeseries", data
SEND FILE img, "Monthly Consumption"
END FOR

View file

@ -1,32 +1,33 @@
PARAM sku AS STRING LIKE "ABC123" DESCRIPTION "Product SKU code to update stock"
PARAM qtd AS INTEGER LIKE 10 DESCRIPTION "Quantity to add to stock"
DESCRIPTION "Add stock quantity for a product by SKU"
person = FIND "People.xlsx", "id=" + mobile
vendor = FIND "maria.Vendedores", "id=" + person.erpId
TALK "Olá " + vendor.Contato_Nome + "!"
REM Stock lookup by name when not present in the spreadsheet
TALK "Qual o SKU do Produto?"
HEAR sku
produto = FIND "maria.Produtos", "sku=" + sku
TALK "Qual a quantidade que se deseja acrescentar?"
HEAR qtd
IF NOT produto THEN
TALK "Produto não encontrado."
RETURN
END IF
estoque = {
produto: {
id: produto.Id
},
deposito: {
id: person.deposito_Id
},
preco: produto.Preco,
operacao: "B",
quantidade: qtd,
observacoes: "Acréscimo de estoque."
}
WITH estoque
produto = { id: produto.Id }
deposito = { id: person.deposito_Id }
preco = produto.Preco
operacao = "B"
quantidade = qtd
observacoes = "Acréscimo de estoque."
END WITH
rec = POST host + "/estoques", estoque
TALK "Estoque atualizado, obrigado."
TALK TO admin1, "Estoque do ${sku} foi atualizado com ${qtd}."
TALK TO admin2, "Estoque do ${sku} foi atualizado com ${qtd}."
TALK "Estoque atualizado."
TALK TO admin1, "Estoque do ${sku} atualizado com ${qtd}."
TALK TO admin2, "Estoque do ${sku} atualizado com ${qtd}."
RETURN rec

View file

@ -1,38 +1,53 @@
ALLOW ROLE "analiseDados"
BEGIN TALK
Exemplos de perguntas para o *BlingBot*:
ADD TOOL "sync-erp"
ADD TOOL "sync-inventory"
ADD TOOL "refresh-llm"
1. Quais são os produtos que têm estoque excessivo em uma loja e podem ser transferidos para outra loja com menor estoque?
CLEAR SUGGESTIONS
2. Quais são os 10 produtos mais vendidos na loja {nome_loja} no período {periodo}?
3. Qual é o ticket médio da loja {nome_loja}?
4. Qual a quantidade disponível do produto {nome_produto} na loja {nome_loja}?
5. Quais produtos precisam ser transferidos da loja {origem} para a loja {destino}?
6. Quais produtos estão com estoque crítico na loja {nome_loja}?
7. Qual a sugestão de compra para o fornecedor {nome_fornecedor}?
8. Quantos pedidos são realizados por dia na loja {nome_loja}?
9. Quantos produtos ativos existem no sistema?
10. Qual o estoque disponível na loja {nome_loja}?
END TALK
REM SET SCHEDULE
ADD SUGGESTION "estoque" AS "Produtos com estoque excessivo"
ADD SUGGESTION "vendas" AS "Top 10 produtos vendidos"
ADD SUGGESTION "ticket" AS "Ticket médio por loja"
ADD SUGGESTION "critico" AS "Estoque crítico"
ADD SUGGESTION "transferir" AS "Sugestão de transferência"
ADD SUGGESTION "compra" AS "Sugestão de compra"
SET CONTEXT "As lojas B, L e R estão identificadas no final dos nomes das colunas da tabela de Análise de Compras. Dicionário de dados AnaliseCompras.qtEstoqueL: Descrição quantidade do Leblon. AnaliseCompras.qtEstoqueB: Descrição quantidade da Barra AnaliseCompras.qtEstoqueR: Descrição quantidade do Rio Sul. Com base no comportamento de compra registrado, analise os dados fornecidos para identificar oportunidades de otimização de estoque. Aplique regras básicas de transferência de produtos entre as lojas, considerando a necessidade de balanceamento de inventário. Retorne um relatório das 10 ações mais críticas, detalhe a movimentação sugerida para cada produto. Deve indicar a loja de origem, a loja de destino e o motivo da transferência. A análise deve ser objetiva e pragmática, focando na melhoria da disponibilidade de produtos nas lojas. Sempre use LIKE %% para comparar nomes. IMPORTANTE: Compare sempre com a função LOWER ao filtrar valores, em ambos os operandos de texto em SQL, para ignorar case, exemplo WHERE LOWER(loja.nome) LIKE LOWER(%Leblon%)."
SET ANSWER MODE "sql"
TALK "Pergunte-me qualquer coisa sobre os seus dados."
BEGIN TALK
**BlingBot - Análise de Dados**
REM IF mobile = "5521992223002" THEN
REM ELSE
REM TALK "Não autorizado."
REM END IF
Exemplos de perguntas:
- Produtos com estoque excessivo para transferência
- Top 10 produtos vendidos em {loja} no {período}
- Ticket médio da loja {nome}
- Estoque disponível do produto {nome} na loja {loja}
- Produtos para transferir de {origem} para {destino}
- Estoque crítico na loja {nome}
- Sugestão de compra para fornecedor {nome}
- Pedidos por dia na loja {nome}
- Total de produtos ativos no sistema
END TALK
BEGIN SYSTEM PROMPT
You are a data analyst for retail inventory management using Bling ERP.
Data available:
- AnaliseCompras table with stock by store (B=Barra, L=Leblon, R=Rio Sul)
- Products, Orders, Suppliers, Inventory tables
Analysis capabilities:
- Stock optimization and transfer suggestions
- Sales performance by store and period
- Average ticket calculation
- Critical stock alerts
- Purchase recommendations
Always use LOWER() for text comparisons in SQL.
Use LIKE with %% for partial matches.
Return actionable insights with specific quantities and locations.
END SYSTEM PROMPT

View file

@ -1,2 +1,7 @@
SET SCHEDULE "0 0 21 * * *"
DESCRIPTION "Refresh data analysis context for LLM"
REFRESH "data-analysis"
TALK "Data analysis context refreshed."

View file

@ -1,16 +1,36 @@
TALK O BlingBot deseja boas-vindas!
TALK Qual o seu pedido?
ADD TOOL "add-stock"
ADD TOOL "sync-erp"
ADD TOOL "sync-inventory"
ADD TOOL "sync-accounts"
ADD TOOL "sync-suppliers"
ADD TOOL "data-analysis"
ADD TOOL "refresh-llm"
CLEAR SUGGESTIONS
ADD SUGGESTION "estoque" AS "Consultar estoque"
ADD SUGGESTION "pedido" AS "Fazer pedido"
ADD SUGGESTION "sync" AS "Sincronizar ERP"
ADD SUGGESTION "analise" AS "Análise de dados"
BEGIN TALK
**BlingBot** - Assistente ERP
Olá! Posso ajudar com:
📦 Consulta de estoque
🛒 Pedidos e vendas
🔄 Sincronização com Bling
📊 Análise de dados
Qual o seu pedido?
END TALK
BEGIN SYSTEM PROMPT
Você deve atuar como um chatbot funcionário da loja integrada ao Bling ERP, respeitando as seguintes regras:
Você é um assistente de loja integrado ao Bling ERP.
Sempre que o atendente fizer um pedido, ofereça as condições de cor e tamanho presentes no JSON de produtos.
A cada pedido realizado, retorne JSON similar ao JSONPedidosExemplo adicionados e o nome do cliente.
Mantenha itensPedido com apenas um item.
É importante usar o mesmo id do JSON de produtos fornecido, para haver a correlação dos objetos.
ItensAcompanhamento deve conter a coleção de itens de acompanhamento do pedido, que é solicitado quando o pedido é feito, por exemplo: Quadro, com Caixa de Giz.
Ao receber pedido, ofereça opções de cor e tamanho do JSON de produtos.
Retorne JSON do pedido com itens e nome do cliente.
Mantenha itensPedido com apenas um item por vez.
Use o mesmo id do JSON de produtos para correlação.
ItensAcompanhamento contém itens adicionais do pedido (ex: Quadro com Caixa de Giz).
END SYSTEM PROMPT

View file

@ -1,19 +1,11 @@
REM Runs every two days at midnight.
SET SCHEDULE "0 0 0 */2 * *"
REM Variables from config.csv: admin1, admin2, host, limit, pages
REM Using admin1 for notifications
admin = admin1
REM Pagination settings for Bling API
pageVariable = "pagina"
limitVariable = "limite"
syncLimit = 100
REM ============================================
REM Sync Contas a Receber (Accounts Receivable)
REM ============================================
SEND EMAIL admin, "Sincronizando Contas a Receber..."
' Contas a Receber
SEND EMAIL admin, "Syncing Accounts Receivable..."
page = 1
totalReceber = 0
@ -46,12 +38,10 @@ DO WHILE page > 0 AND page <= pages
items = null
LOOP
SEND EMAIL admin, "Contas a Receber sincronizadas: " + totalReceber + " registros."
SEND EMAIL admin, "Accounts Receivable: " + totalReceber + " records."
REM ============================================
REM Sync Contas a Pagar (Accounts Payable)
REM ============================================
SEND EMAIL admin, "Sincronizando Contas a Pagar..."
' Contas a Pagar
SEND EMAIL admin, "Syncing Accounts Payable..."
page = 1
totalPagar = 0
@ -84,9 +74,5 @@ DO WHILE page > 0 AND page <= pages
items = null
LOOP
SEND EMAIL admin, "Contas a Pagar sincronizadas: " + totalPagar + " registros."
REM ============================================
REM Summary
REM ============================================
SEND EMAIL admin, "Transferência do ERP (Contas) para BlingBot concluído. Total: " + (totalReceber + totalPagar) + " registros."
SEND EMAIL admin, "Accounts Payable: " + totalPagar + " records."
SEND EMAIL admin, "Accounts sync completed. Total: " + (totalReceber + totalPagar) + " records."

View file

@ -1,5 +1,4 @@
REM Geral
REM SET SCHEDULE "0 30 22 * * *"
SET SCHEDULE "0 30 22 * * *"
daysToSync = -7
ontem = DATEADD today, "days", daysToSync
@ -7,22 +6,19 @@ ontem = FORMAT ontem, "yyyy-MM-dd"
tomorrow = DATEADD today, "days", 1
tomorrow = FORMAT tomorrow, "yyyy-MM-dd"
dateFilter = "&dataAlteracaoInicial=${ontem}&dataAlteracaoFinal=${tomorrow}"
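' Example: with daysToSync = -7, dateFilter expands to something like
' "&dataAlteracaoInicial=2024-01-01&dataAlteracaoFinal=2024-01-09" (dates are illustrative)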
admin = admin1
SEND EMAIL admin, "Sincronismo: ${ontem} e ${tomorrow} (${daysToSync * -1} dia(s)) iniciado..."
SEND EMAIL admin, "Sync: ${ontem} to ${tomorrow} started..."
REM Produtos
' Produtos
i = 1
SEND EMAIL admin, "Sincronizando Produtos..."
SEND EMAIL admin, "Syncing Products..."
DO WHILE i > 0 AND i < pages
REM ${dateFilter}
res = GET host + "/produtos?pagina=${i}&criterio=5&tipo=P&limite=${limit}${dateFilter}"
WAIT 0.33
list = res.data
res = null
REM Sync product items
prd1 = ""
j = 0
k = 0
@ -70,10 +66,8 @@ DO WHILE i > 0 AND i < pages
MERGE "maria.Produtos" WITH list BY "Id"
list = items
REM Compute product ids
j = 0
DO WHILE j < ubound(list)
REM Iterate over all variations.
listV = list[j].variacoes
IF listV THEN
k = 0
@ -94,32 +88,30 @@ DO WHILE i > 0 AND i < pages
listV[k].hierarquia = 'f'
DELETE "maria.ProdutoImagem", "sku=" + listV[k].sku
REM Sync the variation's images.
images = listV[k]?.midia?.imagens?.externas
l = 0
DO WHILE l < ubound(images)
images[l].ordinal = k
images[l].sku = listV[k].sku
images[l].id= random()
images[l].id = random()
l = l + 1
LOOP
SAVE "maria.ProdutoImagem", images
images=null
images = null
k = k + 1
LOOP
MERGE "maria.Produtos" WITH listV BY "Id"
END IF
listV=null
listV = null
REM Sync the root product's images.
DELETE "maria.ProdutoImagem", "sku=" + list[j].sku
k = 0
images = list[j].midia?.imagens?.externas
DO WHILE k < ubound(images)
images[k].ordinal = k
images[k].sku = list[j].sku
images[k].id= random()
images[k].id = random()
k = k + 1
LOOP
SAVE "maria.ProdutoImagem", images
@ -130,16 +122,16 @@ DO WHILE i > 0 AND i < pages
IF list?.length < limit THEN
i = 0
END IF
list=null
res=null
items=null
list = null
res = null
items = null
LOOP
SEND EMAIL admin, "Produtos concluído."
SEND EMAIL admin, "Products completed."
RESET REPORT
REM Pedidos
SEND EMAIL admin, "Sincronizando Pedidos..."
' Pedidos
SEND EMAIL admin, "Syncing Orders..."
i = 1
DO WHILE i > 0 AND i < pages
@ -147,7 +139,6 @@ DO WHILE i > 0 AND i < pages
list = res.data
res = null
REM Sync items
j = 0
fullList = []
@ -156,13 +147,11 @@ DO WHILE i > 0 AND i < pages
res = GET host + "/pedidos/vendas/${pedido_id}"
items = res.data.itens
REM Insert the order reference into the item.
k = 0
DO WHILE k < ubound(items)
items[k].pedido_id = pedido_id
items[k].sku = items[k].codigo
items[k].numero = list[j].numero
REM Get the product cost from the supplier marked as default.
items[k].custo = items[k].valor / 2
k = k + 1
LOOP
@ -186,19 +175,19 @@ DO WHILE i > 0 AND i < pages
IF list?.length < limit THEN
i = 0
END IF
list=null
res=null
list = null
res = null
LOOP
SEND EMAIL admin, "Pedidos concluído."
SEND EMAIL admin, "Orders completed."
REM Comuns
pageVariable="pagina"
limitVariable="limite"
' Common entities
pageVariable = "pagina"
limitVariable = "limite"
syncLimit = 100
REM Sincroniza CategoriaReceita
SEND EMAIL admin, "Sincronizando CategoriaReceita..."
' CategoriaReceita
SEND EMAIL admin, "Syncing CategoriaReceita..."
syncPage = 1
totalCategoria = 0
@ -230,10 +219,10 @@ DO WHILE syncPage > 0 AND syncPage <= pages
syncItems = null
LOOP
SEND EMAIL admin, "CategoriaReceita sincronizada: " + totalCategoria + " registros."
SEND EMAIL admin, "CategoriaReceita: " + totalCategoria + " records."
REM Sincroniza Formas de Pagamento
SEND EMAIL admin, "Sincronizando Formas de Pagamento..."
' FormaDePagamento
SEND EMAIL admin, "Syncing Payment Methods..."
syncPage = 1
totalForma = 0
@ -265,17 +254,16 @@ DO WHILE syncPage > 0 AND syncPage <= pages
syncItems = null
LOOP
SEND EMAIL admin, "Formas de Pagamento sincronizadas: " + totalForma + " registros."
SEND EMAIL admin, "Payment Methods: " + totalForma + " records."
REM Contatos
SEND EMAIL admin, "Sincronizando Contatos..."
' Contatos
SEND EMAIL admin, "Syncing Contacts..."
i = 1
DO WHILE i > 0 AND i < pages
res = GET host + "/contatos?pagina=${i}&limite=${limit}${dateFilter} "
res = GET host + "/contatos?pagina=${i}&limite=${limit}${dateFilter}"
list = res.data
REM Sync items
j = 0
items = NEW ARRAY
@ -292,22 +280,20 @@ DO WHILE i > 0 AND i < pages
IF list?.length < limit THEN
i = 0
END IF
list=null
res=null
list = null
res = null
LOOP
SEND EMAIL admin, "Contatos concluído."
SEND EMAIL admin, "Contacts completed."
REM Vendedores
REM Sincroniza Vendedores.
SEND EMAIL admin, "Sincronizando Vendedores..."
' Vendedores
SEND EMAIL admin, "Syncing Sellers..."
i = 1
DO WHILE i > 0 AND i < pages
res = GET host + "/vendedores?pagina=${i}&situacaoContato=T&limite=${limit}${dateFilter}"
list = res.data
REM Sync items
j = 0
items = NEW ARRAY
@ -324,10 +310,9 @@ DO WHILE i > 0 AND i < pages
IF list?.length < limit THEN
i = 0
END IF
list=null
res=null
list = null
res = null
LOOP
SEND EMAIL admin, "Vendedores concluído."
SEND EMAIL admin, "Transferência do ERP para BlingBot concluído."
SEND EMAIL admin, "Sellers completed."
SEND EMAIL admin, "ERP sync completed."

View file

@ -1,18 +1,14 @@
REM SET SCHEDULE "0 30 23 * * *"
SET SCHEDULE "0 30 23 * * *"
i = 1
SEND EMAIL admin, "Sincronismo Estoque iniciado..."
SEND EMAIL admin, "Inventory sync started..."
fullList = FIND "maria.Produtos"
REM Initialize chunk parameters
chunkSize = 100
startIndex = 0
REM ubound(fullList)
DO WHILE startIndex < ubound(fullList)
list = mid( fullList, startIndex, chunkSize)
list = mid(fullList, startIndex, chunkSize)
prd1 = ""
j = 0
items = NEW ARRAY
@ -20,12 +16,11 @@ DO WHILE startIndex < ubound(fullList)
DO WHILE j < ubound(list)
produto_id = list[j].id
prd1 = prd1 + "&idsProdutos%5B%5D=" + produto_id
j = j +1
j = j + 1
LOOP
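' In the loop above, %5B%5D is the URL-encoded "[]", so prd1 accumulates
' repeated array parameters, e.g. "&idsProdutos[]=123&idsProdutos[]=456".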
list = null
REM Sync stock
IF j > 0 THEN
res = GET host + "/estoques/saldos?${prd1}"
WAIT 0.33
@ -52,15 +47,14 @@ DO WHILE startIndex < ubound(fullList)
END IF
pSku = null
k = k +1
k = k + 1
LOOP
items = null
END IF
REM Update startIndex for the next chunk
startIndex = startIndex + chunkSize
items = null
LOOP
fullList = null
SEND EMAIL admin, "Estoque concluído."
SEND EMAIL admin, "Inventory sync completed."

View file

@ -1,8 +1,10 @@
REM Geral
REM Produto Fornecedor
SET SCHEDULE "0 0 22 * * *"
DESCRIPTION "Sync product suppliers from Bling ERP to local database"
SEND EMAIL admin, "Suppliers sync started..."
FUNCTION SyncProdutoFornecedor(idProduto)
REM Sync ProdutoFornecedor.
DELETE "maria.ProdutoFornecedor", "Produto_id=" + idProduto
i1 = 1
@ -12,8 +14,7 @@ FUNCTION SyncProdutoFornecedor(idProduto)
res = null
WAIT 0.33
REM Sync items
let j1 = 0
j1 = 0
items1 = NEW ARRAY
DO WHILE j1 < ubound(list1)
@ -26,48 +27,36 @@ FUNCTION SyncProdutoFornecedor(idProduto)
LOOP
SAVE "maria.ProdutoFornecedor", items1
items1= null
items1 = null
i1 = i1 + 1
IF list1?.length < limit THEN
i1 = 0
END IF
res=null
list1=null
res = null
list1 = null
LOOP
END FUNCTION
i = 1
SEND EMAIL admin, "Sincronismo Fornecedores iniciado..."
fullList = FIND "maria.Produtos"
REM Initialize chunk parameters
chunkSize = 100
startIndex = 0
REM ubound(fullList)
DO WHILE startIndex < ubound(fullList)
list = mid( fullList, startIndex, chunkSize)
list = mid(fullList, startIndex, chunkSize)
REM Sync product items
prd1 = ""
j = 0
items = NEW ARRAY
DO WHILE j < ubound(list)
produto_id = list[j].id
prd1 = prd1 + "&idsProdutos%5B%5D=" + produto_id
CALL SyncProdutoFornecedor(produto_id)
j = j +1
j = j + 1
LOOP
list = null
REM Update startIndex for the next chunk
startIndex = startIndex + chunkSize
items = null
LOOP
fullList = null
SEND EMAIL admin, "Fornecedores concluído."
SEND EMAIL admin, "Suppliers sync completed."

View file

@ -1,10 +1,51 @@
list = FIND "broadcast.csv"
PARAM message AS STRING LIKE "Hello {name}, how are you?" DESCRIPTION "Message to broadcast, supports {name} and {mobile} variables"
PARAM listfile AS STRING LIKE "broadcast.csv" DESCRIPTION "CSV file with contacts (name, mobile columns)"
PARAM filter AS STRING LIKE "status=active" DESCRIPTION "Filter condition for contact list" OPTIONAL
DESCRIPTION "Send broadcast message to a list of contacts from CSV file"
IF NOT listfile THEN
listfile = "broadcast.csv"
END IF
IF filter THEN
list = FIND listfile, filter
ELSE
list = FIND listfile
END IF
IF UBOUND(list) = 0 THEN
TALK "No contacts found in " + listfile
RETURN 0
END IF
index = 1
sent = 0
DO WHILE index < UBOUND(list)
row = list[index]
TALK TO row.mobile, "Hi, " + row.name + ". How are you? How about *General Bots* deployed?"
WAIT 5
SAVE "Log.xlsx", TODAY, NOW, USERNAME, FROM, row.mobile, row.name
index = index + 1
row = list[index]
msg = REPLACE(message, "{name}", row.name)
msg = REPLACE(msg, "{mobile}", row.mobile)
TALK TO row.mobile, msg
WAIT 5
WITH logEntry
timestamp = NOW()
user = USERNAME
from = FROM
mobile = row.mobile
name = row.name
status = "sent"
END WITH
SAVE "Log.xlsx", logEntry
sent = sent + 1
index = index + 1
LOOP
TALK "The broadcast has been sent."
TALK "Broadcast sent to " + sent + " contacts."
RETURN sent

View file

@ -2,7 +2,7 @@
' LGPD Art. 18, GDPR Art. 17, HIPAA (where applicable)
' This dialog handles user requests to delete their personal data
TALK "🔒 **Data Deletion Request**"
TALK "Data Deletion Request"
TALK "I can help you exercise your right to have your personal data deleted."
TALK "This is also known as the 'Right to be Forgotten' under LGPD and GDPR."
@ -13,7 +13,7 @@ HEAR email AS EMAIL WITH "Please enter your registered email address:"
' Verify email exists in system
user = FIND "users.csv" WHERE email = email
IF user IS NULL THEN
TALK "⚠️ I couldn't find an account with that email address."
TALK "I couldn't find an account with that email address."
TALK "Please check the email and try again, or contact support@company.com"
EXIT
END IF
@ -37,35 +37,35 @@ HEAR entered_code AS INTEGER WITH "I've sent a verification code to your email.
stored_code = GET BOT MEMORY "verification_" + email
IF entered_code <> stored_code THEN
TALK "Invalid verification code. Please try again."
EXIT
END IF
TALK "Identity verified."
TALK ""
TALK "**What data would you like to delete?**"
TALK "What data would you like to delete?"
TALK ""
TALK "1️⃣ All my personal data (complete account deletion)"
TALK "2️⃣ Conversation history only"
TALK "3️⃣ Files and documents only"
TALK "4️⃣ Activity logs and analytics"
TALK "5️⃣ Specific data categories (I'll choose)"
TALK "6️⃣ Cancel this request"
TALK "1. All my personal data (complete account deletion)"
TALK "2. Conversation history only"
TALK "3. Files and documents only"
TALK "4. Activity logs and analytics"
TALK "5. Specific data categories (I'll choose)"
TALK "6. Cancel this request"
HEAR deletion_choice AS INTEGER WITH "Please enter your choice (1-6):"
SELECT CASE deletion_choice
CASE 1
deletion_type = "complete"
TALK "⚠️ **Complete Account Deletion**"
TALK "Complete Account Deletion"
TALK "This will permanently delete:"
TALK " Your user profile and account"
TALK " All conversation history"
TALK " All uploaded files and documents"
TALK " All activity logs"
TALK " All preferences and settings"
TALK "- Your user profile and account"
TALK "- All conversation history"
TALK "- All uploaded files and documents"
TALK "- All activity logs"
TALK "- All preferences and settings"
TALK ""
TALK "**This action cannot be undone.**"
TALK "This action cannot be undone."
CASE 2
deletion_type = "conversations"
@ -95,12 +95,12 @@ END SELECT
' Explain data retention exceptions
TALK ""
TALK "📋 **Legal Notice:**"
TALK "Legal Notice:"
TALK "Some data may be retained for legal compliance purposes:"
TALK " Financial records (tax requirements)"
TALK " Legal dispute documentation"
TALK " Fraud prevention records"
TALK " Regulatory compliance data"
TALK "- Financial records (tax requirements)"
TALK "- Legal dispute documentation"
TALK "- Fraud prevention records"
TALK "- Regulatory compliance data"
TALK ""
TALK "Retained data will be minimized and protected according to law."
@ -169,13 +169,13 @@ Dear User,
Your data deletion request has been received and processed.
**Request Details:**
Request Details:
- Request ID: " + request_id + "
- Request Date: " + FORMAT(request_date, "YYYY-MM-DD HH:mm") + "
- Deletion Type: " + deletion_type + "
- Status: Completed
**What happens next:**
What happens next:
" + IF(deletion_type = "complete", "
- Your account will be fully deleted within 30 days
- You will receive a final confirmation email
@ -185,7 +185,7 @@ Your data deletion request has been received and processed.
- Some backups may take up to 30 days to purge
") + "
**Your Rights:**
Your Rights:
- You can request a copy of any retained data
- You can file a complaint with your data protection authority
- Contact us at privacy@company.com for questions
@ -202,10 +202,10 @@ Request Reference: " + request_id + "
"
TALK ""
TALK "✅ **Request Completed**"
TALK "Request Completed"
TALK ""
TALK "Your deletion request has been processed."
TALK "Request ID: **" + request_id + "**"
TALK "Request ID: " + request_id
TALK ""
TALK "A confirmation email has been sent to " + email
TALK ""


@ -5,7 +5,7 @@
' This dialog handles user requests to access their personal data
' Companies can install this template for LGPD/GDPR compliance
TALK "📋 **Data Access Request**"
TALK "Data Access Request"
TALK "You have the right to access all personal data we hold about you."
TALK ""
@ -16,7 +16,7 @@ HEAR email AS EMAIL WITH "Please provide your registered email address:"
' Check if email exists in system
user = FIND "users" WHERE email = email
IF user IS NULL THEN
TALK "We couldn't find an account with that email address."
TALK "We couldn't find an account with that email address."
TALK "Please check the email and try again, or contact support."
EXIT
END IF
@ -34,18 +34,18 @@ This code expires in 15 minutes.
If you did not request this, please ignore this email.
"
HEAR entered_code AS TEXT WITH "📧 We sent a verification code to your email. Please enter it:"
HEAR entered_code AS TEXT WITH "We sent a verification code to your email. Please enter it:"
IF entered_code <> code THEN
TALK "Invalid verification code. Please start over."
TALK "Invalid verification code. Please start over."
EXIT
END IF
TALK "Identity verified successfully!"
TALK "Identity verified successfully!"
TALK ""
' Gather all user data
TALK "🔍 Gathering your personal data... This may take a moment."
TALK "Gathering your personal data... This may take a moment."
TALK ""
' Get user profile data
@ -131,22 +131,22 @@ INSERT INTO "privacy_requests" VALUES {
"legal_basis": "LGPD Art. 18 / GDPR Art. 15"
}
TALK "✅ **Request Complete!**"
TALK "Request Complete!"
TALK ""
TALK "📧 We have sent a comprehensive report to: " + email
TALK "We have sent a comprehensive report to: " + email
TALK ""
TALK "The report includes:"
TALK " Your profile information"
TALK " " + COUNT(sessions) + " session records"
TALK " " + COUNT(messages) + " message records"
TALK " " + COUNT(files) + " files"
TALK " Consent history"
TALK " Activity logs"
TALK "- Your profile information"
TALK "- " + COUNT(sessions) + " session records"
TALK "- " + COUNT(messages) + " message records"
TALK "- " + COUNT(files) + " files"
TALK "- Consent history"
TALK "- Activity logs"
TALK ""
TALK "You can also download the report from your account settings."
TALK ""
TALK "🔒 **Your other privacy rights:**"
TALK "• Say **'correct my data'** to update your information"
TALK "• Say **'delete my data'** to request data erasure"
TALK "• Say **'export my data'** for portable format"
TALK "• Say **'privacy settings'** to manage consents"
TALK "Your other privacy rights:"
TALK "- Say 'correct my data' to update your information"
TALK "- Say 'delete my data' to request data erasure"
TALK "- Say 'export my data' for portable format"
TALK "- Say 'privacy settings' to manage consents"


@ -1,39 +1,51 @@
' =============================================================================
' Privacy Rights Center - LGPD/GDPR Compliance Dialog
' General Bots Template for Data Subject Rights Management
' =============================================================================
' This template helps organizations comply with:
' - LGPD (Lei Geral de Proteção de Dados - Brazil)
' - GDPR (General Data Protection Regulation - EU)
' - CCPA (California Consumer Privacy Act)
' =============================================================================
ADD TOOL "request-data"
ADD TOOL "export-data"
ADD TOOL "delete-data"
ADD TOOL "manage-consents"
ADD TOOL "rectify-data"
ADD TOOL "object-processing"
TALK "Welcome to the Privacy Rights Center. I can help you exercise your data protection rights."
TALK "As a data subject, you have the following rights under LGPD/GDPR:"
USE KB "privacy.gbkb"
TALK "1. Right of Access - View all data we hold about you"
TALK "2. Right to Rectification - Correct inaccurate data"
TALK "3. Right to Erasure - Request deletion of your data"
TALK "4. Right to Portability - Export your data"
TALK "5. Right to Object - Opt-out of certain processing"
TALK "6. Consent Management - Review and update your consents"
CLEAR SUGGESTIONS
HEAR choice AS TEXT WITH "What would you like to do? (1-6 or type your request)"
ADD SUGGESTION "access" AS "View my data"
ADD SUGGESTION "export" AS "Export my data"
ADD SUGGESTION "delete" AS "Delete my data"
ADD SUGGESTION "consents" AS "Manage consents"
ADD SUGGESTION "correct" AS "Correct my data"
ADD SUGGESTION "object" AS "Object to processing"
SELECT CASE choice
CASE "1", "access", "view", "see my data"
CALL "access-data.bas"
SET CONTEXT "privacy rights" AS "You are a Privacy Rights Center assistant helping users exercise their data protection rights under LGPD, GDPR, and CCPA. Help with data access, rectification, erasure, portability, and consent management."
CASE "2", "rectification", "correct", "update", "fix"
CALL "rectify-data.bas"
BEGIN TALK
**Privacy Rights Center**
CASE "3", "erasure", "delete", "remove", "forget me"
CALL "erase-data.bas"
As a data subject, you have the following rights:
CASE "4", "portability", "export", "download"
CALL "export-data.bas"
1. **Access** - View all data we hold about you
2. **Rectification** - Correct inaccurate data
3. **Erasure** - Request deletion of your data
4. **Portability** - Export your data
5. **Object** - Opt-out of certain processing
6. **Consent** - Review and update your consents
CASE "5", "object", "opt-out", "stop processing"
CALL "object-processing.bas"
Select an option or describe your request.
END TALK
CASE "6", "consent", "cons
BEGIN SYSTEM PROMPT
You are a Privacy Rights Center assistant for LGPD/GDPR/CCPA compliance.
Data subject rights:
- Right of Access: View all personal data
- Right to Rectification: Correct inaccurate data
- Right to Erasure: Delete personal data (right to be forgotten)
- Right to Portability: Export data in machine-readable format
- Right to Object: Opt-out of marketing, profiling, etc.
- Consent Management: Review and withdraw consents
Always verify identity before processing sensitive requests.
Log all privacy requests for compliance audit.
Provide clear timelines for request fulfillment.
Escalate complex requests to the Data Protection Officer.
END SYSTEM PROMPT


@ -1,85 +1,56 @@
PARAM firstname AS STRING LIKE "John" DESCRIPTION "First name of the contact"
PARAM lastname AS STRING LIKE "Smith" DESCRIPTION "Last name of the contact"
PARAM email AS STRING LIKE "john.smith@company.com" DESCRIPTION "Email address of the contact"
PARAM phone AS STRING LIKE "+1-555-123-4567" DESCRIPTION "Phone number of the contact"
PARAM companyname AS STRING LIKE "Acme Corporation" DESCRIPTION "Company or organization name"
PARAM email AS EMAIL LIKE "john.smith@company.com" DESCRIPTION "Email address"
PARAM phone AS PHONE LIKE "+1-555-123-4567" DESCRIPTION "Phone number"
PARAM companyname AS STRING LIKE "Acme Corporation" DESCRIPTION "Company or organization"
PARAM jobtitle AS STRING LIKE "Sales Manager" DESCRIPTION "Job title or role"
PARAM tags AS STRING LIKE "customer,vip" DESCRIPTION "Optional comma-separated tags"
PARAM notes AS STRING LIKE "Met at conference" DESCRIPTION "Optional notes about the contact"
PARAM tags AS STRING LIKE "customer,vip" DESCRIPTION "Comma-separated tags" OPTIONAL
PARAM notes AS STRING LIKE "Met at conference" DESCRIPTION "Notes about the contact" OPTIONAL
DESCRIPTION "Adds a new contact to the directory. Collects contact information and saves to the contacts database."
DESCRIPTION "Add a new contact to the directory with contact information"
' Validate required fields
IF firstname = "" THEN
TALK "What is the contact's first name?"
HEAR firstname AS STRING
END IF
contactid = "CON-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
createdat = FORMAT(NOW(), "YYYY-MM-DD HH:mm:ss")
createdby = GET "session.user_email"
fullname = firstname + " " + lastname
IF lastname = "" THEN
TALK "What is the contact's last name?"
HEAR lastname AS STRING
END IF
IF email = "" THEN
TALK "What is the contact's email address?"
HEAR email AS EMAIL
END IF
' Generate contact ID
let contactid = "CON-" + FORMAT NOW() AS "YYYYMMDD" + "-" + FORMAT RANDOM(1000, 9999)
' Set timestamps
let createdat = FORMAT NOW() AS "YYYY-MM-DD HH:mm:ss"
let createdby = GET "session.user_email"
' Build full name
let fullname = firstname + " " + lastname
' Save the contact
SAVE "contacts.csv", contactid, firstname, lastname, fullname, email, phone, companyname, jobtitle, tags, notes, createdby, createdat
' Store in bot memory
SET BOT MEMORY "last_contact", contactid
' If company provided, check if it exists and add if not
IF companyname != "" THEN
let existingcompany = FIND "companies.csv", "name=" + companyname
let companycount = AGGREGATE "COUNT", existingcompany, "id"
IF companyname THEN
existingcompany = FIND "companies.csv", "name=" + companyname
companycount = AGGREGATE "COUNT", existingcompany, "id"
IF companycount = 0 THEN
let companyid = "COMP-" + FORMAT NOW() AS "YYYYMMDD" + "-" + FORMAT RANDOM(1000, 9999)
companyid = "COMP-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
SAVE "companies.csv", companyid, companyname, createdat
TALK "📝 Note: Company '" + companyname + "' was also added to the directory."
END IF
END IF
' Log activity
let activity = "Contact created: " + fullname
SAVE "contact_activities.csv", contactid, activity, createdby, createdat
WITH activity
contactid = contactid
action = "Contact created: " + fullname
createdby = createdby
createdat = createdat
END WITH
' Respond to user
TALK "✅ **Contact Added Successfully!**"
TALK ""
TALK "**Contact Details:**"
TALK "📋 **ID:** " + contactid
TALK "👤 **Name:** " + fullname
TALK "📧 **Email:** " + email
SAVE "contact_activities.csv", activity
IF phone != "" THEN
TALK "📱 **Phone:** " + phone
TALK "Contact added: " + fullname
TALK "ID: " + contactid
TALK "Email: " + email
IF phone THEN
TALK "Phone: " + phone
END IF
IF companyname != "" THEN
TALK "🏢 **Company:** " + companyname
IF companyname THEN
TALK "Company: " + companyname
END IF
IF jobtitle != "" THEN
TALK "💼 **Title:** " + jobtitle
IF jobtitle THEN
TALK "Title: " + jobtitle
END IF
IF tags != "" THEN
TALK "🏷️ **Tags:** " + tags
END IF
TALK ""
TALK "You can find this contact anytime by searching for their name or email."
RETURN contactid


@ -1,106 +1,69 @@
PARAM searchterm AS STRING LIKE "john" DESCRIPTION "Name, email, company, or phone to search for"
PARAM searchby AS STRING LIKE "all" DESCRIPTION "Optional: Filter by field - all, name, email, company, phone"
PARAM searchby AS STRING LIKE "all" DESCRIPTION "Filter by field: all, name, email, company, phone"
DESCRIPTION "Searches the contact directory by name, email, company, or phone number. Returns matching contacts with their details."
DESCRIPTION "Search contact directory by name, email, company, or phone number"
' Validate search term
IF searchterm = "" THEN
TALK "What would you like to search for? You can enter a name, email, company, or phone number."
HEAR searchterm AS STRING
END IF
IF searchterm = "" THEN
TALK "I need a search term to find contacts."
RETURN
END IF
' Set default search scope
IF searchby = "" THEN
IF NOT searchby THEN
searchby = "all"
END IF
TALK "🔍 Searching contacts for: **" + searchterm + "**..."
TALK ""
TALK "Searching contacts for: " + searchterm
' Search based on field
let results = []
results = []
IF searchby = "all" OR searchby = "name" THEN
let nameresults = FIND "contacts.csv", "fullname LIKE " + searchterm
nameresults = FIND "contacts.csv", "fullname LIKE " + searchterm
results = MERGE results, nameresults
END IF
IF searchby = "all" OR searchby = "email" THEN
let emailresults = FIND "contacts.csv", "email LIKE " + searchterm
emailresults = FIND "contacts.csv", "email LIKE " + searchterm
results = MERGE results, emailresults
END IF
IF searchby = "all" OR searchby = "company" THEN
let companyresults = FIND "contacts.csv", "companyname LIKE " + searchterm
companyresults = FIND "contacts.csv", "companyname LIKE " + searchterm
results = MERGE results, companyresults
END IF
IF searchby = "all" OR searchby = "phone" THEN
let phoneresults = FIND "contacts.csv", "phone LIKE " + searchterm
phoneresults = FIND "contacts.csv", "phone LIKE " + searchterm
results = MERGE results, phoneresults
END IF
' Count results
let resultcount = AGGREGATE "COUNT", results, "contactid"
resultcount = UBOUND(results)
IF resultcount = 0 THEN
TALK "❌ No contacts found matching **" + searchterm + "**"
TALK ""
TALK "💡 **Tips:**"
TALK "• Try a partial name or email"
TALK "• Check spelling"
TALK "• Search by company name"
TALK ""
TALK "Would you like to add a new contact instead?"
TALK "No contacts found matching: " + searchterm
RETURN
END IF
' Display results
IF resultcount = 1 THEN
TALK "✅ Found 1 contact:"
ELSE
TALK "✅ Found " + resultcount + " contacts:"
END IF
TALK "Found " + resultcount + " contact(s):"
TALK ""
' Show each result
FOR EACH contact IN results
TALK "---"
TALK "👤 **" + contact.fullname + "**"
TALK "📧 " + contact.email
TALK "**" + contact.fullname + "**"
TALK contact.email
IF contact.phone != "" THEN
TALK "📱 " + contact.phone
IF contact.phone <> "" THEN
TALK contact.phone
END IF
IF contact.companyname != "" THEN
TALK "🏢 " + contact.companyname
IF contact.companyname <> "" THEN
TALK contact.companyname
END IF
IF contact.jobtitle != "" THEN
TALK "💼 " + contact.jobtitle
IF contact.jobtitle <> "" THEN
TALK contact.jobtitle
END IF
IF contact.tags != "" THEN
TALK "🏷️ " + contact.tags
END IF
TALK "ID: " + contact.contactid
NEXT
TALK "📋 ID: " + contact.contactid
TALK ""
NEXT contact
' Store first result in memory for quick reference
IF resultcount > 0 THEN
let firstcontact = FIRST results
firstcontact = FIRST results
SET BOT MEMORY "last_contact", firstcontact.contactid
SET BOT MEMORY "last_search", searchterm
END IF
TALK "---"
TALK "💡 To view more details or update a contact, just ask!"
RETURN results


@ -1,10 +1,3 @@
' Contact Directory Template - Start Script
' Manages contacts, companies, and contact information
' ============================================================================
' SETUP TOOLS - Register available contact management tools
' ============================================================================
ADD TOOL "add-contact"
ADD TOOL "search-contact"
ADD TOOL "update-contact"
@ -12,21 +5,9 @@ ADD TOOL "list-contacts"
ADD TOOL "add-company"
ADD TOOL "contact-history"
' ============================================================================
' SETUP KNOWLEDGE BASE
' ============================================================================
USE KB "contacts.gbkb"
' ============================================================================
' SET CONTEXT FOR AI
' ============================================================================
SET CONTEXT "contact directory" AS "You are a contact management assistant helping organize and search contacts. You can help with: adding new contacts, searching the directory, updating contact information, managing company records, and viewing contact history. Always maintain data accuracy and help users find contacts quickly."
' ============================================================================
' SETUP SUGGESTIONS
' ============================================================================
SET CONTEXT "contact directory" AS "You are a contact management assistant helping organize and search contacts. Help with adding new contacts, searching the directory, updating contact information, managing company records, and viewing contact history."
CLEAR SUGGESTIONS
@ -36,50 +17,26 @@ ADD SUGGESTION "companies" AS "View companies"
ADD SUGGESTION "recent" AS "Recent contacts"
ADD SUGGESTION "export" AS "Export contacts"
' ============================================================================
' WELCOME MESSAGE
' ============================================================================
BEGIN TALK
📇 **Contact Directory**
**Contact Directory**
Welcome! I'm your contact management assistant.
I can help you with:
Add new contacts and companies
Search by name, email, or company
Update contact information
Manage company records
View contact history
Export contact lists
**What I can help you with:**
Add new contacts and companies
🔍 Search contacts by name, email, or company
Update contact information
🏢 Manage company records
📋 View contact history and notes
📤 Export contact lists
Just tell me what you need or select an option below!
Select an option or tell me what you need.
END TALK
' ============================================================================
' SYSTEM PROMPT FOR AI INTERACTIONS
' ============================================================================
BEGIN SYSTEM PROMPT
You are a contact directory assistant. Your responsibilities include:
You are a contact directory assistant.
1. Helping users add and manage contact records
2. Searching contacts efficiently by any field
3. Maintaining accurate contact information
4. Organizing contacts by company and tags
5. Tracking interaction history with contacts
Contact fields: name, email, phone, company, job title, address, tags, notes.
Contact fields include:
- Name (first and last)
- Email address
- Phone number
- Company name
- Job title
- Address
- Tags/categories
- Notes
Always confirm before making changes to existing contacts.
When searching, be flexible with partial matches.
Suggest adding missing information when appropriate.
Confirm before making changes to existing contacts.
Be flexible with partial matches when searching.
Suggest adding missing information when appropriate.
END SYSTEM PROMPT


@ -1,53 +1,27 @@
PARAM dealname AS STRING LIKE "Acme Corp Enterprise License" DESCRIPTION "Name of the deal or opportunity"
PARAM companyname AS STRING LIKE "Acme Corporation" DESCRIPTION "Company or account name"
PARAM contactemail AS STRING LIKE "john@acme.com" DESCRIPTION "Primary contact email"
PARAM dealvalue AS NUMBER LIKE 50000 DESCRIPTION "Estimated deal value in dollars"
PARAM stage AS STRING LIKE "Lead" DESCRIPTION "Initial stage: Lead, Qualified, Proposal, Negotiation"
PARAM closedate AS STRING LIKE "2025-03-30" DESCRIPTION "Expected close date"
PARAM notes AS STRING LIKE "Met at trade show" DESCRIPTION "Optional notes about the deal"
PARAM contactemail AS EMAIL LIKE "john@acme.com" DESCRIPTION "Primary contact email"
PARAM dealvalue AS MONEY LIKE 50000 DESCRIPTION "Estimated deal value in dollars"
PARAM stage AS STRING LIKE "Lead" DESCRIPTION "Initial stage: Lead, Qualified, Proposal, Negotiation" OPTIONAL
PARAM closedate AS DATE LIKE "2025-03-30" DESCRIPTION "Expected close date" OPTIONAL
PARAM notes AS STRING LIKE "Met at trade show" DESCRIPTION "Notes about the deal" OPTIONAL
DESCRIPTION "Creates a new sales deal in the pipeline. Collects deal information and saves to the deals database."
DESCRIPTION "Create a new sales deal in the pipeline with deal information and value tracking"
' Validate required fields
IF dealname = "" THEN
TALK "What is the name of this deal?"
HEAR dealname AS STRING
END IF
IF companyname = "" THEN
TALK "What company is this deal with?"
HEAR companyname AS STRING
END IF
IF contactemail = "" THEN
TALK "What is the primary contact's email?"
HEAR contactemail AS EMAIL
END IF
IF dealvalue = 0 THEN
TALK "What is the estimated deal value?"
HEAR dealvalue AS NUMBER
END IF
' Set defaults for optional fields
IF stage = "" THEN
IF NOT stage THEN
stage = "Lead"
END IF
IF closedate = "" THEN
closedate = FORMAT DATEADD(TODAY(), 30, "day") AS "YYYY-MM-DD"
IF NOT closedate THEN
closedate = DATEADD(TODAY(), 30, "day")
END IF
' Generate deal ID
let dealid = "DEAL-" + FORMAT NOW() AS "YYYYMMDD" + "-" + FORMAT RANDOM(1000, 9999)
dealid = "DEAL-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
createdat = FORMAT(NOW(), "YYYY-MM-DD HH:mm:ss")
ownerid = GET "session.user_id"
owneremail = GET "session.user_email"
' Set timestamps and owner
let createdat = FORMAT NOW() AS "YYYY-MM-DD HH:mm:ss"
let ownerid = GET "session.user_id"
let owneremail = GET "session.user_email"
let probability = 10
' Set probability based on stage
probability = 10
IF stage = "Qualified" THEN
probability = 25
ELSE IF stage = "Proposal" THEN
@ -56,30 +30,43 @@ ELSE IF stage = "Negotiation" THEN
probability = 75
END IF
' Calculate weighted value
let weightedvalue = dealvalue * probability / 100
weightedvalue = dealvalue * probability / 100
' Save the deal
SAVE "deals.csv", dealid, dealname, companyname, contactemail, dealvalue, stage, closedate, probability, weightedvalue, notes, ownerid, owneremail, createdat
WITH deal
id = dealid
name = dealname
company = companyname
contact = contactemail
value = dealvalue
currentStage = stage
expectedClose = closedate
prob = probability
weighted = weightedvalue
dealNotes = notes
owner = ownerid
ownerEmail = owneremail
created = createdat
END WITH
SAVE "deals.csv", deal
' Store in bot memory
SET BOT MEMORY "last_deal", dealid
' Log activity
let activity = "Deal created: " + dealname
SAVE "deal_activities.csv", dealid, activity, owneremail, createdat
WITH dealActivity
dealId = dealid
action = "Deal created: " + dealname
user = owneremail
timestamp = createdat
END WITH
' Respond to user
TALK "✅ **Deal Created Successfully!**"
TALK ""
TALK "**Deal Details:**"
TALK "📋 **ID:** " + dealid
TALK "💼 **Name:** " + dealname
TALK "🏢 **Company:** " + companyname
TALK "📧 **Contact:** " + contactemail
TALK "💰 **Value:** $" + FORMAT dealvalue AS "#,##0"
TALK "📊 **Stage:** " + stage + " (" + probability + "% probability)"
TALK "📅 **Expected Close:** " + closedate
TALK "💵 **Weighted Value:** $" + FORMAT weightedvalue AS "#,##0"
TALK ""
TALK "📝 You can update this deal anytime by saying **update deal " + dealid + "**"
SAVE "deal_activities.csv", dealActivity
TALK "Deal created: " + dealname
TALK "ID: " + dealid
TALK "Company: " + companyname
TALK "Value: $" + FORMAT(dealvalue, "#,##0")
TALK "Stage: " + stage + " (" + probability + "% probability)"
TALK "Expected Close: " + closedate
TALK "Weighted Value: $" + FORMAT(weightedvalue, "#,##0")
RETURN dealid
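For reference, the stage-to-probability mapping above drives the weighted pipeline value. A minimal hand-traced sketch in the same dialect (the deal value is assumed for illustration; the probability mirrors the Qualified case visible in this script):

```
' Qualified-stage deal: probability = 25 per the mapping above
dealvalue = 40000
probability = 25
weightedvalue = dealvalue * probability / 100   ' 10000
```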


@ -1,10 +1,3 @@
' Sales Pipeline Template - Start Script
' Manages sales deals, stages, and pipeline analytics
' ============================================================================
' SETUP TOOLS - Register available sales pipeline tools
' ============================================================================
ADD TOOL "create-deal"
ADD TOOL "update-stage"
ADD TOOL "list-deals"
@ -12,21 +5,9 @@ ADD TOOL "deal-details"
ADD TOOL "pipeline-report"
ADD TOOL "forecast-revenue"
' ============================================================================
' SETUP KNOWLEDGE BASE
' ============================================================================
USE KB "sales-pipeline.gbkb"
' ============================================================================
' SET CONTEXT FOR AI
' ============================================================================
SET CONTEXT "sales pipeline" AS "You are a sales assistant helping manage the sales pipeline. You can help with: creating new deals, updating deal stages, viewing pipeline status, generating sales forecasts, and analyzing win/loss rates. Always be encouraging and help sales reps close more deals."
' ============================================================================
' SETUP SUGGESTIONS
' ============================================================================
SET CONTEXT "sales pipeline" AS "You are a sales assistant helping manage the sales pipeline. Help with creating new deals, updating deal stages, viewing pipeline status, generating sales forecasts, and analyzing win/loss rates."
CLEAR SUGGESTIONS
@ -36,48 +17,32 @@ ADD SUGGESTION "update" AS "Update a deal stage"
ADD SUGGESTION "forecast" AS "View sales forecast"
ADD SUGGESTION "report" AS "Generate pipeline report"
' ============================================================================
' WELCOME MESSAGE
' ============================================================================
BEGIN TALK
💼 **Sales Pipeline Manager**
**Sales Pipeline Manager**
Welcome! I'm your sales assistant for managing deals and pipeline.
I can help you with:
Create new deals and opportunities
View and manage your pipeline
Update deal stages
Generate sales forecasts
Pipeline analytics and reports
Track win/loss rates
**What I can help you with:**
Create new deals and opportunities
📊 View and manage your pipeline
🔄 Update deal stages (Lead → Qualified → Proposal → Negotiation → Closed)
📈 Generate sales forecasts
📋 Pipeline analytics and reports
🏆 Track win/loss rates
Just tell me what you need or select an option below!
Select an option or tell me what you need.
END TALK
' ============================================================================
' SYSTEM PROMPT FOR AI INTERACTIONS
' ============================================================================
BEGIN SYSTEM PROMPT
You are a sales pipeline assistant. Your responsibilities include:
You are a sales pipeline assistant.
1. Helping sales reps create and manage deals
2. Tracking deal progression through pipeline stages
3. Providing pipeline visibility and forecasts
4. Analyzing sales performance metrics
5. Suggesting next best actions for deals
Pipeline stages:
- Lead: Initial contact, not qualified
- Qualified: Budget, authority, need, timeline confirmed
- Proposal: Quote sent
- Negotiation: Active discussions
- Closed Won: Successfully closed
- Closed Lost: Lost or no decision
Pipeline stages in order:
- Lead: Initial contact, not yet qualified
- Qualified: Confirmed budget, authority, need, timeline
- Proposal: Quote or proposal sent
- Negotiation: Active discussions on terms
- Closed Won: Deal successfully closed
- Closed Lost: Deal lost to competitor or no decision
Always encourage sales reps and provide actionable insights.
When updating deals, confirm the changes before saving.
Use deal values in currency format when displaying amounts.
Always encourage sales reps and provide actionable insights.
Confirm changes before saving.
Use currency format for amounts.
END SYSTEM PROMPT


@ -1,217 +1,102 @@
REM General Bots: CALCULATE Keyword - Universal Math Calculator
REM Perform mathematical calculations and conversions
REM Can be used by ANY template that needs math operations
PARAM expression AS STRING LIKE "2 + 2" DESCRIPTION "Mathematical expression to calculate"
PARAM expression AS string LIKE "2 + 2"
DESCRIPTION "Calculate mathematical expressions, conversions, and formulas"
REM Validate input
IF NOT expression OR expression = "" THEN
TALK "❌ Please provide a mathematical expression"
TALK "💡 Examples: '2 + 2', '10 * 5', '100 / 4', 'sqrt(16)', 'sin(45)'"
RETURN NULL
END IF
WITH result
expression = expression
timestamp = NOW()
END WITH
TALK "🧮 Calculating: " + expression
REM Create result object
result = NEW OBJECT
result.expression = expression
result.timestamp = NOW()
REM Try to evaluate the expression
REM This is a simplified calculator - extend as needed
REM Remove spaces
expr = REPLACE(expression, " ", "")
REM Basic operations
IF INSTR(expr, "+") > 0 THEN
parts = SPLIT(expr, "+")
IF UBOUND(parts) = 2 THEN
num1 = VAL(parts[0])
num2 = VAL(parts[1])
answer = num1 + num2
result.answer = answer
result.answer = VAL(parts[0]) + VAL(parts[1])
result.operation = "addition"
END IF
ELSE IF INSTR(expr, "-") > 0 AND LEFT(expr, 1) <> "-" THEN
parts = SPLIT(expr, "-")
IF UBOUND(parts) = 2 THEN
num1 = VAL(parts[0])
num2 = VAL(parts[1])
answer = num1 - num2
result.answer = answer
result.answer = VAL(parts[0]) - VAL(parts[1])
result.operation = "subtraction"
END IF
ELSE IF INSTR(expr, "*") > 0 THEN
parts = SPLIT(expr, "*")
IF UBOUND(parts) = 2 THEN
num1 = VAL(parts[0])
num2 = VAL(parts[1])
answer = num1 * num2
result.answer = answer
result.answer = VAL(parts[0]) * VAL(parts[1])
result.operation = "multiplication"
END IF
ELSE IF INSTR(expr, "/") > 0 THEN
parts = SPLIT(expr, "/")
IF UBOUND(parts) = 2 THEN
num1 = VAL(parts[0])
num2 = VAL(parts[1])
IF num2 <> 0 THEN
answer = num1 / num2
result.answer = answer
IF VAL(parts[1]) <> 0 THEN
result.answer = VAL(parts[0]) / VAL(parts[1])
result.operation = "division"
ELSE
TALK "Error: Division by zero"
TALK "Error: Division by zero"
RETURN NULL
END IF
END IF
ELSE IF INSTR(LCASE(expr), "sqrt") > 0 THEN
REM Square root
start_pos = INSTR(LCASE(expr), "sqrt(") + 5
end_pos = INSTR(start_pos, expr, ")")
IF end_pos > start_pos THEN
num_str = MID(expr, start_pos, end_pos - start_pos)
num = VAL(num_str)
num = VAL(MID(expr, start_pos, end_pos - start_pos))
IF num >= 0 THEN
answer = SQR(num)
result.answer = answer
result.answer = SQR(num)
result.operation = "square root"
ELSE
TALK "Error: Cannot calculate square root of negative number"
TALK "Error: Cannot calculate square root of negative number"
RETURN NULL
END IF
END IF
ELSE IF INSTR(LCASE(expr), "pow") > 0 OR INSTR(expr, "^") > 0 THEN
REM Power operation
IF INSTR(expr, "^") > 0 THEN
parts = SPLIT(expr, "^")
IF UBOUND(parts) = 2 THEN
base = VAL(parts[0])
exponent = VAL(parts[1])
answer = base ^ exponent
result.answer = answer
result.operation = "power"
END IF
ELSE IF INSTR(expr, "^") > 0 THEN
parts = SPLIT(expr, "^")
IF UBOUND(parts) = 2 THEN
result.answer = VAL(parts[0]) ^ VAL(parts[1])
result.operation = "power"
END IF
ELSE IF INSTR(LCASE(expr), "abs") > 0 THEN
REM Absolute value
start_pos = INSTR(LCASE(expr), "abs(") + 4
end_pos = INSTR(start_pos, expr, ")")
IF end_pos > start_pos THEN
num_str = MID(expr, start_pos, end_pos - start_pos)
num = VAL(num_str)
answer = ABS(num)
result.answer = answer
result.answer = ABS(VAL(MID(expr, start_pos, end_pos - start_pos)))
result.operation = "absolute value"
END IF
ELSE IF INSTR(LCASE(expr), "round") > 0 THEN
REM Rounding
start_pos = INSTR(LCASE(expr), "round(") + 6
end_pos = INSTR(start_pos, expr, ")")
IF end_pos > start_pos THEN
num_str = MID(expr, start_pos, end_pos - start_pos)
num = VAL(num_str)
answer = ROUND(num, 0)
result.answer = answer
result.answer = ROUND(VAL(MID(expr, start_pos, end_pos - start_pos)), 0)
result.operation = "rounding"
END IF
ELSE IF INSTR(LCASE(expr), "ceil") > 0 THEN
REM Ceiling
start_pos = INSTR(LCASE(expr), "ceil(") + 5
end_pos = INSTR(start_pos, expr, ")")
IF end_pos > start_pos THEN
num_str = MID(expr, start_pos, end_pos - start_pos)
num = VAL(num_str)
answer = INT(num)
IF num > answer THEN
answer = answer + 1
END IF
result.answer = answer
result.operation = "ceiling"
END IF
ELSE IF INSTR(LCASE(expr), "floor") > 0 THEN
REM Floor
start_pos = INSTR(LCASE(expr), "floor(") + 6
end_pos = INSTR(start_pos, expr, ")")
IF end_pos > start_pos THEN
num_str = MID(expr, start_pos, end_pos - start_pos)
num = VAL(num_str)
answer = INT(num)
result.answer = answer
result.operation = "floor"
END IF
ELSE IF INSTR(LCASE(expr), "percent") > 0 OR INSTR(expr, "%") > 0 THEN
REM Percentage calculation
REM Format: "20% of 100" or "20 percent of 100"
ELSE IF INSTR(expr, "%") > 0 AND INSTR(LCASE(expr), "of") > 0 THEN
expr_lower = LCASE(expr)
IF INSTR(expr_lower, "of") > 0 THEN
REM Extract percentage and base number
of_pos = INSTR(expr_lower, "of")
percent_part = LEFT(expr, of_pos - 1)
percent_part = REPLACE(percent_part, "%", "")
percent_part = REPLACE(LCASE(percent_part), "percent", "")
percent_val = VAL(TRIM(percent_part))
base_part = MID(expr, of_pos + 2)
base_val = VAL(TRIM(base_part))
answer = (percent_val / 100) * base_val
result.answer = answer
result.operation = "percentage"
result.details = percent_val + "% of " + base_val + " = " + answer
END IF
of_pos = INSTR(expr_lower, "of")
percent_part = REPLACE(LEFT(expr, of_pos - 1), "%", "")
percent_val = VAL(TRIM(percent_part))
base_val = VAL(TRIM(MID(expr, of_pos + 2)))
result.answer = (percent_val / 100) * base_val
result.operation = "percentage"
ELSE
REM Try direct evaluation (single number)
result.answer = VAL(expr)
result.operation = "direct value"
END IF
REM Display result
IF result.answer <> NULL THEN
TALK "Result: " + result.answer
TALK "Operation: " + result.operation
IF result.details THEN
TALK result.details
END IF
RETURN result
ELSE
TALK "Could not calculate expression"
TALK "Supported: + - * /, power (2^3), sqrt(16), abs(-5), round/ceil/floor, percentages (15% of 200)"
RETURN NULL
END IF
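REM Example usage (hypothetical caller; tool and field names assumed): the
REM returned object carries both the numeric answer and the detected operation:
REM   r = CALCULATE "15% of 200"
REM   TALK r.answer          ' 30
REM   TALK r.operation       ' "percentage"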

View file

@ -1,69 +1,26 @@
REM General Bots: SEND EMAIL Keyword - Universal Email Sending
REM Free email sending using SMTP or email APIs
REM Can be used by ANY template that needs to send emails
PARAM to_email AS EMAIL LIKE "user@example.com" DESCRIPTION "Recipient email address"
PARAM subject AS STRING LIKE "Important Message" DESCRIPTION "Email subject line"
PARAM body AS STRING LIKE "Hello, this is the email content." DESCRIPTION "Email body content"
PARAM from_email AS EMAIL LIKE "noreply@company.com" DESCRIPTION "Sender email address" OPTIONAL
DESCRIPTION "Send an email to any recipient with subject and body"
REM Validate inputs
IF NOT to_email OR to_email = "" THEN
TALK "❌ Recipient email is required"
RETURN NULL
END IF
IF NOT subject OR subject = "" THEN
subject = "Message from General Bots"
END IF
IF NOT body OR body = "" THEN
body = "This is an automated message."
END IF
IF NOT from_email THEN
from_email = "noreply@pragmatismo.com.br"
END IF
WITH email_data
to = to_email
from = from_email
subject = subject
body = body
timestamp = NOW()
END WITH
SEND EMAIL to_email, subject, body
REM Log the sent email for auditing
SAVE "email_log.csv", email_data
TALK "Email sent to " + to_email
REM SMTP delivery is configured in .gbot settings: SMTP_HOST, SMTP_PORT, SMTP_USER, SMTP_PASSWORD
RETURN email_data
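REM Example usage (hypothetical caller): from_email is optional and falls
REM back to the default sender configured above:
REM   SEND EMAIL "student@example.com", "Welcome", "Your account is ready."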

View file

@ -1,98 +1,28 @@
REM General Bots: SEND SMS Keyword - Universal SMS Sending
REM Free SMS sending using Twilio, Nexmo, or other SMS APIs
REM Can be used by ANY template that needs to send SMS messages
PARAM phone_number AS PHONE LIKE "+1234567890" DESCRIPTION "Phone number with country code"
PARAM message AS STRING LIKE "Hello, this is your message" DESCRIPTION "SMS message content"
PARAM from_number AS PHONE LIKE "+1987654321" DESCRIPTION "Sender phone number" OPTIONAL
DESCRIPTION "Send an SMS message to any phone number"
REM Validate inputs
IF NOT phone_number OR phone_number = "" THEN
TALK "Phone number is required. Format: +[country code][number], e.g. +1234567890"
RETURN NULL
END IF
IF NOT message OR message = "" THEN
TALK "Message content is required"
RETURN NULL
END IF
IF LEFT(phone_number, 1) <> "+" THEN
TALK "Phone number should start with + and country code, e.g. +1234567890"
END IF
REM SMS messages over 160 characters are split into multiple segments
message_length = LEN(message)
segments = INT((message_length - 1) / 160) + 1
IF message_length > 160 THEN
TALK "Message will be split into " + segments + " segments"
END IF
WITH sms
to = phone_number
from = from_number
body = message
timestamp = NOW()
segmentCount = segments
END WITH
SEND SMS phone_number, message
SAVE "sms_log.csv", sms
TALK "SMS sent to " + phone_number
RETURN sms
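REM Worked example of the segment formula (assumes 160-character segments):
REM   message_length = 320 -> segments = INT((320 - 1) / 160) + 1 = 1 + 1 = 2
REM   message_length = 160 -> segments = INT(159 / 160) + 1 = 0 + 1 = 1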

View file

@ -1,104 +1,60 @@
REM General Bots: TRANSLATE Keyword - Universal Translation
REM Free translation using LibreTranslate API - No authentication required
REM Can be used by ANY template that needs translation
PARAM text AS STRING LIKE "Hello, how are you?" DESCRIPTION "Text to translate"
PARAM from_lang AS STRING LIKE "en" DESCRIPTION "Source language code (en, es, pt, fr, de, etc)" OPTIONAL
PARAM to_lang AS STRING LIKE "es" DESCRIPTION "Target language code (en, es, pt, fr, de, etc)" OPTIONAL
DESCRIPTION "Translate text between languages using a free translation API"
REM Validate input
IF NOT text OR text = "" THEN
TALK "❌ Please provide text to translate"
RETURN NULL
END IF
REM Set default languages if not provided
IF NOT from_lang THEN
from_lang = "en"
END IF
IF NOT to_lang THEN
to_lang = "es"
END IF
TALK "Translating from " + from_lang + " to " + to_lang + "..."
REM Try LibreTranslate API (free, open source)
REM Note: Public instance may have rate limits
WITH post_data
q = text
source = from_lang
target = to_lang
format = "text"
END WITH
SET HEADER "Content-Type" = "application/json"
translation_result = POST "https://libretranslate.com/translate", post_data
IF translation_result.translatedText THEN
WITH result
original = text
translated = translation_result.translatedText
from = from_lang
to = to_lang
END WITH
TALK "Original (" + from_lang + "): " + text
TALK "Translated (" + to_lang + "): " + result.translated
RETURN result
ELSE
REM Fallback: Try alternative API or show error
TALK "Translation failed. Trying alternative method..."
REM Alternative: Use MyMemory Translation API (free, no key)
mymemory_url = "https://api.mymemory.translated.net/get?q=" + text + "&langpair=" + from_lang + "|" + to_lang
fallback_result = GET mymemory_url
IF fallback_result.responseData.translatedText THEN
WITH result
original = text
translated = fallback_result.responseData.translatedText
from = from_lang
to = to_lang
confidence = fallback_result.responseData.match
END WITH
TALK "Original (" + from_lang + "): " + text
TALK "Translated (" + to_lang + "): " + result.translated
IF result.confidence THEN
TALK "Confidence: " + result.confidence
END IF
RETURN result
ELSE
TALK "Could not translate text"
TALK "Supported language codes: en, es, fr, de, it, pt, ru, ja, zh, ar, hi, ko"
RETURN NULL
END IF
END IF
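REM Example usage (hypothetical caller): language codes follow ISO 639-1:
REM   t = TRANSLATE "Good morning", "en", "pt"
REM   IF t THEN
REM       TALK t.translated
REM   END IF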

View file

@ -1,16 +1,10 @@
REM General Bots: WEATHER Keyword - Universal Weather Data
REM Free weather API using 7Timer! - No authentication required
REM Can be used by ANY template that needs weather information
PARAM location AS STRING LIKE "New York" DESCRIPTION "City or location to get weather forecast"
DESCRIPTION "Get current weather forecast for any city or location"
REM Default coordinates for common cities
lat = 40.7128
lon = -74.0060
REM Parse location to get approximate coordinates
REM In production, use geocoding API
location_lower = LCASE(location)
IF INSTR(location_lower, "new york") > 0 THEN
@ -34,24 +28,12 @@ ELSE IF INSTR(location_lower, "berlin") > 0 THEN
ELSE IF INSTR(location_lower, "madrid") > 0 THEN
lat = 40.4168
lon = -3.7038
ELSE IF INSTR(location_lower, "rome") > 0 THEN
lat = 41.9028
lon = 12.4964
ELSE IF INSTR(location_lower, "moscow") > 0 THEN
lat = 55.7558
lon = 37.6173
ELSE IF INSTR(location_lower, "beijing") > 0 THEN
lat = 39.9042
lon = 116.4074
ELSE IF INSTR(location_lower, "mumbai") > 0 THEN
lat = 19.0760
lon = 72.8777
ELSE IF INSTR(location_lower, "sao paulo") > 0 OR INSTR(location_lower, "são paulo") > 0 THEN
lat = -23.5505
lon = -46.6333
ELSE IF INSTR(location_lower, "mexico city") > 0 THEN
lat = 19.4326
lon = -99.1332
ELSE IF INSTR(location_lower, "rio") > 0 THEN
lat = -22.9068
lon = -43.1729
ELSE IF INSTR(location_lower, "los angeles") > 0 THEN
lat = 34.0522
lon = -118.2437
@ -61,38 +43,27 @@ ELSE IF INSTR(location_lower, "chicago") > 0 THEN
ELSE IF INSTR(location_lower, "toronto") > 0 THEN
lat = 43.6532
lon = -79.3832
ELSE IF INSTR(location_lower, "buenos aires") > 0 THEN
lat = -34.6037
lon = -58.3816
ELSE IF INSTR(location_lower, "cairo") > 0 THEN
lat = 30.0444
lon = 31.2357
ELSE IF INSTR(location_lower, "dubai") > 0 THEN
lat = 25.2048
lon = 55.2708
ELSE IF INSTR(location_lower, "singapore") > 0 THEN
lat = 1.3521
lon = 103.8198
END IF
REM Call Open-Meteo API (free, no key required)
weather_url = "https://api.open-meteo.com/v1/forecast?latitude=" + lat + "&longitude=" + lon + "&current_weather=true&timezone=auto"
weather_data = GET weather_url
IF weather_data.current_weather THEN
current = weather_data.current_weather
REM Interpret weather code
code = current.weathercode
condition = "Clear"
icon = "☀️"
@ -115,27 +86,27 @@ IF weather_data.current_weather THEN
ELSE IF code >= 80 AND code <= 82 THEN
condition = "Rain showers"
icon = "🌦️"
ELSE IF code >= 85 AND code <= 86 THEN
condition = "Snow showers"
icon = "🌨️"
ELSE IF code >= 95 AND code <= 99 THEN
condition = "Thunderstorm"
icon = "⛈️"
END IF
WITH result
loc = location
temperature = current.temperature
windspeed = current.windspeed
weathercode = code
cond = condition
ico = icon
END WITH
REM Display weather
TALK icon + " Weather for " + location + ":"
TALK "Temperature: " + current.temperature + "°C"
TALK "Condition: " + condition
TALK "Wind: " + current.windspeed + " km/h"
RETURN result
ELSE
TALK "Could not fetch weather for: " + location
RETURN NULL
END IF
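REM To support another city, add a branch to the coordinate lookup above
REM (Lisbon shown as an example; a geocoding API would replace this table):
REM   ELSE IF INSTR(location_lower, "lisbon") > 0 THEN
REM       lat = 38.7223
REM       lon = -9.1393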

View file

@ -1,9 +1,35 @@
PARAM name AS STRING LIKE "Abreu Silva" DESCRIPTION "Full name of the student"
PARAM birthday AS DATE LIKE "23/09/2001" DESCRIPTION "Birth date in DD/MM/YYYY format"
PARAM email AS EMAIL LIKE "abreu.silva@example.com" DESCRIPTION "Email address for contact"
PARAM personalid AS STRING LIKE "12345678900" DESCRIPTION "Personal ID number (only numbers)"
PARAM address AS STRING LIKE "Rua das Flores, 123 - SP" DESCRIPTION "Full address"
DESCRIPTION "Process student enrollment with validation and confirmation"
enrollmentid = "ENR-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
createdat = FORMAT(NOW(), "YYYY-MM-DD HH:mm:ss")
WITH enrollment
id = enrollmentid
studentName = name
birthDate = birthday
emailAddress = email
personalId = personalid
fullAddress = address
createdAt = createdat
status = "pending"
END WITH
SAVE "enrollments.csv", enrollment
SET BOT MEMORY "last_enrollment", enrollmentid
TALK "Enrollment submitted successfully!"
TALK "Enrollment ID: " + enrollmentid
TALK "Name: " + name
TALK "Email: " + email
TALK "Status: Pending review"
SEND EMAIL email, "Enrollment Confirmation", "Dear " + name + ",\n\nYour enrollment request has been submitted.\n\nEnrollment ID: " + enrollmentid + "\n\nWe will review your application and contact you soon.\n\nBest regards,\nAdmissions Team"
RETURN enrollmentid
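REM Example follow-up (hypothetical dialog; GET BOT MEMORY is an assumed
REM counterpart to SET BOT MEMORY above):
REM   last_id = GET BOT MEMORY "last_enrollment"
REM   TALK "Your latest enrollment is " + last_id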

View file

@ -1,8 +1,46 @@
ADD TOOL "enrollment"
ADD TOOL "course-info"
ADD TOOL "schedule"
ADD TOOL "grades"
ADD TOOL "tuition"
ADD TOOL "support"
USE KB "edu.gbkb"
CLEAR SUGGESTIONS
ADD SUGGESTION "enroll" AS "Enroll in a course"
ADD SUGGESTION "courses" AS "View available courses"
ADD SUGGESTION "schedule" AS "My class schedule"
ADD SUGGESTION "grades" AS "Check my grades"
ADD SUGGESTION "tuition" AS "Payment information"
ADD SUGGESTION "help" AS "Academic support"
SET CONTEXT "education" AS "You are an educational institution assistant helping with enrollment, courses, schedules, grades, and academic support. Be helpful and guide students through processes clearly."
BEGIN TALK
**Education Assistant**
Welcome! I can help you with:
Course enrollment and registration
Available courses and programs
Class schedules and calendars
Grades and transcripts
Tuition and payment info
Academic support and advising
Select an option or ask me anything.
END TALK
BEGIN SYSTEM PROMPT
You are an AI assistant for an educational institution.
Be friendly and professional.
Provide clear, accurate assistance.
Reduce administrative workload by handling common inquiries.
Help with enrollment and registration.
Provide course information and prerequisites.
Answer admissions questions.
Guide through registration process.
Explain academic policies.
END SYSTEM PROMPT

View file

@ -1,378 +1,343 @@
PARAM action AS STRING LIKE "check_stock" DESCRIPTION "Action: receive_inventory, ship_inventory, check_stock, transfer_stock, cycle_count"
PARAM item_data AS OBJECT LIKE "{po_number: 'PO-123'}" DESCRIPTION "Data object with action-specific parameters"
DESCRIPTION "Manage inventory operations - receive, ship, check stock, transfer between warehouses, and cycle counts"
user_id = GET "session.user_id"
warehouse_id = GET "session.warehouse_id"
current_time = FORMAT NOW() AS "YYYY-MM-DD HH:mm:ss"
IF action = "receive_inventory" THEN
po_number = item_data.po_number
po = FIND "purchase_orders", "po_number = '" + po_number + "'"
IF NOT po THEN
TALK "Purchase order not found."
RETURN NULL
END IF
IF po.status = "received" THEN
TALK "This PO has already been received."
RETURN NULL
END IF
po_lines = FIND "purchase_order_lines", "po_id = '" + po.id + "'"
FOR EACH line IN po_lines
item = FIND "items", "id = '" + line.item_id + "'"
TALK "Receiving " + item.name + " - Ordered: " + line.quantity_ordered
TALK "Enter quantity received:"
HEAR qty_received AS INTEGER
stock = FIND "inventory_stock", "item_id = '" + item.id + "' AND warehouse_id = '" + warehouse_id + "'"
IF NOT stock THEN
WITH newStock
id = FORMAT(GUID())
item_id = item.id
warehouse_id = warehouse_id
quantity_on_hand = qty_received
last_movement_date = NOW()
END WITH
SAVE "inventory_stock", newStock
ELSE
new_qty = stock.quantity_on_hand + qty_received
UPDATE "inventory_stock" SET quantity_on_hand = new_qty, last_movement_date = NOW() WHERE id = stock.id
END IF
WITH transaction
id = FORMAT(GUID())
transaction_type = "receipt"
transaction_number = "REC-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
item_id = item.id
warehouse_id = warehouse_id
quantity = qty_received
unit_cost = line.unit_price
total_cost = qty_received * line.unit_price
reference_type = "purchase_order"
reference_id = po.id
created_by = user_id
created_at = NOW()
END WITH
SAVE "inventory_transactions", transaction
UPDATE "purchase_order_lines" SET quantity_received = line.quantity_received + qty_received WHERE id = line.id
UPDATE "items" SET last_cost = line.unit_price WHERE id = item.id
NEXT
UPDATE "purchase_orders" SET status = "received" WHERE id = po.id
TALK "Purchase order " + po_number + " received."
SEND EMAIL po.buyer_id, "PO Received", "PO " + po_number + " received at warehouse " + warehouse_id
RETURN po_number
END IF
IF action = "ship_inventory" THEN
so_number = item_data.so_number
so = FIND "sales_orders", "order_number = '" + so_number + "'"
IF NOT so THEN
TALK "Sales order not found."
RETURN NULL
END IF
so_lines = FIND "sales_order_lines", "order_id = '" + so.id + "'"
can_ship = true
FOR EACH line IN so_lines
item = FIND "items", "id = '" + line.item_id + "'"
stock = FIND "inventory_stock", "item_id = '" + item.id + "' AND warehouse_id = '" + warehouse_id + "'"
IF NOT stock OR stock.quantity_available < line.quantity_ordered THEN
TALK "Insufficient stock for " + item.name
can_ship = false
END IF
NEXT
IF NOT can_ship THEN
TALK "Cannot ship order due to insufficient inventory."
RETURN NULL
END IF
shipment_number = "SHIP-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
FOR EACH line IN so_lines
item = FIND "items", "id = '" + line.item_id + "'"
stock = FIND "inventory_stock", "item_id = '" + item.id + "' AND warehouse_id = '" + warehouse_id + "'"
new_qty = stock.quantity_on_hand - line.quantity_ordered
UPDATE "inventory_stock" SET quantity_on_hand = new_qty, last_movement_date = NOW() WHERE id = stock.id
WITH transaction
id = FORMAT(GUID())
transaction_type = "shipment"
transaction_number = shipment_number
item_id = item.id
warehouse_id = warehouse_id
quantity = 0 - line.quantity_ordered
unit_cost = item.average_cost
total_cost = line.quantity_ordered * item.average_cost
reference_type = "sales_order"
reference_id = so.id
created_by = user_id
created_at = NOW()
END WITH
SAVE "inventory_transactions", transaction
UPDATE "sales_order_lines" SET quantity_shipped = line.quantity_ordered, cost_of_goods_sold = transaction.total_cost WHERE id = line.id
NEXT
UPDATE "sales_orders" SET status = "shipped" WHERE id = so.id
TALK "Order " + so_number + " shipped. Tracking: " + shipment_number
customer = FIND "customers", "id = '" + so.customer_id + "'"
IF customer AND customer.email THEN
SEND EMAIL customer.email, "Order Shipped", "Your order " + so_number + " has been shipped. Tracking: " + shipment_number
END IF
RETURN shipment_number
END IF
IF action = "check_stock" THEN
item_search = item_data.item_search
items = FIND "items", "name LIKE '%" + item_search + "%' OR item_code = '" + item_search + "'"
IF NOT items THEN
TALK "No items found."
RETURN NULL
END IF
FOR EACH item IN items
TALK "Item: " + item.name + " (" + item.item_code + ")"
stocks = FIND "inventory_stock", "item_id = '" + item.id + "'"
total_on_hand = 0
total_available = 0
total_reserved = 0
FOR EACH stock IN stocks
warehouse = FIND "warehouses", "id = '" + stock.warehouse_id + "'"
TALK " " + warehouse.name + ": " + stock.quantity_on_hand + " on hand"
total_on_hand = total_on_hand + stock.quantity_on_hand
total_available = total_available + stock.quantity_available
total_reserved = total_reserved + stock.quantity_reserved
NEXT
TALK " TOTAL: " + total_on_hand + " on hand, " + total_available + " available"
IF total_available < item.minimum_stock_level THEN
TALK " WARNING: Below minimum stock level (" + item.minimum_stock_level + ")"
IF item.reorder_point > 0 AND total_available <= item.reorder_point THEN
TALK " REORDER NEEDED! Qty: " + item.reorder_quantity
CREATE_TASK "Reorder " + item.name, "high", user_id
END IF
END IF
NEXT
RETURN items
END IF
IF action = "transfer_stock" THEN
TALK "Enter item code:"
HEAR item_code AS STRING
item = FIND "items", "item_code = '" + item_code + "'"
IF NOT item THEN
TALK "Item not found."
RETURN NULL
END IF
TALK "From warehouse code:"
HEAR from_warehouse_code AS STRING
from_warehouse = FIND "warehouses", "code = '" + from_warehouse_code + "'"
IF NOT from_warehouse THEN
TALK "Source warehouse not found."
RETURN NULL
END IF
from_stock = FIND "inventory_stock", "item_id = '" + item.id + "' AND warehouse_id = '" + from_warehouse.id + "'"
IF NOT from_stock THEN
TALK "No stock in source warehouse."
RETURN NULL
END IF
TALK "Available: " + from_stock.quantity_available
TALK "Transfer quantity:"
HEAR transfer_qty AS INTEGER
IF transfer_qty > from_stock.quantity_available THEN
TALK "Insufficient available quantity."
RETURN NULL
END IF
TALK "To warehouse code:"
HEAR to_warehouse_code AS STRING
to_warehouse = FIND "warehouses", "code = '" + to_warehouse_code + "'"
IF NOT to_warehouse THEN
TALK "Destination warehouse not found."
RETURN NULL
END IF
transfer_number = "TRAN-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
new_from_qty = from_stock.quantity_on_hand - transfer_qty
UPDATE "inventory_stock" SET quantity_on_hand = new_from_qty, last_movement_date = NOW() WHERE id = from_stock.id
WITH from_transaction
id = FORMAT(GUID())
transaction_type = "transfer_out"
transaction_number = transfer_number
item_id = item.id
warehouse_id = from_warehouse.id
quantity = 0 - transfer_qty
unit_cost = item.average_cost
created_by = user_id
created_at = NOW()
END WITH
SAVE "inventory_transactions", from_transaction
to_stock = FIND "inventory_stock", "item_id = '" + item.id + "' AND warehouse_id = '" + to_warehouse.id + "'"
IF NOT to_stock THEN
WITH newToStock
id = FORMAT(GUID())
item_id = item.id
warehouse_id = to_warehouse.id
quantity_on_hand = transfer_qty
last_movement_date = NOW()
END WITH
SAVE "inventory_stock", newToStock
ELSE
new_to_qty = to_stock.quantity_on_hand + transfer_qty
UPDATE "inventory_stock" SET quantity_on_hand = new_to_qty, last_movement_date = NOW() WHERE id = to_stock.id
END IF
WITH to_transaction
id = FORMAT(GUID())
transaction_type = "transfer_in"
transaction_number = transfer_number
item_id = item.id
warehouse_id = to_warehouse.id
quantity = transfer_qty
unit_cost = item.average_cost
created_by = user_id
created_at = NOW()
END WITH
SAVE "inventory_transactions", to_transaction
TALK "Transfer " + transfer_number + " completed: " + transfer_qty + " units from " + from_warehouse.name + " to " + to_warehouse.name
RETURN transfer_number
END IF
IF action = "cycle_count" THEN
TALK "Enter warehouse code:"
HEAR warehouse_code AS STRING
warehouse = FIND "warehouses", "code = '" + warehouse_code + "'"
IF NOT warehouse THEN
TALK "Warehouse not found."
    RETURN NULL
END IF
stocks = FIND "inventory_stock", "warehouse_id = '" + warehouse.id + "'"
count_number = "COUNT-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
adjustments = 0
FOR EACH stock IN stocks
item = FIND "items", "id = '" + stock.item_id + "'"
TALK "Item: " + item.name + " (" + item.item_code + ")"
TALK "System quantity: " + stock.quantity_on_hand
TALK "Enter physical count:"
HEAR physical_count AS INTEGER
IF physical_count <> stock.quantity_on_hand THEN
variance = physical_count - stock.quantity_on_hand
WITH adjustment
id = FORMAT(GUID())
transaction_type = "adjustment"
transaction_number = count_number
item_id = item.id
warehouse_id = warehouse.id
quantity = variance
notes = "Cycle count adjustment"
created_by = user_id
created_at = NOW()
END WITH
SAVE "inventory_transactions", adjustment
UPDATE "inventory_stock" SET quantity_on_hand = physical_count, last_counted_date = NOW(), last_movement_date = NOW() WHERE id = stock.id
adjustments = adjustments + 1
TALK " Adjusted by " + variance + " units"
ELSE
UPDATE "inventory_stock" SET last_counted_date = NOW() WHERE id = stock.id
TALK " Count confirmed"
END IF
NEXT
TALK "Cycle count " + count_number + " completed with " + adjustments + " adjustments"
IF adjustments > 0 THEN
notification = "Cycle count " + count_number + " completed at " + warehouse.name + " with " + adjustments + " adjustments"
SEND MAIL "inventory-manager@company.com", "Cycle Count Results", notification
SEND EMAIL "inventory-manager@company.com", "Cycle Count Results", "Cycle count " + count_number + " at " + warehouse.name + " with " + adjustments + " adjustments"
END IF
RETURN count_number
END IF
TALK "Unknown action: " + action
RETURN NULL

PARAM name AS NAME LIKE "John Smith" DESCRIPTION "Employee's full name"
PARAM email AS EMAIL LIKE "john.smith@company.com" DESCRIPTION "Work email address"
PARAM jobtitle AS STRING LIKE "Software Engineer" DESCRIPTION "Job title or position"
PARAM department AS STRING LIKE "Engineering" DESCRIPTION "Department name"
PARAM hiredate AS DATE LIKE "2024-01-15" DESCRIPTION "Employment start date (YYYY-MM-DD)"
PARAM phone AS PHONE LIKE "+1-555-123-4567" DESCRIPTION "Phone number" OPTIONAL
PARAM manageremail AS EMAIL LIKE "manager@company.com" DESCRIPTION "Manager's email address" OPTIONAL
DESCRIPTION "Add a new employee to the HR system with a unique employee number"
currentyear = FORMAT(NOW(), "YYYY")
employeenumber = "EMP" + currentyear + "-" + FORMAT(RANDOM(1000, 9999))
WITH employee
number = employeenumber
fullName = name
emailAddress = email
title = jobtitle
dept = department
startDate = hiredate
phoneNumber = phone
manager = manageremail
END WITH
SAVE "employees.csv", employee
SET BOT MEMORY "last_employee", employeenumber
hrnotification = "New employee added: " + name + " (" + employeenumber + ") - " + jobtitle + " in " + department
SEND EMAIL "hr@company.com", "New Employee Added", hrnotification
IF manageremail THEN
managernotification = "New team member:\n\nName: " + name + "\nTitle: " + jobtitle + "\nStart Date: " + hiredate
SEND EMAIL manageremail, "New Team Member: " + name, managernotification
END IF
TALK "Employee added: " + name
TALK "Employee Number: " + employeenumber
TALK "Email: " + email
TALK "Title: " + jobtitle
TALK "Department: " + department
TALK "Start Date: " + hiredate
RETURN employeenumber

ADD TOOL "add-employee"
ADD TOOL "update-employee"
ADD TOOL "search-employee"
ADD TOOL "employee-directory"
ADD TOOL "org-chart"
ADD TOOL "emergency-contacts"
USE KB "employees.gbkb"
SET CONTEXT "employee management" AS "You are an HR assistant helping manage employee information. Help with adding new employees, updating records, searching the directory, viewing org charts, and managing emergency contacts. Maintain confidentiality of employee data."
CLEAR SUGGESTIONS
ADD SUGGESTION "directory" AS "Employee directory"
ADD SUGGESTION "add" AS "Add new employee"
ADD SUGGESTION "search" AS "Search employee"
ADD SUGGESTION "org" AS "Organization chart"
ADD SUGGESTION "emergency" AS "Emergency contacts"
BEGIN TALK
**Employee Management System**
Welcome! I'm your HR assistant for managing employee information.
I can help you with:
View employee directory
Add new employees
Search for employees
View organization chart
Manage emergency contacts
Generate employee reports
Select an option or tell me what you need.
END TALK
BEGIN SYSTEM PROMPT
You are an HR assistant for the Employee Management System.
1. Helping users add, update, and search employee records
2. Maintaining data privacy and confidentiality
3. Providing accurate employee information when requested
4. Assisting with organizational structure queries
5. Managing emergency contact information
Confirm sensitive operations before executing.
Never expose salaries or personal IDs without authorization.
Use professional and helpful language.
END SYSTEM PROMPT

PARAM description AS STRING LIKE "My computer won't turn on" DESCRIPTION "Description of the IT issue or problem"
PARAM category AS STRING LIKE "hardware" DESCRIPTION "Category: hardware, software, network, email, account, other" OPTIONAL
PARAM priority AS STRING LIKE "medium" DESCRIPTION "Priority level: critical, high, medium, low" OPTIONAL
DESCRIPTION "Create a new IT support ticket with issue details and priority assignment"
useremail = GET "session.user_email"
username = GET "session.user_name"
IF description = "" THEN
TALK "I need a description to create a ticket."
RETURN
END IF
IF NOT category THEN
category = "other"
END IF
IF NOT priority THEN
priority = "medium"
END IF
ticketnumber = "TKT" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
slahours = 48
IF priority = "critical" THEN
slahours = 4
ELSE IF priority = "high" THEN
ELSE IF priority = "low" THEN
slahours = 72
END IF
assignedteam = "general-support"
IF category = "network" THEN
assignedteam = "network-team"
ELSE IF category = "hardware" THEN
assignedteam = "desktop-support"
ELSE IF category = "email" THEN
assignedteam = "messaging-team"
ELSE IF category = "account" THEN
assignedteam = "identity-team"
END IF
WITH ticket
number = ticketnumber
desc = description
cat = category
prio = priority
status = "new"
userEmail = useremail
userName = username
team = assignedteam
created = NOW()
END WITH
SAVE "tickets.csv", ticket
SET BOT MEMORY "last_ticket", ticketnumber
subject = "Ticket Created: " + ticketnumber
message = "Hello " + username + ",\n\nYour support ticket has been created.\n\nTicket: " + ticketnumber + "\nCategory: " + category + "\nPriority: " + priority + "\nExpected Response: Within " + slahours + " hours\n\nIssue:\n" + description
SEND EMAIL useremail, subject, message
teamsubject = "[" + priority + "] New Ticket: " + ticketnumber
teammessage = "New ticket from " + username + " (" + useremail + ")\n\nCategory: " + category + "\nPriority: " + priority + "\n\nDescription:\n" + description
SEND EMAIL assignedteam + "@company.com", teamsubject, teammessage
TALK "Ticket created: " + ticketnumber
TALK "Category: " + category
TALK "Priority: " + priority
TALK "Assigned Team: " + assignedteam
TALK "Expected Response: Within " + slahours + " hours"
RETURN ticketnumber

' Setup Tools
ADD TOOL "create-ticket"
ADD TOOL "check-ticket-status"
ADD TOOL "my-tickets"
ADD TOOL "update-ticket"
ADD TOOL "close-ticket"
' Setup Knowledge Base
USE KB "helpdesk.gbkb"
' Set Context
SET CONTEXT "it helpdesk" AS "You are an IT helpdesk assistant. Help users create support tickets, check ticket status, and troubleshoot common issues. Gather necessary information before creating tickets: issue description, urgency level, and affected systems."
' Setup Suggestions
CLEAR SUGGESTIONS
ADD SUGGESTION "new" AS "Report a problem"
ADD SUGGESTION "status" AS "Check ticket status"
ADD SUGGESTION "password" AS "Reset my password"
ADD SUGGESTION "vpn" AS "VPN issues"
ADD SUGGESTION "email" AS "Email not working"
ADD SUGGESTION "mytickets" AS "View my tickets"
' Welcome Message
BEGIN TALK
**IT Helpdesk Support**
Welcome! I'm your IT support assistant, available 24/7 to help you.
I can help you with:
Create a new support ticket
Check ticket status
Password resets
Network and VPN problems
Email issues
Hardware and software support
For urgent issues affecting multiple users, mention "urgent" or "critical".
What can I help you with?
END TALK
BEGIN SYSTEM PROMPT
You are an IT Helpdesk support assistant.
Priority levels:
- Critical: System down, security breach, multiple users affected
- High: Single user unable to work, deadline impact
- Medium: Issue with workaround available
- Low: Minor inconvenience, feature requests
Before creating a ticket, collect:
- Clear description of the issue
- When the issue started
- Error messages if any
- Steps already tried
Try to resolve simple issues using the knowledge base before creating tickets.
END SYSTEM PROMPT

PARAM cod AS STRING LIKE "12345" DESCRIPTION "Case number to load and query"
DESCRIPTION "Load a legal case document by case number for Q&A and analysis"
text = GET "case-" + cod + ".pdf"
IF text THEN
SET CONTEXT "Based on this document, answer the person's questions:\n\n" + text
SET ANSWER MODE "document"
TALK "Case ${cod} loaded. Ask me anything about the case or request a summary."
ELSE
TALK "Case not found. Please check the case number."
END IF

PARAM product AS STRING LIKE "fax" DESCRIPTION "Name of the product to get price for"
DESCRIPTION "Get the price of a product by name from the product catalog"
productRecord = FIND "products.csv", "name = ${product}"
IF productRecord THEN
RETURN productRecord.price
ELSE
RETURN -1
END IF

ADD TOOL "get-price"
USE KB "products.gbkb"
CLEAR SUGGESTIONS
ADD SUGGESTION "price" AS "Check product price"
ADD SUGGESTION "products" AS "View products"
ADD SUGGESTION "help" AS "How to use"
BEGIN TALK
**Product Assistant**
I can help you check product prices and information.
Just ask me about any product and I'll look it up for you.
END TALK
BEGIN SYSTEM PROMPT
You are a product assistant with access to internal tools.
When get-price returns -1, the product does not exist.
When asked about a price, use the get-price tool and return the result.
Do not expose tool names to users - just act on their requests naturally.
END SYSTEM PROMPT
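' Hypothetical walk-through of the prompt above (the product name is only an
' example; actual lookups depend on the rows present in products.csv):
'   user:  "How much is the fax?"
'   model: invokes the get-price tool with product = "fax"
'   tool:  FIND "products.csv", "name = fax" returns the price, or -1
'   reply: the price, or a note that no such product exists when -1 is returned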

ADD TOOL "calendar"
ADD TOOL "tasks"
ADD TOOL "documents"
ADD TOOL "meetings"
ADD TOOL "notes"
CLEAR SUGGESTIONS
ADD SUGGESTION "manager" AS "Manager access"
ADD SUGGESTION "developer" AS "Developer access"
ADD SUGGESTION "customer" AS "Customer support"
ADD SUGGESTION "hr" AS "HR resources"
ADD SUGGESTION "finance" AS "Finance tools"
' Get user role from session or directory
role = GET role
' If no role set, ask the user
IF NOT role THEN
TALK "Welcome to the Office Assistant!"
TALK "Please select your role:"
ADD SUGGESTION "Manager"
ADD SUGGESTION "Developer"
ADD SUGGESTION "Customer"
ADD SUGGESTION "HR"
ADD SUGGESTION "Finance"
HEAR role AS NAME
role = LOWER(role)
SET role, role
END IF
' Route to appropriate knowledge bases based on role
SWITCH role
CASE "manager"
SET CONTEXT "You are an executive assistant helping managers with reports, team management, and strategic decisions."
USE KB "management"
USE KB "reports"
USE KB "team-policies"
TALK "Welcome, Manager! I can help with reports, team management, and policies."
CASE "developer"
SET CONTEXT "You are a technical assistant helping developers with documentation, APIs, and coding best practices."
USE KB "documentation"
USE KB "apis"
USE KB "coding-standards"
TALK "Welcome, Developer! I can help with documentation, APIs, and development guidelines."
CASE "customer"
SET CONTEXT "You are a customer service assistant. Be helpful, friendly, and focus on resolving customer issues."
USE KB "products"
USE KB "support"
USE KB "faq"
TALK "Welcome! How can I assist you today?"
CASE "hr"
SET CONTEXT "You are an HR assistant helping with employee matters, policies, and benefits."
USE KB "hr-policies"
USE KB "benefits"
USE KB "onboarding"
TALK "Welcome, HR! I can help with policies, benefits, and onboarding."
CASE "finance"
SET CONTEXT "You are a finance assistant helping with budgets, expenses, and financial reports."
USE KB "budgets"
USE KB "expenses"
USE KB "financial-reports"
TALK "Welcome, Finance! I can help with budgets, expenses, and reporting."
DEFAULT
SET CONTEXT "You are a general office assistant. Help users with common office tasks and direct them to appropriate resources."
USE KB "general"
USE KB "faq"
TALK "Welcome! I'm your office assistant. How can I help?"
END SWITCH
' Set up suggestions based on role
CLEAR SUGGESTIONS
SWITCH role
CASE "manager"
ADD SUGGESTION "performance" AS "Team performance"
ADD SUGGESTION "report" AS "Generate report"
ADD SUGGESTION "meeting" AS "Schedule meeting"
CASE "developer"
ADD SUGGESTION "docs" AS "Search documentation"
ADD SUGGESTION "api" AS "API reference"
ADD SUGGESTION "review" AS "Code review checklist"
CASE "customer"
ADD SUGGESTION "order" AS "Track my order"
ADD SUGGESTION "product" AS "Product information"
ADD SUGGESTION "support" AS "Contact support"
CASE "hr"
ADD SUGGESTION "handbook" AS "Employee handbook"
ADD SUGGESTION "benefits" AS "Benefits overview"
ADD SUGGESTION "onboard" AS "New hire checklist"
CASE "finance"
ADD SUGGESTION "expense" AS "Expense policy"
ADD SUGGESTION "budget" AS "Budget status"
ADD SUGGESTION "approval" AS "Approval workflow"
DEFAULT
ADD SUGGESTION "help" AS "Help"
ADD SUGGESTION "directory" AS "Contact directory"
ADD SUGGESTION "hours" AS "Office hours"
END SWITCH
BEGIN SYSTEM PROMPT
You are a role-based office assistant.
Current user role: ${role}
Adapt your responses and suggestions based on the user's role.
Maintain professional and helpful communication.
Route complex requests to appropriate specialists when needed.
END SYSTEM PROMPT

PARAM when AS STRING LIKE "tomorrow at 9am" DESCRIPTION "When to send the reminder (date/time)"
PARAM subject AS STRING LIKE "Call John about project" DESCRIPTION "What to be reminded about"
PARAM notify AS STRING LIKE "email" DESCRIPTION "Notification method: email, sms, or chat" OPTIONAL
DESCRIPTION "Create a reminder for a specific date and time with notification"
IF NOT notify THEN
notify = "chat"
END IF
reminderid = "REM-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
useremail = GET "session.user_email"
userphone = GET "session.user_phone"
WITH reminder
id = reminderid
remindAt = when
message = subject
notifyBy = notify
email = useremail
phone = userphone
created = NOW()
status = "pending"
END WITH
SAVE "reminders.csv", reminder
SET BOT MEMORY "last_reminder", reminderid
TALK "Reminder set: " + subject
TALK "When: " + when
TALK "Notification: " + notify
RETURN reminderid

ADD TOOL "add-reminder"
ADD TOOL "list-reminders"
ADD TOOL "delete-reminder"
ADD TOOL "snooze-reminder"
USE KB "reminder.gbkb"
CLEAR SUGGESTIONS
ADD SUGGESTION "add" AS "Add a reminder"
ADD SUGGESTION "list" AS "View my reminders"
ADD SUGGESTION "today" AS "Today's reminders"
ADD SUGGESTION "delete" AS "Delete a reminder"
SET CONTEXT "reminders" AS "You are a reminder assistant helping users manage their tasks and reminders. Help with creating, viewing, and managing reminders. Be helpful and confirm actions."
BEGIN TALK
**Reminder Assistant**
I can help you with:
Create new reminders
View your reminders
Manage and snooze reminders
Delete completed reminders
What would you like to do?
END TALK
BEGIN SYSTEM PROMPT
You are a reminder AI assistant.
When creating reminders:
- Parse natural language dates (tomorrow, next week, in 2 hours)
- Confirm the reminder details before saving
- Suggest appropriate times if not specified
When listing reminders:
- Show upcoming reminders first
- Highlight overdue items
- Group by date when appropriate
Be concise and helpful.
END SYSTEM PROMPT

PARAM action AS STRING LIKE "capture" DESCRIPTION "Action: capture, qualify, convert, follow_up, nurture"
PARAM lead_data AS OBJECT LIKE "{name: 'John', email: 'john@company.com'}" DESCRIPTION "Lead information object"
DESCRIPTION "Manage leads through the sales pipeline - capture, qualify, convert, follow up, and nurture"
lead_id = GET "session.lead_id"
user_id = GET "session.user_id"
IF action = "capture" THEN
WITH new_lead
id = FORMAT(GUID())
name = lead_data.name
email = lead_data.email
phone = lead_data.phone
company = lead_data.company
source = lead_data.source
status = "new"
score = 0
created_at = NOW()
assigned_to = user_id
END WITH
SAVE "leads.csv", new_lead
SET "session.lead_id" = new_lead.id
SET "session.lead_status" = "captured"
REMEMBER "lead_" + new_lead.id = new_lead
TALK "Thank you " + lead_name + "! I've captured your information."
SET "session.lead_id", new_lead.id
SET "session.lead_status", "captured"
SET BOT MEMORY "lead_" + new_lead.id, new_lead.name
TALK "Thank you " + new_lead.name + "! I've captured your information."
RETURN new_lead.id
END IF
IF action = "qualify" THEN
lead = FIND "leads.csv", "id = '" + lead_id + "'"
IF NOT lead THEN
TALK "No lead found to qualify."
RETURN NULL
END IF
score = 0
@ -58,24 +43,24 @@ IF action = "qualify" THEN
TALK "I need to ask you a few questions to better assist you."
TALK "What is your company's annual revenue range?"
ADD SUGGESTION "1" AS "Under $1M"
ADD SUGGESTION "2" AS "$1M - $10M"
ADD SUGGESTION "3" AS "$10M - $50M"
ADD SUGGESTION "4" AS "Over $50M"
HEAR revenue_answer AS INTEGER
IF revenue_answer = 4 THEN
score = score + 30
ELSE IF revenue_answer = 3 THEN
score = score + 20
ELSE IF revenue_answer = 2 THEN
score = score + 10
ELSE
score = score + 5
END IF
TALK "How many employees does your company have?"
HEAR employees AS INTEGER
IF employees > 500 THEN
score = score + 25
@ -88,26 +73,24 @@ IF action = "qualify" THEN
END IF
TALK "What is your timeline for making a decision?"
ADD SUGGESTION "1" AS "This month"
ADD SUGGESTION "2" AS "This quarter"
ADD SUGGESTION "3" AS "This year"
ADD SUGGESTION "4" AS "Just researching"
HEAR timeline AS INTEGER
IF timeline = 1 THEN
score = score + 30
ELSE IF timeline = 2 THEN
score = score + 20
ELSE IF timeline = 3 THEN
score = score + 10
ELSE
score = score + 0
END IF
TALK "Do you have budget allocated for this?"
HEAR has_budget AS BOOLEAN
IF has_budget THEN
score = score + 25
ELSE
score = score + 5
@ -122,156 +105,151 @@ IF action = "qualify" THEN
lead_status = "cold"
END IF
WITH qualification
lead_id = lead_id
score = score
status = lead_status
qualified_at = NOW()
revenue_range = revenue_answer
employees = employees
timeline = timeline
has_budget = has_budget
END WITH
SAVE "lead_qualification.csv", qualification
SET BOT MEMORY "lead_score_" + lead_id, score
SET BOT MEMORY "lead_status_" + lead_id, lead_status
IF lead_status = "hot" THEN
TALK "Great! You're a perfect fit for our solution. Let me connect you with a specialist."
SEND EMAIL "sales@company.com", "Hot Lead Alert", "Hot lead alert: " + lead.name + " from " + lead.company + " - Score: " + score
CREATE_TASK "Follow up with hot lead " + lead.name, "high", user_id
ELSE IF lead_status = "warm" THEN
TALK "Thank you! Based on your needs, I'll have someone reach out within 24 hours."
CREATE_TASK "Contact warm lead " + lead.name, "medium", user_id
ELSE
TALK "Thank you for your time. I'll send you some helpful resources via email."
END IF
RETURN score
END IF
IF action = "convert" THEN
lead = FIND "leads.csv", "id = '" + lead_id + "'"
IF NOT lead THEN
TALK "No lead found to convert."
RETURN NULL
END IF
IF lead.status = "unqualified" OR lead.status = "cold" THEN
TALK "This lead needs to be qualified first."
RETURN NULL
END IF
WITH account
id = FORMAT(GUID())
name = lead.company
type = "customer"
owner_id = user_id
created_from_lead = lead_id
created_at = NOW()
END WITH
SAVE "accounts.csv", account
WITH contact
id = FORMAT(GUID())
account_id = account.id
name = lead.name
email = lead.email
phone = lead.phone
primary_contact = true
created_from_lead = lead_id
created_at = NOW()
END WITH
SAVE "contacts.csv", contact
WITH opportunity
id = FORMAT(GUID())
name = "Opportunity for " + account.name
account_id = account.id
contact_id = contact.id
stage = "qualification"
probability = 20
owner_id = user_id
lead_source = lead.source
created_at = NOW()
END WITH
SAVE "opportunities.csv", opportunity
UPDATE "leads.csv" SET status = "converted", converted_at = NOW(), converted_to_account_id = account.id WHERE id = lead_id
SET BOT MEMORY "account_" + account.id, account.name
SET "session.account_id", account.id
SET "session.contact_id", contact.id
SET "session.opportunity_id", opportunity.id
TALK "Lead converted to account: " + account.name
SEND EMAIL user_id, "Lead Conversion", "Lead converted: " + lead.name + " to account " + account.name
CREATE_TASK "Initial meeting with " + contact.name, "high", user_id
RETURN account.id
END IF
IF action = "follow_up" THEN
lead = FIND "leads.csv", "id = '" + lead_id + "'"
IF NOT lead THEN
TALK "No lead found."
RETURN NULL
END IF
last_contact = GET BOT MEMORY "lead_last_contact_" + lead_id
days_since = 0
IF last_contact THEN
days_since = DATEDIFF(last_contact, NOW(), "day")
END IF
IF days_since > 7 OR NOT last_contact THEN
subject = "Following up on your inquiry"
message = "Hi " + lead.name + ",\n\nI wanted to follow up on your recent inquiry about our services."
SEND EMAIL lead.email, subject, message
WITH activity
id = FORMAT(GUID())
type = "email"
subject = subject
lead_id = lead_id
created_at = NOW()
END WITH
SAVE "activities.csv", activity
SET BOT MEMORY "lead_last_contact_" + lead_id, NOW()
TALK "Follow-up email sent to " + lead.name
RETURN "sent"
ELSE
TALK "Lead was contacted " + days_since + " days ago. Too soon for follow-up."
RETURN "skipped"
END IF
END IF
IF action = "nurture" THEN
leads = FIND "leads.csv", "status = 'warm' OR status = 'cold'"
count = 0
FOR EACH lead IN leads
days_old = DATEDIFF(lead.created_at, NOW(), "day")
content = NULL
IF days_old = 3 THEN
content = "5 Tips to Improve Your Business"
ELSE IF days_old = 7 THEN
@ -280,14 +258,18 @@ IF action = "nurture" THEN
content = "Free Consultation Offer"
ELSE IF days_old = 30 THEN
content = "Special Limited Time Offer"
END IF
IF content THEN
SEND EMAIL lead.email, content, "Nurture content for day " + days_old
SET BOT MEMORY "lead_nurture_" + lead.id + "_day_" + days_old, "sent"
count = count + 1
END IF
NEXT
TALK "Nurture campaign processed: " + count + " emails sent"
RETURN count
END IF
TALK "Unknown action: " + action
RETURN NULL

View file

@ -1,25 +1,54 @@
PARAM to AS EMAIL LIKE "client@company.com" DESCRIPTION "Email address to send proposal to"
PARAM template AS STRING LIKE "proposal-template.docx" DESCRIPTION "Proposal template file to use"
PARAM opportunity AS STRING LIKE "OPP-12345" DESCRIPTION "Opportunity ID to link proposal to"
DESCRIPTION "Generate and send a proposal document based on opportunity and conversation history"
company = QUERY "SELECT Company FROM Opportunities WHERE Id = ${opportunity}"
IF NOT company THEN
TALK "Could not find opportunity. Please provide a valid opportunity ID."
RETURN NULL
END IF
doc = FILL template
' Generate email subject and content based on conversation history
subject = REWRITE "Based on this ${history}, generate a subject for a proposal email to ${company}"
contents = REWRITE "Based on this ${history}, and ${subject}, generate the email body for ${to}, signed by ${user}, including key points from our proposal"
' Add proposal to CRM
proposalpath = ".gbdrive/Proposals/${company}-proposal.docx"
CALL "/files/upload", proposalpath, doc
CALL "/files/permissions", proposalpath, "sales-team", "edit"
' Record activity in CRM
WITH activity
opportunityId = opportunity
type = "email_sent"
subject = subject
description = "Proposal sent to " + company
date = NOW()
END WITH
CALL "/crm/activities/create", activity
' Send the email
CALL "/comm/email/send", to, subject, contents, doc
WITH proposalLog
timestamp = NOW()
opp = opportunity
companyName = company
recipient = to
templateUsed = template
status = "sent"
END WITH
SAVE "proposal_log.csv", proposalLog
SET BOT MEMORY "last_proposal", opportunity
TALK "Proposal sent to " + to
TALK "Company: " + company
TALK "Template: " + template
TALK "Opportunity: " + opportunity
RETURN opportunity

View file

@ -1,40 +1,54 @@
PARAM message AS STRING LIKE "Olá {name}, confira nossas novidades!" DESCRIPTION "Message to broadcast, supports {name} and {telefone} variables"
PARAM template_file AS FILE LIKE "header.jpg" DESCRIPTION "Header image file for the template"
PARAM list_file AS FILE LIKE "contacts.xlsx" DESCRIPTION "File with contacts (must have telefone column)"
PARAM filter AS STRING LIKE "Perfil=VIP" DESCRIPTION "Filter condition for contact list" OPTIONAL
DESCRIPTION "Send marketing broadcast message to a filtered contact list via WhatsApp template"
report = LLM "Esta mensagem será aprovada pelo WhatsApp META como Template? Responda OK se sim, ou explique o problema: " + message
IF report <> "OK" THEN
TALK "Atenção: " + report
END IF
IF filter THEN
list = FIND list_file, filter
ELSE
list = FIND list_file
END IF
IF UBOUND(list) = 0 THEN
TALK "Nenhum contato encontrado."
RETURN 0
END IF
PUBLISH
SET MAX LINES 2020
index = 1
sent = 0
DO WHILE index < UBOUND(list)
row = list[index]
SEND TEMPLATE TO row.telefone, template_file
WAIT 0.1
WITH logEntry
timestamp = NOW()
phone = row.telefone
name = row.name
status = "sent"
END WITH
SAVE "broadcast_log.csv", logEntry
sent = sent + 1
index = index + 1
LOOP
TALK "Broadcast enviado para " + sent + " contatos."
RETURN sent

View file

@ -1,12 +1,62 @@
PARAM customer_name AS NAME LIKE "João Silva" DESCRIPTION "Customer name for the order"
PARAM items AS OBJECT LIKE "[{id: 1, qty: 2}]" DESCRIPTION "JSON array of items with product id and quantity"
DESCRIPTION "Complete checkout and finalize the sale with customer and cart items"
IF UBOUND(items) = 0 THEN
TALK "Your cart is empty. Please add items before checkout."
RETURN NULL
END IF
orderid = "ORD-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
total = 0
orderitems = []
FOR EACH item IN items
product = FIND "products.csv", "id = ${item.id}"
IF product THEN
subtotal = product.price * item.qty
total = total + subtotal
WITH orderitem
product_id = item.id
name = product.name
qty = item.qty
price = product.price
subtotal = subtotal
END WITH
orderitems[UBOUND(orderitems)] = orderitem
END IF
NEXT
IF total = 0 THEN
TALK "No valid products found in cart."
RETURN NULL
END IF
WITH order
id = orderid
customer = customer_name
totalValue = total
status = "pending"
created = NOW()
END WITH
SAVE "orders.csv", order
SAVE "order_items.csv", orderid, TOJSON(orderitems)
SET BOT MEMORY "last_order", orderid
TALK "Order confirmed: " + orderid
TALK "Customer: " + customer_name
FOR EACH orderitem IN orderitems
TALK "- " + orderitem.name + " x" + orderitem.qty + " = $" + FORMAT(orderitem.subtotal, "#,##0.00")
NEXT
TALK "Total: $" + FORMAT(total, "#,##0.00")
RETURN orderid

View file

@ -1,5 +1,44 @@
ADD TOOL "checkout"
ADD TOOL "search-product"
ADD TOOL "add-to-cart"
ADD TOOL "view-cart"
ADD TOOL "track-order"
ADD TOOL "product-details"
data = FIND "products.csv"
CLEAR SUGGESTIONS
ADD SUGGESTION "products" AS "View products"
ADD SUGGESTION "cart" AS "View my cart"
ADD SUGGESTION "checkout" AS "Checkout"
ADD SUGGESTION "orders" AS "Track my order"
ADD SUGGESTION "help" AS "Shopping help"
SET CONTEXT "store" AS "You are a virtual store sales assistant. Help customers browse products, add items to cart, and complete purchases. Be friendly and helpful. Available products: ${TOJSON(data)}"
BEGIN TALK
**Virtual Store**
Welcome! I can help you with:
Browse our product catalog
Add items to your cart
Complete your purchase
Track your orders
Select an option or tell me what you're looking for.
END TALK
BEGIN SYSTEM PROMPT
You are a friendly sales assistant in our virtual store.
Welcome customers warmly.
Help them find products.
Provide clear product information.
Guide through purchase process.
Offer assistance when needed.
Product catalog is available in context.
Suggest related products when appropriate.
Confirm items before adding to cart.
END SYSTEM PROMPT
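The start script above registers tools such as "add-to-cart" and "view-cart" whose implementations are not part of this excerpt. A minimal sketch of what the `add-to-cart` tool could look like, under the assumption that carts are stored in a `cart.csv` keyed by session user (file and column names are illustrative, not from this commit):

```basic
REM Sketch only: a hypothetical add-to-cart tool backing the ADD TOOL call above
PARAM product_id AS INTEGER LIKE "1" DESCRIPTION "Product id from products.csv"
PARAM qty AS INTEGER LIKE "2" DESCRIPTION "Quantity to add"
DESCRIPTION "Add a product to the current user's cart"
product = FIND "products.csv", "id = ${product_id}"
IF NOT product THEN
TALK "Product not found."
RETURN NULL
END IF
WITH cartitem
user_id = GET "session.user_id"
product_id = product_id
name = product.name
qty = qty
price = product.price
added_at = NOW()
END WITH
SAVE "cart.csv", cartitem
TALK "Added " + qty + " x " + product.name + " to your cart."
RETURN product_id
```

The checkout script shown earlier would then read these rows back to compute totals.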

View file

@ -1,15 +1,50 @@
ADD TOOL "query-data"
ADD TOOL "create-chart"
ADD TOOL "export-data"
ADD TOOL "notify-latest-orders"
SET ANSWER MODE "sql"
CLEAR SUGGESTIONS
ADD SUGGESTION "products" AS "Top products chart"
ADD SUGGESTION "sales" AS "Sales across years"
ADD SUGGESTION "orders" AS "Latest orders"
ADD SUGGESTION "chart" AS "Create a chart"
ADD SUGGESTION "export" AS "Export data"
SET CONTEXT "talk-to-data" AS "You are a data analyst assistant helping users query and visualize their data. Convert natural language questions into SQL queries and generate charts. Be helpful and suggest visualizations."
BEGIN TALK
**Talk To Data**
I can help you analyze your data with natural language queries.
**Examples:**
Show me top products in a rainbow colored pie chart
Sales across years
Latest orders this month
Compare revenue by region
Just ask me anything about your data.
END TALK
BEGIN SYSTEM PROMPT
You are a data analysis assistant that converts natural language to SQL queries.
Chart types:
- timeseries: For data over time
- bar: For comparisons
- pie/donut: For proportions
- line: For trends
When users ask about data:
1. Understand the intent
2. Generate appropriate SQL
3. Suggest relevant visualizations
4. Offer to export if needed
Always use LOWER() for text comparisons.
Use LIKE with %% for partial matches.
Return clear, actionable insights.
END SYSTEM PROMPT

View file

@ -0,0 +1,93 @@
REM Vector Database Statistics Dialog
REM Provides knowledge base statistics and management for administrators
REM Can be used in admin bots or regular .gbdialog files
DESCRIPTION "Knowledge base statistics and vector database management"
REM Get overall KB statistics
stats = KB STATISTICS
statsObj = JSON PARSE stats
TALK "📊 **Knowledge Base Statistics**"
TALK ""
TALK "**Collections:** " + statsObj.total_collections
TALK "**Total Documents:** " + FORMAT(statsObj.total_documents, "#,##0")
TALK "**Total Vectors:** " + FORMAT(statsObj.total_vectors, "#,##0")
TALK "**Disk Usage:** " + FORMAT(statsObj.total_disk_size_mb, "#,##0.00") + " MB"
TALK "**RAM Usage:** " + FORMAT(statsObj.total_ram_size_mb, "#,##0.00") + " MB"
TALK ""
TALK "📅 **Recent Activity**"
TALK "Documents added last 7 days: " + FORMAT(statsObj.documents_added_last_week, "#,##0")
TALK "Documents added last 30 days: " + FORMAT(statsObj.documents_added_last_month, "#,##0")
REM Show collection details
ADD SUGGESTION "View Collections"
ADD SUGGESTION "Check Storage"
ADD SUGGESTION "Recent Documents"
ADD SUGGESTION "Exit"
HEAR choice AS MENU "View Collections", "Check Storage", "Recent Documents", "Exit"
SELECT CASE choice
CASE "View Collections"
collections = KB LIST COLLECTIONS
IF LEN(collections) = 0 THEN
TALK "No collections found for this bot."
ELSE
TALK "📁 **Your Collections:**"
TALK ""
FOR EACH collection IN collections
collectionStats = KB COLLECTION STATS collection
collObj = JSON PARSE collectionStats
TALK "**" + collObj.name + "**"
TALK " • Documents: " + FORMAT(collObj.points_count, "#,##0")
TALK " • Vectors: " + FORMAT(collObj.vectors_count, "#,##0")
TALK " • Status: " + collObj.status
TALK " • Disk: " + FORMAT(collObj.disk_data_size / 1048576, "#,##0.00") + " MB"
TALK ""
NEXT
END IF
CASE "Check Storage"
storageSize = KB STORAGE SIZE
documentsCount = KB DOCUMENTS COUNT
TALK "💾 **Storage Overview**"
TALK ""
TALK "Total storage used: " + FORMAT(storageSize, "#,##0.00") + " MB"
TALK "Total documents indexed: " + FORMAT(documentsCount, "#,##0")
IF documentsCount > 0 THEN
avgSize = storageSize / documentsCount
TALK "Average per document: " + FORMAT(avgSize * 1024, "#,##0.00") + " KB"
END IF
CASE "Recent Documents"
lastWeek = KB DOCUMENTS ADDED SINCE 7
lastMonth = KB DOCUMENTS ADDED SINCE 30
lastDay = KB DOCUMENTS ADDED SINCE 1
TALK "📈 **Document Activity**"
TALK ""
TALK "Added in last 24 hours: " + FORMAT(lastDay, "#,##0")
TALK "Added in last 7 days: " + FORMAT(lastWeek, "#,##0")
TALK "Added in last 30 days: " + FORMAT(lastMonth, "#,##0")
IF lastWeek > 0 THEN
dailyAvg = lastWeek / 7
TALK ""
TALK "Daily average (7 days): " + FORMAT(dailyAvg, "#,##0.0") + " documents"
END IF
CASE "Exit"
TALK "Thank you for using KB Statistics. Goodbye!"
END SELECT
REM Store statistics in bot memory for dashboard
SET BOT MEMORY "kb_last_check", NOW()
SET BOT MEMORY "kb_total_docs", statsObj.total_documents
SET BOT MEMORY "kb_storage_mb", statsObj.total_disk_size_mb
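The statistics dialog above runs interactively; the same keywords can also feed an unattended report. A minimal sketch of a nightly digest (the SET SCHEDULE cron line and the recipient address are assumptions, not part of this commit):

```basic
REM Sketch only: nightly KB growth report using the new statistics keywords
SET SCHEDULE "0 7 * * *"
added = KB DOCUMENTS ADDED SINCE 1
docs = KB DOCUMENTS COUNT
storage = KB STORAGE SIZE
body = "Documents added yesterday: " + FORMAT(added, "#,##0")
body = body + " | Total documents: " + FORMAT(docs, "#,##0")
body = body + " | Storage: " + FORMAT(storage, "#,##0.00") + " MB"
SEND EMAIL "admin@example.com", "Daily KB report", body
```

Pairing this with the `kb_*` bot-memory values written above would let a dashboard show growth between checks.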

View file

@ -1 +1,20 @@
PARAM phone AS PHONE LIKE "122233333333" DESCRIPTION "WhatsApp phone number with country code"
PARAM template AS STRING LIKE "newsletter-zap.txt" DESCRIPTION "Template file name to send"
PARAM variables AS OBJECT LIKE "{name: 'John'}" DESCRIPTION "Template variables for personalization" OPTIONAL
DESCRIPTION "Send a WhatsApp template message to a phone number"
SEND TEMPLATE TO phone, template, variables
WITH log
timestamp = NOW()
phoneNumber = phone
templateFile = template
status = "sent"
END WITH
SAVE "whatsapp_log.csv", log
TALK "WhatsApp message sent to " + phone
RETURN phone