Introduces IMAGE, VIDEO, AUDIO, and SEE keywords for BASIC scripts that
connect to the botmodels service for AI-powered media generation and
vision/captioning capabilities.
- Add BotModelsClient for HTTP communication with the botmodels service (sketched below)
- Implement BASIC keywords: IMAGE, VIDEO, AUDIO (generation), SEE (captioning)
- Support model configuration via config.csv
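A minimal sketch of what such a client could look like, assuming a reqwest-based HTTP client; the endpoint layout, request/response fields, and `GenerateResponse` shape are assumptions, not the actual botmodels API.

```rust
use reqwest::Client;
use serde::{Deserialize, Serialize};

#[derive(Serialize)]
struct GenerateRequest<'a> {
    prompt: &'a str,
    model: &'a str, // model name resolved from config.csv
}

#[derive(Deserialize)]
struct GenerateResponse {
    url: String, // location of the generated asset (assumed field)
}

pub struct BotModelsClient {
    http: Client,
    base_url: String,
}

impl BotModelsClient {
    pub fn new(base_url: impl Into<String>) -> Self {
        Self { http: Client::new(), base_url: base_url.into() }
    }

    /// POST a prompt to the botmodels service; `kind` would be "image",
    /// "video", or "audio" (assumed route layout).
    pub async fn generate(
        &self,
        kind: &str,
        prompt: &str,
        model: &str,
    ) -> Result<GenerateResponse, reqwest::Error> {
        self.http
            .post(format!("{}/generate/{kind}", self.base_url))
            .json(&GenerateRequest { prompt, model })
            .send()
            .await?
            .error_for_status()?
            .json::<GenerateResponse>()
            .await
    }
}
```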
Corrected references from .vbs to .bas files and fixed the USE_WEBSITE
keyword naming. Also added missing fields to the API response structure
and clarified that start.bas is optional for bots.
Include the model parameter in LLM provider calls across the automation, bot, and keyword modules so the correct model is selected based on configuration. This improves flexibility and consistency in LLM usage.
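A minimal sketch of the threading, assuming a provider trait along these lines (the trait and field names are hypothetical): the configured model is passed explicitly at each call site instead of being hardcoded in the provider.

```rust
#[async_trait::async_trait]
pub trait LlmProvider {
    async fn complete(&self, prompt: &str, model: &str) -> anyhow::Result<String>;
}

pub struct BotConfig {
    pub model: String, // model name loaded from configuration
}

pub async fn answer(
    provider: &dyn LlmProvider,
    cfg: &BotConfig,
    prompt: &str,
) -> anyhow::Result<String> {
    // Automation, bot, and keyword modules all pass the configured model,
    // so every call site selects the same model.
    provider.complete(prompt, &cfg.model).await
}
```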
- Renamed `execute_compact_prompt` to `compact_prompt_for_bots` and simplified logic
- Removed redundant comments and empty lines in test files
- Consolidated prompt compaction threshold handling
- Cleaned up UI logging implementation by removing unnecessary whitespace
- Improved code organization in ui_tree module
The changes focus on code quality improvements, removing clutter, and making the prompt compaction logic more straightforward. Test files were cleaned up to be more concise.
- Update prompt formatting in BotOrchestrator to use clearer labels (SYSTEM/CONTEXT) with emphasis markers (sketched below)
- Remove unused token_ratio field from SystemMetrics struct
- Increase default context size (2048->4096) and prediction length (512->1024) in config
- Clean up metrics calculation by removing redundant token ratio computation
The changes improve readability of system prompts and simplify metrics collection while increasing default model capacity.
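A sketch of the labeled layout; the commit only states SYSTEM/CONTEXT labels with emphasis markers, so the exact markers here are an assumption.

```rust
fn build_prompt(system: &str, context: &str, user: &str) -> String {
    // Labeled sections make it obvious to the model (and to anyone reading
    // logs) where instructions end and retrieved context begins.
    format!("**SYSTEM**\n{system}\n\n**CONTEXT**\n{context}\n\n{user}")
}
```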
- Simplified auth module by removing unused imports and code
- Cleaned up shared models by removing unused structs (Organization, User, Bot, etc.)
- Updated add-req.sh to comment out unused directories
- Modified LLM fallback strategy in README with additional notes about model behaviors
The changes remove unused code and improve documentation while maintaining existing functionality: the auth module was significantly reduced, similar cleanup was applied to shared models, and the build script now reflects the directories currently in use.
Added interaction count tracking for sessions, backed by Redis with an in-memory fallback. Implemented conversation-history replacement to compact and update message history. The changes include (a sketch follows the list):
- New AtomicUsize counter in SessionManager for interaction tracking
- increment_and_get_interaction_count method with Redis support
- replace_conversation_history to update and compact message history
- Maintains existing functionality while adding new features
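A sketch of the counter's Redis-first, in-memory-fallback behavior; the key naming and the surrounding `SessionManager` shape are assumptions.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

pub struct SessionManager {
    interaction_count: AtomicUsize,
    redis: Option<redis::Client>,
}

impl SessionManager {
    pub fn increment_and_get_interaction_count(&self, session_id: &str) -> usize {
        if let Some(client) = &self.redis {
            if let Ok(mut con) = client.get_connection() {
                // INCR is atomic on the Redis side and survives restarts.
                if let Ok(n) = redis::cmd("INCR")
                    .arg(format!("session:{session_id}:interactions"))
                    .query::<i64>(&mut con)
                {
                    return n as usize;
                }
            }
        }
        // Fallback: process-local atomic counter.
        self.interaction_count.fetch_add(1, Ordering::SeqCst) + 1
    }
}
```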
- Reformatted the `update_session_context` call for better readability.
- Moved context-change (type 4) handling to a later stage in the processing pipeline and removed the early return so subsequent steps still run.
- Adjusted deduplication logic formatting and clarified its condition.
- Restored saving of user messages after context handling, preserving message history.
- Added detailed logging of the LLM prompt for debugging.
- Simplified JSON extraction for `message_type` and applied minor whitespace clean-ups.
- Overall, the refactor improves maintainability and corrects the context-change handling behavior.
- Deduplicate consecutive messages with the same role in conversation history (sketched below)
- Add n_predict configuration option for LLM server
- Prevent duplicate message storage in session manager
- Update announcement schedule timing from 37 to 55 minutes
- Add default n_predict value in default bot config
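One plausible reading of the deduplication, sketched below (the `Message` shape is an assumption): a message is dropped when it repeats the role of the message immediately before it, keeping the history strictly alternating.

```rust
#[derive(Clone)]
pub struct Message {
    pub role: String,
    pub content: String,
}

pub fn dedup_consecutive_roles(history: &[Message]) -> Vec<Message> {
    let mut out: Vec<Message> = Vec::with_capacity(history.len());
    for msg in history {
        // Same role twice in a row: skip the duplicate.
        if out.last().map(|m| m.role == msg.role).unwrap_or(false) {
            continue;
        }
        out.push(msg.clone());
    }
    out
}
```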
- Refactor cron matching to use individual variables for each time component with additional debug logging
- Replace SETEX with atomic SET NX EX for job locking in Redis (see the sketch after this list)
- Add better error handling and logging for job execution tracking
- Skip execution if Redis is unavailable or the lock is already held
- Add verbose flag to LLM server startup command for better logging
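A sketch of the locking change. SETEX unconditionally overwrites, so two workers can both believe they hold the lock; SET with NX and EX writes only when the key is absent and sets the TTL in the same atomic step. The key name and value are assumptions.

```rust
fn try_acquire_job_lock(
    con: &mut redis::Connection,
    job: &str,
    ttl_secs: u64,
) -> redis::RedisResult<bool> {
    let res: Option<String> = redis::cmd("SET")
        .arg(format!("job_lock:{job}"))
        .arg("held")
        .arg("NX") // only set if the key does not exist
        .arg("EX") // expire, so a crashed worker cannot hold it forever
        .arg(ttl_secs)
        .query(con)?;
    Ok(res.is_some()) // Some("OK") when we won the lock, None otherwise
}
```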
- Added duplicate filtering for suggestions in both backend (Rust) and frontend (JavaScript)
- Changed announcement summary schedule from every 15 minutes to hourly at :37
- Simplified LLM prompt for document summarization
- Updated LLM server configuration with reduced GPU layers and increased context size
- Removed memory mapping and lock settings for LLM server
- Improved HTML formatting and added missing newline at EOF
- Added `trace!` logging in `bot_memory.rs` to record retrieved memory values for easier debugging.
- Refactored `BotOrchestrator` in `bot/mod.rs`:
  - Removed the duplicate session save block and consolidated message persistence.
  - Replaced low-level LLM streaming with a structured `UserMessage` and `stream_response` workflow, improving error handling and readability.
- Updated configuration loading in `config/mod.rs`:
  - Imported `get_default_bot` and enhanced `get_config` to fall back to the default bot configuration when the primary query fails.
  - Established a fresh DB connection for the fallback path to avoid borrowing issues.
Add logic to detect changes in llm-related configuration properties when syncing gbot configs. If any llm- keys differ from stored values, automatically restart LLaMA servers to apply updates. This ensures model servers stay in sync with configuration changes without unnecessary restarts.
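A sketch of the diff check under assumed names: compare the incoming llm- keys against the stored values and report a change only when one actually differs.

```rust
use std::collections::HashMap;

/// Returns true when any incoming llm- key differs from its stored value.
pub fn llm_config_changed(
    incoming: &HashMap<String, String>,
    stored: &HashMap<String, String>,
) -> bool {
    incoming
        .iter()
        .filter(|(k, _)| k.starts_with("llm-"))
        .any(|(k, v)| stored.get(k) != Some(v))
}
```

The caller would restart the LLaMA servers only when this returns true, which is what keeps restarts from happening on every sync.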
Updated `ConfigManager::get_config` to use Diesel query builder instead of raw SQL for improved safety and maintainability. Adjusted parameter naming and integrated schema references. Also refactored `ensure_llama_servers_running` to fetch configuration from the database using `AppState` and `ConfigManager`. Removed unused imports in bootstrap module.
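A sketch of the query-builder form; the `configs` table and its columns are assumptions about the schema, but the shape matches a typical raw-SQL-to-Diesel conversion.

```rust
use diesel::prelude::*;

pub fn get_config(
    conn: &mut PgConnection,
    config_key: &str,
) -> QueryResult<String> {
    // Hypothetical schema module; Diesel generates this from the table
    // definition, giving compile-time checking that raw SQL lacks.
    use crate::schema::configs::dsl::*;

    configs
        .filter(key.eq(config_key))
        .select(value)
        .first::<String>(conn)
}
```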
- Split AIConfig into separate LLMConfig and embedding config structs
- Update create_site.rs to use config.llm instead of config.ai
- Improve config loading comments in bootstrap manager
- Add new LLM-related environment variables with defaults
- Maintain backward compatibility with existing config loading
- Clean up unused AIConfig struct and related code
The change better organizes the AI-related configuration by separating LLM and embedding configurations, making the code more maintainable and flexible for future AI service integrations.
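A sketch of the resulting shape; the field names and environment variable names are assumptions.

```rust
pub struct LLMConfig {
    pub url: String,
    pub model: String,
}

pub struct EmbeddingConfig {
    pub url: String,
    pub model: String,
}

impl LLMConfig {
    /// Load from environment with defaults (variable names assumed).
    pub fn from_env() -> Self {
        Self {
            url: std::env::var("LLM_URL")
                .unwrap_or_else(|_| "http://localhost:8080".into()),
            model: std::env::var("LLM_MODEL")
                .unwrap_or_else(|_| "default".into()),
        }
    }
}
```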
- Replace direct database connection establishment with shared `establish_pg_connection` utility
- Add "llm" to required components list in bootstrap manager
- Lower default RUST_LOG level from debug to info in VSCode config
- Clean up imports and connection error messages
- Remove hardcoded database URL strings in favor of centralized connection handling
The changes improve code maintainability by centralizing database connection logic and adding support for the new LLM component in the bootstrap process.
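A sketch of what the shared helper plausibly looks like; the name matches the commit, while the body is an assumption based on standard Diesel usage.

```rust
use diesel::{Connection, ConnectionResult, PgConnection};

/// Single place to build a Postgres connection, replacing per-call-site
/// hardcoded database URL strings.
pub fn establish_pg_connection() -> ConnectionResult<PgConnection> {
    let database_url = std::env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");
    PgConnection::establish(&database_url)
}
```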
Eliminates the `s3_bucket` field from `AppConfig` and deletes related initialization code in `main.rs`. This simplifies configuration management since S3 bucket handling is no longer required or used in the application.
- Introduced AWS SDK dependencies for S3 and CSV handling.
- Implemented logic to check and update the default bot configuration in S3 after component installation (sketched below).
- Added a new configuration CSV template for bot settings.
- Refactored package manager to register cache component with updated download URL and binary name.
- Updated README and Cargo files to reflect new dependencies and configuration options.
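A sketch of the post-install check, with the bucket layout, object key, and template path all assumed: fetch the bot configuration CSV from S3 and upload the bundled template when the object is missing.

```rust
use aws_sdk_s3::primitives::ByteStream;

pub async fn ensure_default_bot_config(
    client: &aws_sdk_s3::Client,
    bucket: &str,
) -> Result<(), aws_sdk_s3::Error> {
    let key = "default-bot/config.csv"; // assumed object key
    match client.get_object().bucket(bucket).key(key).send().await {
        Ok(_) => Ok(()), // config already present, nothing to do
        Err(_) => {
            // A fuller version would distinguish NoSuchKey from transport
            // errors; here any failure triggers the template upload.
            client
                .put_object()
                .bucket(bucket)
                .key(key)
                .body(ByteStream::from_static(
                    include_bytes!("../templates/config.csv"), // assumed path
                ))
                .send()
                .await?;
            Ok(())
        }
    }
}
```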