Added parse_messages method to handle structured prompt input for OpenAI API. The method converts human/bot/compact prefixes to appropriate OpenAI roles (user/assistant/system) and properly formats multi-line messages. This enables more complex conversation structures in prompts while maintaining compatibility with the OpenAI API format.
Removed the direct prompt-to-message conversion in generate and generate_stream methods, replacing it with the new parse_messages utility. Also reorganized the impl blocks for better code organization.
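A minimal sketch of what such a prefix-to-role mapping might look like; the `ChatMessage` struct and the exact prefix spellings are assumptions rather than the project's actual code:

```rust
// Hypothetical sketch: prefix spellings and the struct are assumptions.
struct ChatMessage {
    role: String,    // "user", "assistant", or "system"
    content: String,
}

fn parse_messages(prompt: &str) -> Vec<ChatMessage> {
    let mut messages: Vec<ChatMessage> = Vec::new();
    for line in prompt.lines() {
        // Map each prefix to the corresponding OpenAI role.
        let (role, content) = if let Some(rest) = line.strip_prefix("human:") {
            ("user", rest)
        } else if let Some(rest) = line.strip_prefix("bot:") {
            ("assistant", rest)
        } else if let Some(rest) = line.strip_prefix("compact:") {
            ("system", rest)
        } else {
            // No prefix: treat as a continuation of the previous message so
            // multi-line content stays in one entry.
            if let Some(last) = messages.last_mut() {
                last.content.push('\n');
                last.content.push_str(line);
            }
            continue;
        };
        messages.push(ChatMessage {
            role: role.to_string(),
            content: content.trim().to_string(),
        });
    }
    messages
}
```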
- Remove unused imports and redundant session progress tracking
- Reorder session progress check to after initial validation
- Replace `summarize` with `generate` for LLM interaction
- Add more detailed logging for summarization process
- Improve error handling and fallback behavior
- Move session cleanup guard to end of processing
- Update log levels for better observability (trace -> info for key events)
The changes streamline the prompt compaction flow and improve reliability while maintaining the same core functionality.
- Renamed `execute_compact_prompt` to `compact_prompt_for_bots` and simplified logic
- Removed redundant comments and empty lines in test files
- Consolidated prompt compaction threshold handling
- Cleaned up UI logging implementation by removing unnecessary whitespace
- Improved code organization in ui_tree module
The changes focus on code quality improvements, removing clutter, and making the prompt compaction logic more straightforward. Test files were cleaned up to be more concise.
Modified compact_prompt_for_bot to only include the most recent N messages (messages_since_summary + 1) when building the compacted prompt string. This prevents excessive context from being included and improves performance.
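A small sketch of the windowing described above, assuming a plain message list and a `messages_since_summary` count:

```rust
// Keep only the last (messages_since_summary + 1) entries; names are illustrative.
fn recent_window(history: &[String], messages_since_summary: usize) -> &[String] {
    let keep = messages_since_summary + 1;
    let start = history.len().saturating_sub(keep);
    &history[start..]
}
```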
Added a trace-level log statement to output the constructed LLM prompt in BotOrchestrator. This helps with debugging by making the prompt content visible in logs when trace logging is enabled. The change maintains existing functionality while improving observability.
Refactor the compact prompt scheduler to use proper indentation and improve error logging. Added more detailed error messages for prompt compaction failures and included bot_id in error logs. The changes make the code easier to maintain and debug while preserving the same functionality.
Added functionality to generate secure passwords for database and drive server credentials during bootstrap. Removed the PostgreSQL running check and auto-start logic as it's no longer needed. Renamed `create_s3_operator` to the more descriptive `get_drive_client`. The bootstrap process now automatically sets up required environment variables in the .env file, including the database URL and drive server credentials.
- Added 30-second timeout for S3 bucket listing operations in DriveMonitor (see the sketch after this list)
- Removed unused `use_ssl` flag from DriveConfig and cleaned up imports
- Improved error handling with proper logging for timeout scenarios
- Fixed syntax in AppConfig initialization (added missing commas)
- Added proper spacing between methods in BootstrapManager
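A rough sketch of the 30-second timeout, assuming the listing call is exposed as a future; the helper name and error handling are illustrative, not the DriveMonitor code itself:

```rust
use std::time::Duration;
use tokio::time::timeout;

// Hypothetical helper: wraps any bucket-listing future in a 30s budget.
async fn list_with_timeout<F>(list_buckets: F) -> anyhow::Result<Vec<String>>
where
    F: std::future::Future<Output = anyhow::Result<Vec<String>>>,
{
    match timeout(Duration::from_secs(30), list_buckets).await {
        // The listing finished (successfully or not) within the budget.
        Ok(result) => result,
        // The budget elapsed: log it and surface a timeout error.
        Err(_) => {
            tracing::warn!("S3 bucket listing timed out after 30s");
            Err(anyhow::anyhow!("bucket listing timed out"))
        }
    }
}
```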
- Removed unused `id` and `app_state` fields from `ChatPanel`; updated constructor to accept but ignore the state, reducing memory footprint.
- Switched database access in `ChatPanel` from a raw `Mutex` lock to a connection pool (`app_state.conn.get()`), improving concurrency and error handling.
- Reordered and cleaned up imports in `status_panel.rs` and formatted struct fields for readability.
- Updated VS Code launch configuration to pass `--noui` argument, enabling headless mode for debugging.
- Bumped several crate versions in `Cargo.lock` (e.g., `bitflags` to 2.10.0, `syn` to 2.0.108, `cookie` to 0.16.2) and added the new `ashpd` dependency, aligning the project with latest library releases.
Uncommented bootstrap and package_manager directories in add-req.sh to include them in build process. Refactored bootstrap module for cleaner initialization and improved component handling logic.
The warning log was removed from the error case in the has_nvidia_gpu() function
as it was producing false positives. The function now silently returns false
when nvidia-smi is not available or no NVIDIA GPU is detected, which is the
expected behavior for the fallback case.
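An approximate shape of that fallback (the detection details are an assumption); the point is that a missing `nvidia-smi` is a normal case rather than an error worth logging:

```rust
use std::process::Command;

// Illustrative only: the real function may inspect the output differently.
fn has_nvidia_gpu() -> bool {
    match Command::new("nvidia-smi").arg("-L").output() {
        // nvidia-smi ran and listed at least one GPU.
        Ok(out) => out.status.success() && !out.stdout.is_empty(),
        // nvidia-smi missing or not runnable: silently fall back to false.
        Err(_) => false,
    }
}
```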
Extract progress bar rendering and warning message display from BotOrchestrator into a dedicated BotUI module. This improves code organization by separating UI concerns from core bot logic. The UI module handles both progress visualization with system metrics and warning message presentation, providing a cleaner interface for output operations.
Added new dependencies for desktop UI support including color-eyre, crossterm, and ratatui. Updated existing dependencies and modified Cargo.toml to include a new 'desktop' feature flag. Also cleaned up the contributors list and modified the add-req.sh script to focus on core bot functionality.
The desktop UI support enables better terminal-based interfaces while the dependency updates ensure compatibility and security. The script changes reflect a shift in focus areas for the project.
Update the LLM server command construction to include a new `--reasoning-format deepseek` argument, enabling explicit selection of the DeepSeek reasoning format. Replace the short `-ngl` flag with the more descriptive `--n-gpu-layers` to improve readability and consistency with other CLI options. This change enhances configurability for models requiring specific reasoning formats and clarifies GPU layer configuration.
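A sketch of the resulting argument construction; the `--model` pair and the parameter names are placeholders, and only the two flags mentioned above come from the change:

```rust
// Placeholder arguments around the two flags discussed in the commit.
fn build_server_args(model_path: &str, gpu_layers: u32) -> Vec<String> {
    vec![
        "--model".to_string(), model_path.to_string(),
        // Spelled out instead of the short `-ngl` form for readability.
        "--n-gpu-layers".to_string(), gpu_layers.to_string(),
        // Explicitly select the DeepSeek reasoning format.
        "--reasoning-format".to_string(), "deepseek".to_string(),
    ]
}
```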
Add `info!` statements that output the exact command used to launch the LLM server on both Windows and Unix platforms. This enhances observability and aids debugging by showing the constructed command line before the process is spawned.
- Added retrieval of `llm-server-reasoning-format` configuration in `src/llm/local.rs`.
- When the config value is non‑empty, the server start command now includes `--reasoning-format <value>`.
- Updated argument construction to conditionally append the new flag.
- Cleaned up `src/automation/mod.rs` by removing an unused `std::sync::Arc` import, simplifying the module and eliminating a dead dependency.
The default LLM service URL was changed from `http://localhost:8081/` to `http://localhost:8081`.
Both the configuration lookup default and the fallback string are updated to omit the trailing slash. This prevents accidental double‑slashes when constructing request paths and aligns the default with expected endpoint formatting.
Changed the fallback LLM service URL from `http://localhost:8081/v1` to `http://localhost:8081/`. This aligns the default endpoint with the updated API that no longer requires the `/v1` path, ensuring the application connects correctly when no custom configuration is provided.
- Reordered imports for clarity (chrono and tokio::time::Instant).
- Fixed comment indentation around compact automation note.
- Refactored session history retrieval to acquire the mutex only briefly, then process compacted message skipping and history limiting outside the lock (see the sketch after this list).
- Added explanatory comments for the new lock handling logic.
- Cleaned up token progress calculation and display formatting, improving readability of GPU/CPU/TOKENS bars.
- Minor formatting adjustments throughout the file.
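A sketch of the short-lock pattern referenced above, assuming the session store is roughly a mutex-guarded message list:

```rust
use std::sync::{Arc, Mutex};

// Illustrative only: the real store and message types are assumptions.
fn recent_history(store: &Arc<Mutex<Vec<String>>>, limit: usize) -> Vec<String> {
    // Hold the mutex only long enough to clone the messages.
    let snapshot = {
        let guard = store.lock().expect("session store poisoned");
        guard.clone()
    }; // lock released here

    // Compacted-message skipping and history limiting happen outside the
    // critical section; only the limit is shown here.
    let start = snapshot.len().saturating_sub(limit);
    snapshot[start..].to_vec()
}
```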
Updated the 6.0.4 migration to use `http://localhost:8081/v1` for the default OpenAI model configurations (gpt‑4 and gpt‑3.5‑turbo) and the local embed service. Adjusted `OpenAIClient` to default to the same localhost base URL instead of the production OpenAI API.
Reorganized imports and module ordering in `src/main.rs` (moved `mod llm`, `mod nvidia`, and `BotOrchestrator` import), cleaned up formatting, and removed unused imports. These changes streamline development by directing LLM calls to a local server and improve code readability.
Removed the conversation history loading logic in `BotOrchestrator` and replaced it with a placeholder string, commenting out related prompt construction and tracing. This change streamlines prompt generation while debugging and prevents unnecessary history processing.
In the local LLM server setup, eliminated the `llm-server-ctx-size` configuration and its corresponding command‑line argument, as the context size parameter is no longer required. This simplifies server initialization and avoids passing an unused flag.
Adjusted the command strings used to start the LLM and embedding servers on both Windows and Unix.
- Replaced the previous log redirection `../../../../logs/llm/stdout.log` with simpler local files (`llm-stdout.log` and `stdout.log`).
- Updated both normal and embedding server launch commands to use the new paths.
This change simplifies log management, ensures logs are correctly written regardless of the working directory, and resolves issues where the previous relative path could be invalid or inaccessible.
- Updated `execute_compact_prompt` to accept an `Arc<AppState>` instead of creating a new default state, enabling proper state sharing across tasks.
- Adjusted bot orchestration to clone and pass the existing `AppState` to the automation task, ensuring the same connection and configuration are used.
- Removed the `Default` implementation for `AppState`, preventing accidental creation of a default state with hard‑coded DB connections and services.
- Modified `BotOrchestrator::default` to panic, enforcing explicit construction via `BotOrchestrator::new(state)` for clearer dependency injection.
These changes improve testability, avoid hidden side‑effects from default state initialization, and ensure consistent use of the application state throughout the system.
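A sketch of what the stricter `Default` looks like in spirit, with stub types standing in for the real `AppState` and orchestrator:

```rust
use std::sync::Arc;

struct AppState; // stand-in for the real application state (DB, services)

#[allow(dead_code)]
struct BotOrchestrator { state: Arc<AppState> }

impl BotOrchestrator {
    // Explicit construction: the caller must supply the shared state.
    fn new(state: Arc<AppState>) -> Self {
        Self { state }
    }
}

// There is no longer a sensible default state, so Default fails loudly
// instead of wiring up hidden connections.
impl Default for BotOrchestrator {
    fn default() -> Self {
        panic!("use BotOrchestrator::new(state) for explicit dependency injection")
    }
}
```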
Add logic to save user messages to session history for better traceability and context continuity. Simplify session creation error handling and remove redundant warning on closed response channel. Update README with guidance on maintaining production-ready source code.
Renamed PostgreSQL references to "Tables" for clarity in bootstrap logs, changed config sync logging from info to trace for reduced verbosity, and made session message clearing method private to limit external access.
Use `print!` with stdout flush for smoother in-place GPU/CPU/token progress updates in the bot module. Simplify context indicator logic in the web UI by always removing visibility class to streamline behavior.
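A minimal sketch of the in-place progress update (the bar contents are made up):

```rust
use std::io::Write;

fn draw_progress(gpu: f32, cpu: f32, tokens: usize) {
    // `\r` returns to the start of the line so the next draw overwrites it.
    print!("\rGPU {gpu:>5.1}% | CPU {cpu:>5.1}% | TOKENS {tokens}");
    // print! does not flush on its own, so push the update out immediately.
    std::io::stdout().flush().ok();
}
```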
Refactored prompt compaction to use a special compacted message type (9) instead of clearing old messages. Added support for forced compaction when threshold is negative and updated history retrieval to skip messages before the last compacted marker. This improves efficiency and preserves summary continuity.
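A sketch of the marker-based retrieval; the kind value 9 is from the commit, while the struct and field names are assumptions:

```rust
const MESSAGE_KIND_COMPACTED: i32 = 9;

#[allow(dead_code)]
struct StoredMessage { kind: i32, content: String }

// Keep the most recent compacted summary plus everything after it, so the
// summary stands in for the older messages it replaced.
fn history_since_last_compaction(messages: &[StoredMessage]) -> &[StoredMessage] {
    match messages.iter().rposition(|m| m.kind == MESSAGE_KIND_COMPACTED) {
        Some(idx) => &messages[idx..],
        None => messages,
    }
}
```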
Added a check in `BootstrapManager` to detect if PostgreSQL is running and attempt to start the "tables" component automatically if not. Also prefixed unused variables and struct fields with underscores in compiler, session, and model modules to suppress warnings and improve code clarity.
Added `once_cell` and `scopeguard` dependencies to implement thread-safe compaction lock mechanism. Modified `compact_prompt_for_bot` to:
- Prevent concurrent compaction for the same bot using a global lock
- Add proper tracing and error handling
- Improve summarization with content filtering
- Clean up locks automatically using scopeguard
- Remove redundant threshold check and compact entire history
The changes ensure thread safety during prompt compaction and provide better observability through tracing.
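A sketch of the locking scheme under these assumptions (`bot_id` shown as a plain string, summarization elided):

```rust
use std::collections::HashSet;
use std::sync::Mutex;
use once_cell::sync::Lazy;

// Global set of bots currently being compacted.
static COMPACTING: Lazy<Mutex<HashSet<String>>> = Lazy::new(|| Mutex::new(HashSet::new()));

fn compact_prompt_for_bot(bot_id: &str) {
    // Claim the per-bot slot; bail out if a compaction is already running.
    if !COMPACTING.lock().unwrap().insert(bot_id.to_string()) {
        tracing::trace!(%bot_id, "compaction already in progress, skipping");
        return;
    }
    // Release the slot on every exit path, including early returns and panics.
    let _release = scopeguard::guard(bot_id.to_string(), |id| {
        COMPACTING.lock().unwrap().remove(&id);
    });

    // ... filter content, summarize the history, store the compacted message ...
}
```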
- Added initial 30s delay to compact prompt scheduler
- Implemented async LLM summarization for conversation history
- Reduced lock contention by minimizing critical sections
- Added fallback to original text if summarization fails
- Updated README with guidance for failed requirements
- Added new `summarize` method to LLMProvider trait
- Improved session manager query with proper DSL usage
The changes optimize the prompt compaction process by:
1. Reducing lock contention through better resource management
2. Adding LLM-based summarization for better conversation compression
3. Making the system more resilient with proper error handling
4. Improving documentation for development practices
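One possible shape of the `summarize` addition to LLMProvider, with the fallback-to-original-text behavior noted above; the trait's real signature is not shown in the log, so this is an assumption:

```rust
use async_trait::async_trait;

#[async_trait]
trait LLMProvider {
    async fn generate(&self, prompt: &str) -> anyhow::Result<String>;

    // Default implementation: wrap the text in a summarization instruction and
    // reuse generate(), falling back to the original text if the call fails.
    async fn summarize(&self, text: &str) -> String {
        let prompt = format!("Summarize the following conversation concisely:\n{text}");
        self.generate(&prompt).await.unwrap_or_else(|_| text.to_string())
    }
}
```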
Added new compact_prompt module and its scheduler initialization in AutomationService.
Refactored code for better readability:
- Improved import organization
- Fixed indentation in schedule checking logic
- Enhanced error handling with more descriptive messages
- Formatted long lines for better readability
- Added comments for clarity
The changes maintain existing functionality while making the code more maintainable.
Expanded README with detailed feature matrix and enterprise capabilities for the self-host AI automation platform. Simplified setup instructions by removing redundant configuration and build steps to improve clarity and onboarding experience.
- Update RUST_LOG configuration in launch.json to include trace level and additional module filters
- Uncomment and enable multiple directories in add-req.sh script
- Add execute_compact_prompt function to automation module
- Extend BasicCompiler comment detection to handle single quotes
- Modify BotOrchestrator system message prefix from "SYSTEM" to "SYS"
- Add placeholder for compact prompt automation in BotOrchestrator initialization
Changes improve debugging capabilities and enable previously commented-out automation features while maintaining existing functionality.
Add the `cron` crate (v0.15.0) to Cargo.toml and Cargo.lock to enable scheduling capabilities.
Introduce a new `broadcast_theme_change` helper in `src/automation/mod.rs` that parses CSV theme data and pushes JSON theme update events to all active response channels.
Clean up unused imports in the automation module and add `ConfigManager` import for future configuration handling.
Update `add-req.sh` to adjust the list of processed directories (comment out `auth`, enable `basic`, `config`, `context`, and `drive_monitor`).
These changes lay groundwork for scheduled tasks and dynamic theme updates across the application.
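A rough sketch of the theme broadcast; the CSV layout (name, primary, secondary) and the channel type are assumptions:

```rust
use serde_json::json;
use tokio::sync::mpsc::UnboundedSender;

fn broadcast_theme_change(csv: &str, channels: &[UnboundedSender<String>]) {
    for line in csv.lines().filter(|l| !l.trim().is_empty()) {
        let fields: Vec<&str> = line.split(',').map(str::trim).collect();
        if fields.len() < 3 {
            continue; // skip malformed rows
        }
        // Turn the CSV row into a theme-update event.
        let event = json!({
            "type": "theme_update",
            "name": fields[0],
            "primary": fields[1],
            "secondary": fields[2],
        })
        .to_string();
        // Push the event to every active response channel; closed channels
        // simply drop the message.
        for tx in channels {
            let _ = tx.send(event.clone());
        }
    }
}
```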
- Updated botserver version from 6.0.5 to 6.0.7 in Cargo.toml and Cargo.lock
- Removed old Rodrigo Rodriguez entry from authors list
- Added new Rodrigo Rodriguez entry with updated email
- Maintained all other existing authors in the list
Added support for configurable conversation history limits through bot configuration. The bot now reads 'prompt-history' from config (defaulting to -1 for unlimited) and trims the conversation history accordingly before generating prompts. Updated the announcements bot template to use a history limit of 2 messages instead of the previous compact setting.
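A small sketch of the trimming, assuming the limit has already been read from the 'prompt-history' config:

```rust
// -1 (the default) means unlimited history; otherwise keep the last N messages.
fn apply_history_limit(history: Vec<String>, prompt_history: i64) -> Vec<String> {
    if prompt_history < 0 {
        return history;
    }
    let keep = prompt_history as usize;
    let start = history.len().saturating_sub(keep);
    history[start..].to_vec()
}
```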
Removed `IF NOT EXISTS` from the unique constraint in `system_automations` to ensure proper enforcement, and deleted the unused `floatLogo` click event listener to clean up UI behavior.
Added new configuration options for theme colors (green, yellow) and a custom logo URL to enhance branding and visual customization in announcement templates.
Updated the regex pattern in DeepseekR3Handler to use (?s) flag for dot-matches-newline behavior when removing <think> tags. Added comprehensive test case that verifies the handler correctly processes content with multiline think tags. Also made styling changes to the web interface, though the full diff was truncated.
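A minimal reproduction of the behavior, using the regex crate's `(?s)` flag:

```rust
use regex::Regex;

fn strip_think_tags(input: &str) -> String {
    // (?s) makes `.` match newlines, so multiline <think> blocks are removed too.
    let re = Regex::new(r"(?s)<think>.*?</think>").expect("valid regex");
    re.replace_all(input, "").to_string()
}

#[test]
fn removes_multiline_think_block() {
    let text = "Hello <think>step 1\nstep 2</think>world";
    assert_eq!(strip_think_tags(text), "Hello world");
}
```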
- Simplified build_llm_prompt by removing redundant formatting
- Added info logging for LLM model and processed content
- Updated README with development philosophy note
- Adjusted announcement schedule timing from 55 to 59 minutes past the hour
Mark current_tokens and max_tokens parameters as unused in get_system_metrics function by prefixing them with underscores. This change clarifies that these parameters are intentionally unused in the function implementation while maintaining the function signature for potential future use.
Removed several unused dependencies from Cargo.lock including:
- auto_generate_cdp
- headless_chrome
- scraper
- cssparser and related crates
- dtoa and dtoa-short
- string_cache and related crates
- tendril
- tungstenite 0.27.0
Also updated ureq dependency to single version (removed duplicate entry). This cleanup reduces the dependency tree and removes unused code.
- Update prompt formatting in BotOrchestrator to use clearer labels (SYSTEM/CONTEXT) with emphasis markers
- Remove unused token_ratio field from SystemMetrics struct
- Increase default context size (2048->4096) and prediction length (512->1024) in config
- Clean up metrics calculation by removing redundant token ratio computation
The changes improve readability of system prompts and simplify metrics collection while increasing default model capacity.
Added the sysinfo crate (v0.37.2) to gather system metrics. This includes:
- New dependencies: sysinfo, ntapi, objc2-core-foundation, objc2-io-kit
- Updated windows-core to specific version 0.62.2
- Initial system metrics integration in bot module
The change enables monitoring system resources which will be used for performance optimization and health monitoring.
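A rough sketch of sampling the metrics with sysinfo; method names have shifted between sysinfo releases, so treat this as illustrative rather than the project's exact code:

```rust
use sysinfo::System;

// Returns (average CPU %, used memory bytes, total memory bytes).
fn sample_system_metrics() -> (f32, u64, u64) {
    let mut sys = System::new_all();
    // CPU usage is computed between two refreshes, so sample twice.
    sys.refresh_all();
    std::thread::sleep(sysinfo::MINIMUM_CPU_UPDATE_INTERVAL);
    sys.refresh_all();

    let cpus = sys.cpus();
    let avg_cpu = cpus.iter().map(|c| c.cpu_usage()).sum::<f32>() / cpus.len().max(1) as f32;
    (avg_cpu, sys.used_memory(), sys.total_memory())
}
```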
- Add 'keyword' to LLM processing log message for better context
- Replace simple string replace with regex for removing <think> tags in DeepseekR3 model
- The changes provide more precise logging and more robust content processing
- Added migration 6.0.6 to enforce a unique constraint on `(bot_id, kind, param)` in `system_automations`, preventing “no unique or exclusion constraint matching the ON CONFLICT specification” errors, and created a supporting index.
- Added migration 6.0.7 to replace the `clicks` table with a correctly defined primary key and a unique `(campaign_id, email)` constraint, satisfying Diesel's requirement that every table have a primary key.
Added tracking of previously scheduled scripts using a `HashSet` and initialized it in `BasicCompiler::new`. Updated `compile_file` and `preprocess_basic` to require mutable access, allowing schedule cleanup before processing. Implemented logic to delete existing scheduled automations for a script using Diesel queries, ensuring old schedules are removed when a script is recompiled without a `SET_SCHEDULE`. Added necessary Diesel imports and `TriggerKind` reference. This prevents duplicate or orphaned scheduled tasks.
Changed the set_schedule function to first attempt updating existing records before inserting new ones. This improves efficiency by avoiding unnecessary insert conflicts and subsequent updates. The logic now:
1. Tries to update matching existing schedule first
2. Only performs insert if no matching record was found
3. Maintains same functionality but with better performance
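A sketch of the update-then-insert flow with Diesel, against a simplified stand-in schema (the real table uses a UUID `bot_id` and more columns):

```rust
use diesel::prelude::*;

// Simplified stand-in for the real system_automations table.
diesel::table! {
    system_automations (id) {
        id -> Int4,
        bot_id -> Text,
        kind -> Text,
        param -> Text,
        schedule -> Text,
    }
}

fn set_schedule(conn: &mut PgConnection, bot: &str, k: &str, p: &str, cron: &str) -> QueryResult<()> {
    use system_automations::dsl::*;

    // 1. Try to update a matching schedule first.
    let updated = diesel::update(
        system_automations
            .filter(bot_id.eq(bot))
            .filter(kind.eq(k))
            .filter(param.eq(p)),
    )
    .set(schedule.eq(cron))
    .execute(conn)?;

    // 2. Only insert when no existing record matched.
    if updated == 0 {
        diesel::insert_into(system_automations)
            .values((bot_id.eq(bot), kind.eq(k), param.eq(p), schedule.eq(cron)))
            .execute(conn)?;
    }
    Ok(())
}
```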
- Extend `system_automations` with a non‑null `bot_id` UUID column, create an index on it, and add a unique constraint on `(bot_id, kind, param)` to support upserts.
- Add a unique constraint on `bot_configuration.config_key` to prevent duplicate configuration keys.
- Include migration guards to ensure the new constraint is only created once.
- Remove automatic writing of drive configuration to a `.env` file, cleaning up side‑effects during config loading.
- Change database connection handling to require `DATABASE_URL` to be set (no fallback), making the environment initialization explicit.