- Changed default feature to include 'desktop' in Cargo.toml
- Replaced --noui flag with --desktop flag in launch.json
- Added Tauri desktop mode implementation in main.rs
- Simplified command line argument handling
- Cleaned up code formatting in main.rs
The changes introduce a new mode for running the application as a desktop app using the Tauri framework, while maintaining the existing server functionality. The desktop mode loads a webview window with a specific HTML interface.
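A minimal sketch of how the mode switch might look, assuming the `--desktop` flag from the entry above; the `run_server` name and window wiring are illustrative, not the actual implementation:

```rust
// Sketch: pick desktop (Tauri webview) or server mode from a CLI flag.
fn main() {
    if std::env::args().any(|a| a == "--desktop") {
        // tauri.conf.json points the webview at the bundled HTML interface.
        tauri::Builder::default()
            .run(tauri::generate_context!())
            .expect("error while running tauri application");
    } else {
        run_server(); // existing server entry point (illustrative name)
    }
}

fn run_server() {
    // Start the HTTP server as before.
}
```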
Removed commented-out code for deprecated LLM server arguments (n_moe, parallel, cont_batching, etc.) since these are no longer used. Also cleaned up the model arguments string by removing the --jinja and --flash-attn flags, which were moved to TODO comments for a future config implementation. The change simplifies the server startup code while maintaining core functionality.
- Bump version from 6.0.7 to 6.0.8 in Cargo.toml and Cargo.lock
- Refactor desktop feature to use explicit dependency syntax
- Remove outdated open source tools list from README-6.md
- Changed DOCTYPE to lowercase for HTML5 compliance
- Removed redundant CSS and JavaScript code
- Simplified theme variables and styling
- Improved message processing logic
- Added better event management
- Streamlined UI components
The changes focus on code cleanliness, performance improvements, and maintainability while preserving all functionality. The HTML structure is now more semantic and follows modern web standards.
Added the --jinja flag to the LLM server startup arguments to enable Jinja template support. This allows for more flexible prompt formatting when using the local LLM server. The change maintains all existing functionality while adding the new feature.
Removed the redundant `--verbose` flag from the Windows command since it's not needed. Standardized log file names to `llm-stdout.log` and `llmembd-stdout.log` for consistency across platforms. This makes log management simpler and more predictable.
Added the `--flash-attn on` flag to the LLM server startup arguments to enable flash attention optimization. This improves performance while maintaining existing parameters (top_p, temp, repeat-penalty). A TODO was added to move these parameters to config for better maintainability.
Updated the parameter name from 'n-ctx-size' to 'ctx-size' in both config lookup and argument formatting for consistency. This change aligns with the naming convention used elsewhere in the codebase and makes the parameter name more concise while maintaining clarity. The functionality remains unchanged.
Changed the config key 'llm-server-n_ctx_size' to 'llm-server-n-ctx-size' in local.rs to maintain consistent hyphen-separated naming convention across configuration parameters. This improves code readability and aligns with existing naming patterns.
Added support for configuring the context window size (n_ctx_size) when starting the local LLM server. The parameter is read from config with a default value of 4096 if not specified. This allows for better control over the model's memory usage and performance characteristics.
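A plausible sketch of that lookup, where `get_config` stands in for the project's config accessor; note the key and flag spelling changed in the later renames described in the two entries above:

```rust
// Sketch: read the context window size from config, defaulting to 4096,
// and append it to the LLM server's argument list.
fn push_ctx_size_arg(
    args: &mut Vec<String>,
    get_config: impl Fn(&str) -> Option<String>,
) {
    let ctx_size: u32 = get_config("llm-server-n_ctx_size")
        .and_then(|v| v.parse().ok())
        .unwrap_or(4096);
    args.push("--ctx-size".into());
    args.push(ctx_size.to_string());
}
```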
The check_gbot function call in DriveMonitor's run method has been commented out with a TODO note, indicating it's deprecated and should be removed. This is likely part of cleaning up unused or outdated functionality while keeping the codebase functional. The gbdialog changes check remains active.
- Remove trace logs in compact_prompt.rs that were cluttering logs without adding value
- Simplify LLM server args in local.rs by removing redundant --reasoning-format parameter
- Add ID to float menu div in index.html for better DOM targeting
- Clean up code by removing unnecessary debug logging while maintaining functionality
- Increased schedule field size from bpchar(12) to bpchar(20) in database schema
- Reduced task checking interval from 60s to 5s for more responsive automation (see the sketch after this list)
- Improved error handling for schedule parsing and execution
- Added proper error logging for automation failures
- Changed automation execution to use the bot_id instead of a nil UUID
- Enhanced HEAR keyword functionality (partial diff shown)
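For the interval change, a minimal tokio-based sketch (`check_pending_tasks` is an illustrative placeholder for the schedule parsing and execution):

```rust
use std::time::Duration;

// Sketch: poll for scheduled tasks every 5 seconds instead of every 60,
// logging failures instead of dropping them silently.
async fn automation_loop() {
    let mut ticker = tokio::time::interval(Duration::from_secs(5));
    loop {
        ticker.tick().await;
        if let Err(e) = check_pending_tasks().await {
            log::error!("automation run failed: {e}");
        }
    }
}

async fn check_pending_tasks() -> Result<(), Box<dyn std::error::Error>> {
    Ok(()) // placeholder for schedule parsing and execution
}
```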
The change adds `arg("true")` to the shell command to prevent executing an empty shell command when a component is already running. This ensures a valid command is always passed to the shell, avoiding potential issues with empty command execution.
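A sketch of the guard, assuming a POSIX `sh -c` invocation (`true` is the shell's no-op builtin):

```rust
use std::process::Command;

// Sketch: never hand an empty string to `sh -c`; substitute the no-op
// `true` when the component is already running and nothing needs to run.
fn run_shell(cmd: &str) -> std::io::Result<std::process::ExitStatus> {
    let cmd = if cmd.trim().is_empty() { "true" } else { cmd };
    Command::new("sh").arg("-c").arg(cmd).status()
}
```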
The trace log for successful component installation was removed as it was deemed unnecessary. The success of the installation is already indicated by the Ok(()) return value, making the log redundant. This change simplifies the code while maintaining the same functionality.
Fix incorrect variable reference in package manager installer. The code was using `C&component.env_vars` instead of `&component.env_vars` when iterating through environment variables. This would cause compilation errors. The fix properly references the component's env_vars field when evaluating environment variable references.
Added diesel_migrations crate (v2.3.0) to enable database migration functionality. Updated Cargo.toml and Cargo.lock to include the new dependency along with its required sub-dependencies (migrations_internals and migrations_macros). Also made minor cleanups in the codebase:
- Removed unused UI code from platform README
- Cleaned up LLM server initialization code
- Added additional build dependencies in documentation
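With diesel_migrations 2.x the typical wiring looks like the sketch below; the `migrations/` directory is the crate's default and an assumption here:

```rust
use diesel_migrations::{embed_migrations, EmbeddedMigrations, MigrationHarness};

// Compile the SQL files under ./migrations into the binary.
pub const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");

// Apply any migrations that have not run yet.
fn run_migrations(conn: &mut impl MigrationHarness<diesel::pg::Pg>) {
    conn.run_pending_migrations(MIGRATIONS)
        .expect("failed to run database migrations");
}
```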
Remove hardcoded DRIVE_ACCESSKEY/SECRET env vars and replace with variable references ($DRIVE_USER, $DRIVE_ACCESSKEY). Added logic to evaluate environment variable references in command execution by expanding $VAR references to their actual values from the environment. This makes the configuration more flexible and secure by avoiding hardcoded credentials.
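A minimal sketch of the `$VAR` expansion (whole-token replacement only; quoting and `${VAR}` forms are out of scope):

```rust
// Sketch: expand $VAR tokens in a command string from the process
// environment, leaving unset variables untouched.
fn expand_env_refs(command: &str) -> String {
    command
        .split_whitespace()
        .map(|tok| match tok.strip_prefix('$') {
            Some(name) => std::env::var(name).unwrap_or_else(|_| tok.to_string()),
            None => tok.to_string(),
        })
        .collect::<Vec<_>>()
        .join(" ")
}
```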
Retrieve the `llm-key` from configuration and pass it to the LLM provider methods for secure authentication. Also refine role naming in compact prompts and remove an unused logging import.
Pass the model parameter in LLM provider calls across the automation, bot, and keyword modules to ensure the correct model is selected based on configuration. This improves flexibility and consistency in LLM usage.
Removed the legacy TABLES_SERVER environment variable check and related database connection logic. Simplified the bootstrap process to always generate new credentials and write them to the .env file. Also updated the drive monitor log message to use "Drive" instead of "S3" for consistency. #464
Refactored the prompt construction in compact_prompt.rs to use a single formatted string instead of multiple JSON messages. The conversation is now built as a single string with clear formatting markers, and the role names are more readable (User/Bot instead of user/bot). Also removed a trailing slash from the OpenAI API endpoint URL in llm/mod.rs for consistency.
The changes improve readability of the prompt structure and ensure consistent API endpoint formatting. The summarization request is more clearly formatted for the LLM while maintaining the same functionality.
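A sketch of the string-based construction, assuming a simple `(is_user, text)` history; the marker format is illustrative:

```rust
// Sketch: flatten a conversation into one prompt string with readable
// User/Bot role markers, then append the summarization request.
fn build_compact_prompt(history: &[(bool, String)]) -> String {
    let mut prompt = String::from("=== Conversation ===\n");
    for (is_user, text) in history {
        let role = if *is_user { "User" } else { "Bot" };
        prompt.push_str(&format!("{role}: {text}\n"));
    }
    prompt.push_str("=== Task ===\nSummarize the conversation above.\n");
    prompt
}
```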
- Removed unused token parameters from get_system_metrics function
- Simplified metrics collection in BotOrchestrator by removing initial token check
- Improved StatusPanel by:
- Removing 1-second update throttle
- Refreshing CPU usage more efficiently
- Separating metrics collection from rendering
- Using direct CPU measurement from sysinfo (see the sketch below)
- Cleaned up unused imports and improved code organization
The changes make the system monitoring more straightforward and efficient while maintaining all functionality.
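A sketch of the direct measurement using the sysinfo 0.30-style API (method names have shifted across sysinfo releases, so treat the calls as version-dependent):

```rust
use sysinfo::System;

// Sketch: read global CPU usage directly. sysinfo needs two refreshes
// separated by a short interval to compute a meaningful usage delta.
fn cpu_usage_percent() -> f32 {
    let mut sys = System::new();
    sys.refresh_cpu();
    std::thread::sleep(sysinfo::MINIMUM_CPU_UPDATE_INTERVAL);
    sys.refresh_cpu();
    sys.global_cpu_info().cpu_usage()
}
```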
Refactored the compact_prompt_for_bots function to use structured JSON messages instead of plain text formatting. Removed unused execute_compact_prompt method and related code from automation service as the functionality is now handled elsewhere. The changes include:
- Using serde_json to structure messages for LLM
- Improved error handling and fallback mechanism
- Cleaned up obsolete compact prompt execution code
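A sketch of the serde_json structuring (field names follow the common chat-completions shape; the system instruction is illustrative):

```rust
use serde_json::json;

// Sketch: send the compaction request as structured chat messages
// instead of one plain-text blob.
fn build_compact_messages(history_text: &str) -> serde_json::Value {
    json!([
        { "role": "system", "content": "Summarize the following conversation." },
        { "role": "user", "content": history_text }
    ])
}
```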
Added parse_messages method to handle structured prompt input for OpenAI API. The method converts human/bot/compact prefixes to appropriate OpenAI roles (user/assistant/system) and properly formats multi-line messages. This enables more complex conversation structures in prompts while maintaining compatibility with the OpenAI API format.
Removed the direct prompt-to-message conversion in generate and generate_stream methods, replacing it with the new parse_messages utility. Also reorganized the impl blocks for better code organization.
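A sketch of such a parser, assuming `human:`/`bot:`/`compact:` line prefixes; unprefixed lines continue the previous message:

```rust
use serde_json::{json, Value};

// Sketch: map prefixed prompt lines onto OpenAI chat roles
// (human -> user, bot -> assistant, compact -> system).
fn parse_messages(prompt: &str) -> Vec<Value> {
    let mut messages: Vec<(&str, String)> = Vec::new();
    for line in prompt.lines() {
        let (role, text) = if let Some(t) = line.strip_prefix("human:") {
            ("user", t)
        } else if let Some(t) = line.strip_prefix("bot:") {
            ("assistant", t)
        } else if let Some(t) = line.strip_prefix("compact:") {
            ("system", t)
        } else if let Some(last) = messages.last_mut() {
            // Multi-line message: append to the previous entry.
            last.1.push('\n');
            last.1.push_str(line);
            continue;
        } else {
            ("user", line)
        };
        messages.push((role, text.trim_start().to_string()));
    }
    messages
        .into_iter()
        .map(|(role, content)| json!({ "role": role, "content": content }))
        .collect()
}
```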
- Remove unused imports and redundant session progress tracking
- Reorder session progress check to after initial validation
- Replace `summarize` with `generate` for LLM interaction
- Add more detailed logging for summarization process
- Improve error handling and fallback behavior
- Move session cleanup guard to end of processing
- Update log levels for better observability (trace -> info for key events)
The changes streamline the prompt compaction flow and improve reliability while maintaining the same core functionality.
- Renamed `execute_compact_prompt` to `compact_prompt_for_bots` and simplified logic
- Removed redundant comments and empty lines in test files
- Consolidated prompt compaction threshold handling
- Cleaned up UI logging implementation by removing unnecessary whitespace
- Improved code organization in ui_tree module
The changes focus on code quality improvements, removing clutter, and making the prompt compaction logic more straightforward. Test files were cleaned up to be more concise.
Modified compact_prompt_for_bot to only include the most recent N messages (messages_since_summary + 1) when building the compacted prompt string. This prevents excessive context from being included and improves performance by reducing the amount of history sent to the LLM.
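A sketch of the windowing, where `n` would be `messages_since_summary + 1`:

```rust
// Sketch: keep only the most recent `n` messages when building the
// compacted prompt string.
fn recent_messages(messages: &[String], n: usize) -> &[String] {
    let start = messages.len().saturating_sub(n);
    &messages[start..]
}
```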
Added a trace-level log statement to output the constructed LLM prompt in BotOrchestrator. This helps with debugging by making the prompt content visible in logs when trace logging is enabled. The change maintains existing functionality while improving observability.
Refactor the compact prompt scheduler to use proper indentation and improve error logging. Added more detailed error messages for prompt compaction failures and included bot_id in error logs. The changes make the code more maintainable and debugging easier while maintaining the same functionality.
Added functionality to generate secure passwords for database and drive server credentials during bootstrap. Removed the PostgreSQL running check and auto-start logic as it's no longer needed. Renamed `create_s3_operator` to the more descriptive `get_drive_client`. The bootstrap process now automatically sets up the required environment variables in the .env file, including the database URL and drive server credentials.
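A sketch of the credential generation with the rand crate's `Alphanumeric` distribution (the 32-character length is an assumption):

```rust
use rand::{distributions::Alphanumeric, Rng};

// Sketch: random alphanumeric secret for the database and drive-server
// credentials written to .env during bootstrap.
fn generate_password() -> String {
    rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(32)
        .map(char::from)
        .collect()
}
```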
- Added a 30-second timeout for S3 bucket listing operations in DriveMonitor (see the sketch after this list)
- Removed unused `use_ssl` flag from DriveConfig and cleaned up imports
- Improved error handling with proper logging for timeout scenarios
- Fixed syntax in AppConfig initialization (added missing commas)
- Added proper spacing between methods in BootstrapManager
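For the timeout item, a sketch with `tokio::time::timeout` (`list_bucket` is an illustrative placeholder for the drive client call):

```rust
use std::time::Duration;
use tokio::time::timeout;

// Sketch: bound the bucket listing to 30 seconds and log on expiry.
async fn list_with_timeout() {
    match timeout(Duration::from_secs(30), list_bucket()).await {
        Ok(Ok(objects)) => println!("listed {} objects", objects.len()),
        Ok(Err(e)) => log::error!("bucket listing failed: {e}"),
        Err(_) => log::error!("bucket listing timed out after 30s"),
    }
}

async fn list_bucket() -> Result<Vec<String>, Box<dyn std::error::Error>> {
    Ok(Vec::new()) // placeholder for the real S3-compatible client call
}
```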
- Removed unused `id` and `app_state` fields from `ChatPanel`; updated the constructor to accept but ignore the state, reducing the struct's memory footprint.
- Switched database access in `ChatPanel` from a raw `Mutex` lock to a connection pool (`app_state.conn.get()`), improving concurrency and error handling.
- Reordered and cleaned up imports in `status_panel.rs` and formatted struct fields for readability.
- Updated VS Code launch configuration to pass `--noui` argument, enabling headless mode for debugging.
- Bumped several crate versions in `Cargo.lock` (e.g., `bitflags` to 2.10.0, `syn` to 2.0.108, `cookie` to 0.16.2) and added the new `ashpd` dependency, aligning the project with latest library releases.
Uncommented the bootstrap and package_manager directories in add-req.sh to include them in the build process. Refactored the bootstrap module for cleaner initialization and improved component handling logic.
The warning log was removed from the error case in has_nvidia_gpu() as it was producing false positives. The function now silently returns false when nvidia-smi is not available or no NVIDIA GPU is detected, which is the expected behavior for the fallback case.
Extract progress bar rendering and warning message display from BotOrchestrator into a dedicated BotUI module. This improves code organization by separating UI concerns from core bot logic. The UI module handles both progress visualization with system metrics and warning message presentation, providing a cleaner interface for output operations.
Added new dependencies for desktop UI support including color-eyre, crossterm, and ratatui. Updated existing dependencies and modified Cargo.toml to include a new 'desktop' feature flag. Also cleaned up the contributors list and modified the add-req.sh script to focus on core bot functionality.
The desktop UI support enables better terminal-based interfaces while the dependency updates ensure compatibility and security. The script changes reflect a shift in focus areas for the project.
Update the LLM server command construction to include a new `--reasoning-format deepseek` argument, enabling explicit selection of the DeepSeek reasoning format. Replace the short `-ngl` flag with the more descriptive `--n-gpu-layers` to improve readability and consistency with other CLI options. This change enhances configurability for models requiring specific reasoning formats and clarifies GPU layer configuration.
Add `info!` statements that output the exact command used to launch the LLM server on both Windows and Unix platforms. This enhances observability and aids debugging by showing the constructed command line before the process is spawned.
- Added retrieval of `llm-server-reasoning-format` configuration in `src/llm/local.rs`.
- When the config value is non‑empty, the server start command now includes `--reasoning-format <value>`.
- Updated argument construction to conditionally append the new flag (see the sketch after this list).
- Cleaned up `src/automation/mod.rs` by removing an unused `std::sync::Arc` import, simplifying the module and eliminating a dead dependency.
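A sketch of the conditional append (`get_config` stands in for the project's config accessor):

```rust
// Sketch: pass --reasoning-format only when the config value is non-empty.
fn push_reasoning_format(
    args: &mut Vec<String>,
    get_config: impl Fn(&str) -> Option<String>,
) {
    if let Some(fmt) = get_config("llm-server-reasoning-format") {
        if !fmt.trim().is_empty() {
            args.push("--reasoning-format".into());
            args.push(fmt);
        }
    }
}
```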
The default LLM service URL was changed from `http://localhost:8081/` to `http://localhost:8081`.
Both the configuration lookup default and the fallback string now omit the trailing slash. This prevents accidental double slashes when constructing request paths and aligns the default with the expected endpoint formatting.
Changed the fallback LLM service URL from `http://localhost:8081/v1` to `http://localhost:8081/`. This aligns the default endpoint with the updated API that no longer requires the `/v1` path, ensuring the application connects correctly when no custom configuration is provided.
- Reordered imports for clarity (chrono and tokio::time::Instant).
- Fixed comment indentation around compact automation note.
- Refactored session history retrieval to acquire the mutex only briefly, then process compacted-message skipping and history limiting outside the lock (see the sketch after this list).
- Added explanatory comments for the new lock handling logic.
- Cleaned up token progress calculation and display formatting, improving readability of GPU/CPU/TOKENS bars.
- Minor formatting adjustments throughout the file.
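A sketch of the lock-handling pattern referenced above (clone under the lock, drop the guard, then filter and limit lock-free):

```rust
use std::sync::Mutex;

// Sketch: hold the history mutex only long enough to clone a snapshot;
// the temporary guard drops at the end of the first statement.
fn limited_history(history: &Mutex<Vec<String>>, limit: usize) -> Vec<String> {
    let snapshot = history.lock().unwrap().clone();

    // Heavy processing (e.g. skipping compacted messages, limiting)
    // happens outside the lock.
    let start = snapshot.len().saturating_sub(limit);
    snapshot[start..].to_vec()
}
```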
Updated the 6.0.4 migration to use `http://localhost:8081/v1` for the default OpenAI model configurations (gpt‑4 and gpt‑3.5‑turbo) and the local embed service. Adjusted `OpenAIClient` to default to the same localhost base URL instead of the production OpenAI API.
Reorganized imports and module ordering in `src/main.rs` (moved `mod llm`, `mod nvidia`, and `BotOrchestrator` import), cleaned up formatting, and removed unused imports. These changes streamline development by directing LLM calls to a local server and improve code readability.
Removed the conversation history loading logic in `BotOrchestrator` and replaced it with a placeholder string, commenting out related prompt construction and tracing. This change streamlines prompt generation while debugging and prevents unnecessary history processing.
In the local LLM server setup, eliminated the `llm-server-ctx-size` configuration and its corresponding command‑line argument, as the context size parameter is no longer required. This simplifies server initialization and avoids passing an unused flag.
Adjusted the command strings used to start the LLM and embedding servers on both Windows and Unix.
- Replaced the previous log redirection `../../../../logs/llm/stdout.log` with simpler local files (`llm-stdout.log` and `stdout.log`).
- Updated both normal and embedding server launch commands to use the new paths.
This change simplifies log management, ensures logs are correctly written regardless of the working directory, and resolves issues where the previous relative path could be invalid or inaccessible.
- Updated `execute_compact_prompt` to accept an `Arc<AppState>` instead of creating a new default state, enabling proper state sharing across tasks.
- Adjusted bot orchestration to clone and pass the existing `AppState` to the automation task, ensuring the same connection and configuration are used.
- Removed the `Default` implementation for `AppState`, preventing accidental creation of a default state with hard‑coded DB connections and services.
- Modified `BotOrchestrator::default` to panic, enforcing explicit construction via `BotOrchestrator::new(state)` for clearer dependency injection.
These changes improve testability, avoid hidden side‑effects from default state initialization, and ensure consistent use of the application state throughout the system.
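A sketch of the sharing pattern (the `AppState` fields and `run_automation` are illustrative):

```rust
use std::sync::Arc;

// Illustrative state; the real struct holds the DB pool, config, and services.
struct AppState {
    bot_name: String,
}

// Sketch: clone the Arc (a cheap pointer copy) into the spawned task so
// every task shares the same connections and configuration.
fn spawn_automation(state: Arc<AppState>) {
    let task_state = Arc::clone(&state);
    tokio::spawn(async move {
        run_automation(task_state).await;
    });
}

async fn run_automation(state: Arc<AppState>) {
    println!("running automation for {}", state.bot_name);
}
```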
Add logic to save user messages to session history for better traceability and context continuity. Simplify session creation error handling and remove redundant warning on closed response channel. Update README with guidance on maintaining production-ready source code.