Revamps editor.css to introduce a modern 3D bevel-inspired visual theme, adds structured variables, and implements new UI components including a title bar, ribbon tabs, and a quick access toolbar for improved usability and maintainability.
Refactored editor.page.html to use a Vue-style `data()` function for reactive state, adding a new `content` property and cleaning up redundant inline styles. Updated profile-form.html to replace single `error` handling with field-specific `errors.<field>` bindings, improving form validation clarity and user feedback.
Update dashboard CSS to use new color scheme matching visual identity, replacing CSS variables with specific color values. Improved button hover state with background transition instead of opacity.
Expanded layout.js with additional application sections including dashboard, editor, player, and settings to support new navigation structure.
Added new navigation links for Dashboard, Editor, Player, Paper, Settings, Tables, and News sections. Each link includes click handlers to switch sections and active state styling. This expands the application's navigation options for better user access to different features.
Added actix-files and its dependencies (http-range, mime_guess, unicase, v_htmlescape) to enable static file functionality in the botserver. This will allow serving static assets and files through the web server. The change includes all required transitive dependencies for proper file handling and MIME type detection.
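As a rough sketch of how actix-files is typically wired in (the mount point, directory, and port here are assumptions, not the botserver's actual routes):

```rust
// Minimal sketch of static file serving with actix-files; paths are illustrative.
use actix_files::Files;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // Serve assets from ./web under /static; MIME types are guessed via mime_guess.
            .service(Files::new("/static", "./web").index_file("index.html"))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```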
- Consolidated CSS and JS assets by moving them to local files (app.css, gsap.min.js, marked.min.js)
- Removed livekit-client CDN dependency as it appears unused
- Moved navbar logic to separate layout.js file for better organization
- Changed navigation links to use hash-based routing (#chat, #drive, etc)
- Removed redundant navbar template fetching in favor of static inclusion
- Simplified HTML structure by removing commented code and redundant elements
These changes improve maintainability and performance by reducing external dependencies and better organizing frontend assets.
- Added HTTP server with CORS support and various endpoints
- Introduced http_tx/http_rx channels for HTTP server control
- Cleaned up build.rs by removing commented code
- Updated .gitignore to use *.rdb pattern instead of .rdb
- Simplified capabilities.json to empty object
- Improved UI initialization with better error handling
- Reorganized module imports in main.rs
- Added worker count configuration for HTTP server
The changes introduce a new HTTP server capability while cleaning up and improving existing code structure. The HTTP server includes authentication, session management, and websocket support.
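A minimal sketch of that pattern, assuming actix-web with actix-cors and a tokio mpsc channel for server control (route, port, and worker count are placeholders):

```rust
// Sketch of an HTTP server with permissive CORS and an http_tx/http_rx-style control channel.
use actix_cors::Cors;
use actix_web::{web, App, HttpResponse, HttpServer};
use tokio::sync::mpsc;

async fn health() -> HttpResponse {
    HttpResponse::Ok().body("ok")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Control channel: sending a message asks the server to shut down.
    let (_http_tx, mut http_rx) = mpsc::channel::<()>(1);

    let server = HttpServer::new(|| {
        App::new()
            .wrap(Cors::permissive())
            .route("/health", web::get().to(health))
    })
    .workers(4) // worker count configuration
    .bind(("127.0.0.1", 8080))?
    .run();

    let handle = server.handle();
    tokio::spawn(async move {
        // Stop the server gracefully when a control message arrives.
        if http_rx.recv().await.is_some() {
            handle.stop(true).await;
        }
    });

    server.await
}
```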
Changed the default webview URL from 'tables.html' to 'index.html' in main.rs to reflect the new entry point. Updated the frontend distribution path in tauri.conf.json from './web/desktop' to './web/html' to better represent the directory structure. These changes align with recent frontend reorganization.
The path to tables.html in WebviewWindowBuilder was incorrectly set to "../web/desktop/tables.html". This was fixed to use the correct relative path "tables.html" to ensure the webview loads the file from the proper location. The change maintains the same functionality while using the correct path structure.
- Changed default feature to include 'desktop' in Cargo.toml
- Replaced --noui flag with --desktop flag in launch.json
- Added Tauri desktop mode implementation in main.rs
- Simplified command line argument handling
- Cleaned up code formatting in main.rs
The changes introduce a new mode for running the application as a desktop app using Tauri framework, while maintaining the existing server functionality. The desktop mode loads a webview window with a specific HTML interface.
Removed commented-out code for deprecated LLM server arguments (n_moe, parallel, cont_batching, etc.) since these are no longer used. Also cleaned up the model arguments string by removing --jinja and --flash-attn flags which were moved to TODO comments for future config implementation. The change simplifies the server startup code while maintaining core functionality.
- Bump version from 6.0.7 to 6.0.8 in Cargo.toml and Cargo.lock
- Refactor desktop feature to use explicit dependency syntax
- Remove outdated open source tools list from README-6.md
- Changed DOCTYPE to lowercase for HTML5 compliance
- Removed redundant CSS and JavaScript code
- Simplified theme variables and styling
- Improved message processing logic
- Added better event management
- Streamlined UI components
The changes focus on code cleanliness, performance improvements, and maintainability while preserving all functionality. The HTML structure is now more semantic and follows modern web standards.
Added the --jinja flag to the LLM server startup arguments to enable Jinja template support. This allows for more flexible prompt formatting when using the local LLM server. The change maintains all existing functionality while adding the new feature.
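For illustration, appending the flag when the command is built might look like this (the `llama-server` binary name and surrounding arguments are assumptions):

```rust
// Sketch of adding --jinja to the LLM server invocation.
use std::process::Command;

fn build_llm_server_command(model_path: &str) -> Command {
    let mut cmd = Command::new("llama-server");
    cmd.arg("-m")
        .arg(model_path)
        // Enable Jinja chat-template support for more flexible prompt formatting.
        .arg("--jinja");
    cmd
}
```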
Removed the redundant `--verbose` flag from Windows command since it's not needed. Standardized log file names to `llm-stdout.log` and `llmembd-stdout.log` for consistency across platforms. This makes log management simpler and more predictable.
Added the `--flash-attn on` flag to the LLM server startup arguments to enable flash attention optimization. This improves performance while maintaining existing parameters (top_p, temp, repeat-penalty). A TODO was added to move these parameters to config for better maintainability.
Updated the parameter name from 'n-ctx-size' to 'ctx-size' in both config lookup and argument formatting for consistency. This change aligns with the naming convention used elsewhere in the codebase and makes the parameter name more concise while maintaining clarity. The functionality remains unchanged.
Changed the config key 'llm-server-n_ctx_size' to 'llm-server-n-ctx-size' in local.rs to maintain consistent hyphen-separated naming convention across configuration parameters. This improves code readability and aligns with existing naming patterns.
Added support for configuring the context window size (n_ctx_size) when starting the local LLM server. The parameter is read from config with a default value of 4096 if not specified. This allows for better control over the model's memory usage and performance characteristics.
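A hedged sketch of that lookup, where `get_config` stands in for the project's configuration accessor:

```rust
// Read the context window size from config, defaulting to 4096 when absent or invalid.
fn ctx_size_arg(get_config: impl Fn(&str) -> Option<String>) -> String {
    let ctx_size = get_config("llm-server-n-ctx-size")
        .and_then(|v| v.parse::<u32>().ok())
        .unwrap_or(4096);
    format!("--ctx-size {ctx_size}")
}
```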
The check_gbot function call in DriveMonitor's run method has been commented out with a TODO note, indicating it's deprecated and should be removed. This is likely part of cleaning up unused or outdated functionality while keeping the codebase functional. The gbdialog changes check remains active.
- Remove trace logs in compact_prompt.rs that were cluttering logs without adding value
- Simplify LLM server args in local.rs by removing redundant --reasoning-format parameter
- Add ID to float menu div in index.html for better DOM targeting
- Clean up code by removing unnecessary debug logging while maintaining functionality
- Increased schedule field size from bpchar(12) to bpchar(20) in database schema
- Reduced task checking interval from 60s to 5s for more responsive automation (see the sketch after this list)
- Improved error handling for schedule parsing and execution
- Added proper error logging for automation failures
- Changed automation execution to use bot_id instead of nil UUID
- Enhanced HEAR keyword functionality (partial diff shown)
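A minimal sketch of the tighter polling loop, assuming a tokio-based scheduler (the function name and surrounding logic are illustrative):

```rust
// Poll for due automation tasks every 5 seconds instead of every 60.
use std::time::Duration;

async fn run_scheduler(check_due_tasks: impl Fn()) {
    let mut ticker = tokio::time::interval(Duration::from_secs(5));
    loop {
        ticker.tick().await;
        check_due_tasks();
    }
}
```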
The change adds `arg("true")` to the shell command to prevent executing an empty shell command when a component is already running. This ensures a valid command is always passed to the shell, avoiding potential issues with empty command execution.
The trace log for successful component installation was removed as it was deemed unnecessary. The success of the installation is already indicated by the Ok(()) return value, making the log redundant. This change simplifies the code while maintaining the same functionality.
Fix incorrect variable reference in package manager installer. The code was using `C&component.env_vars` instead of `&component.env_vars` when iterating through environment variables. This would cause compilation errors. The fix properly references the component's env_vars field when evaluating environment variable references.
Added diesel_migrations crate (v2.3.0) to enable database migration functionality. Updated Cargo.toml and Cargo.lock to include the new dependency along with its required sub-dependencies (migrations_internals and migrations_macros). Also made minor cleanups in the codebase:
- Removed unused UI code from platform README
- Cleaned up LLM server initialization code
- Added additional build dependencies in documentation
Remove hardcoded DRIVE_ACCESSKEY/SECRET env vars and replace with variable references ($DRIVE_USER, $DRIVE_ACCESSKEY). Added logic to evaluate environment variable references in command execution by expanding $VAR references to their actual values from the environment. This makes the configuration more flexible and secure by avoiding hardcoded credentials.
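A simplified sketch of that expansion step (the helper name and token handling are assumptions; real code would also need to handle `$VAR` occurrences embedded inside longer strings):

```rust
// Expand $DRIVE_ACCESSKEY-style references to their values from the process environment.
use std::env;

fn expand_env_refs(arg: &str) -> String {
    if let Some(name) = arg.strip_prefix('$') {
        env::var(name).unwrap_or_default()
    } else {
        arg.to_string()
    }
}

fn expand_all(args: &[&str]) -> Vec<String> {
    args.iter().map(|a| expand_env_refs(a)).collect()
}
```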
Retrieve the `llm-key` from configuration and pass it to LLM provider methods for secure authentication. Also refine role naming in compact prompts and remove an unused logging import.
Include model parameter in LLM provider calls across automation, bot, and keyword modules to ensure correct model selection based on configuration. This improves flexibility and consistency in LLM usage.
Removed the legacy TABLES_SERVER environment variable check and related database connection logic. Simplified the bootstrap process to always generate new credentials and write them to .env file. Also updated drive monitor log message to use "Drive" instead of "S3" for consistency. #464
Refactored the prompt construction in compact_prompt.rs to use a single formatted string instead of multiple JSON messages. The conversation is now built as a single string with clear formatting markers, and the role names are more readable (User/Bot instead of user/bot). Also removed a trailing slash from the OpenAI API endpoint URL in llm/mod.rs for consistency.
The changes improve readability of the prompt structure and ensure consistent API endpoint formatting. The summarization request is more clearly formatted for the LLM while maintaining the same functionality.
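A minimal sketch of the single-string format described above (the exact markers and wording used by compact_prompt.rs may differ):

```rust
// Build the summarization prompt as one formatted string with User/Bot markers.
struct Turn {
    from_user: bool,
    text: String,
}

fn build_summary_prompt(turns: &[Turn]) -> String {
    let mut prompt = String::from("Summarize the following conversation:\n\n");
    for turn in turns {
        let role = if turn.from_user { "User" } else { "Bot" };
        prompt.push_str(&format!("{role}: {}\n", turn.text));
    }
    prompt
}
```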
- Removed unused token parameters from get_system_metrics function
- Simplified metrics collection in BotOrchestrator by removing initial token check
- Improved StatusPanel by:
  - Removing 1-second update throttle
  - Refreshing CPU usage more efficiently
  - Separating metrics collection from rendering
  - Using direct CPU measurement from sysinfo
- Cleaned up unused imports and improved code organization
The changes make the system monitoring more straightforward and efficient while maintaining all functionality.
Refactored the compact_prompt_for_bots function to use structured JSON messages instead of plain text formatting. Removed unused execute_compact_prompt method and related code from automation service as the functionality is now handled elsewhere. The changes include:
- Using serde_json to structure messages for LLM (see the sketch after this list)
- Improved error handling and fallback mechanism
- Cleaned up obsolete compact prompt execution code
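A hedged sketch of the structured-message approach, using serde_json (the role names and wrapper shape are assumptions):

```rust
// Build the compaction request as JSON chat messages rather than plain text.
use serde_json::json;

fn build_compact_messages(history: &str) -> serde_json::Value {
    json!([
        { "role": "system", "content": "Summarize the conversation so far." },
        { "role": "user", "content": history }
    ])
}
```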
Added parse_messages method to handle structured prompt input for OpenAI API. The method converts human/bot/compact prefixes to appropriate OpenAI roles (user/assistant/system) and properly formats multi-line messages. This enables more complex conversation structures in prompts while maintaining compatibility with the OpenAI API format.
Removed the direct prompt-to-message conversion in generate and generate_stream methods, replacing it with the new parse_messages utility. Also reorganized the impl blocks for better code organization.
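An illustrative sketch of such a parser; the `human:`/`bot:`/`compact:` prefixes and role mapping follow the description above, while the function body itself is an assumption:

```rust
// Convert prefixed prompt lines into OpenAI-style chat messages.
use serde_json::{json, Value};

fn parse_messages(prompt: &str) -> Vec<Value> {
    let mut messages = Vec::new();
    for line in prompt.lines() {
        let (role, content) = if let Some(rest) = line.strip_prefix("human:") {
            ("user", rest.trim())
        } else if let Some(rest) = line.strip_prefix("bot:") {
            ("assistant", rest.trim())
        } else if let Some(rest) = line.strip_prefix("compact:") {
            ("system", rest.trim())
        } else {
            // Unprefixed continuation lines are appended to the previous message.
            if let Some(last) = messages.last_mut() {
                if let Some(Value::String(c)) = last.get_mut("content") {
                    c.push('\n');
                    c.push_str(line);
                }
            }
            continue;
        };
        messages.push(json!({ "role": role, "content": content }));
    }
    messages
}
```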
- Remove unused imports and redundant session progress tracking
- Reorder session progress check to after initial validation
- Replace `summarize` with `generate` for LLM interaction
- Add more detailed logging for summarization process
- Improve error handling and fallback behavior
- Move session cleanup guard to end of processing
- Update log levels for better observability (trace -> info for key events)
The changes streamline the prompt compaction flow and improve reliability while maintaining the same core functionality.
- Renamed `execute_compact_prompt` to `compact_prompt_for_bots` and simplified logic
- Removed redundant comments and empty lines in test files
- Consolidated prompt compaction threshold handling
- Cleaned up UI logging implementation by removing unnecessary whitespace
- Improved code organization in ui_tree module
The changes focus on code quality improvements, removing clutter, and making the prompt compaction logic more straightforward. Test files were cleaned up to be more concise.
Modified compact_prompt_for_bot to only include the most recent N messages (messages_since_summary + 1) when building the compacted prompt string. This prevents excessive context from being included and improves performance by keeping the prompt sent to the LLM bounded in size.
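A small sketch of the windowing step (names are illustrative):

```rust
// Keep only the most recent (messages_since_summary + 1) history entries for compaction.
fn recent_window<T: Clone>(history: &[T], messages_since_summary: usize) -> Vec<T> {
    let keep = messages_since_summary + 1;
    let start = history.len().saturating_sub(keep);
    history[start..].to_vec()
}
```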
Added a trace-level log statement to output the constructed LLM prompt in BotOrchestrator. This helps with debugging by making the prompt content visible in logs when trace logging is enabled. The change maintains existing functionality while improving observability.
Refactor the compact prompt scheduler to use proper indentation and improve error logging. Added more detailed error messages for prompt compaction failures and included bot_id in error logs. The changes make the code more maintainable and debugging easier while maintaining the same functionality.
Added functionality to generate secure passwords for database and drive server credentials during bootstrap. Removed the PostgreSQL running check and auto-start logic as it's no longer needed. Renamed `create_s3_operator` to more descriptive `get_drive_client`. The bootstrap process now automatically sets up required environment variables in .env file including database URL and drive server credentials.
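A hedged sketch of the credential generation, assuming the `rand` crate (charset and length here are arbitrary choices, not the bootstrap's actual policy):

```rust
// Generate a random alphanumeric password for database/drive credentials.
use rand::Rng;

fn generate_password(len: usize) -> String {
    const CHARSET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    let mut rng = rand::thread_rng();
    (0..len)
        .map(|_| CHARSET[rng.gen_range(0..CHARSET.len())] as char)
        .collect()
}
```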
- Added 30-second timeout for S3 bucket listing operations in DriveMonitor (see the sketch after this list)
- Removed unused `use_ssl` flag from DriveConfig and cleaned up imports
- Improved error handling with proper logging for timeout scenarios
- Fixed syntax in AppConfig initialization (added missing commas)
- Added proper spacing between methods in BootstrapManager
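A minimal sketch of the 30-second guard, using tokio::time::timeout (the listing call is a placeholder for whatever client DriveMonitor actually uses):

```rust
// Wrap the bucket listing future in a 30-second timeout and log both failure modes.
use std::time::Duration;
use tokio::time::timeout;

async fn list_buckets_with_timeout<F, T, E>(list: F) -> Option<T>
where
    F: std::future::Future<Output = Result<T, E>>,
    E: std::fmt::Display,
{
    match timeout(Duration::from_secs(30), list).await {
        Ok(Ok(buckets)) => Some(buckets),
        Ok(Err(e)) => {
            eprintln!("Drive bucket listing failed: {e}");
            None
        }
        Err(_) => {
            eprintln!("Drive bucket listing timed out after 30s");
            None
        }
    }
}
```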
- Removed unused `id` and `app_state` fields from `ChatPanel`; updated constructor to accept but ignore the state, reducing memory footprint.
- Switched database access in `ChatPanel` from a raw `Mutex` lock to a connection pool (`app_state.conn.get()`), improving concurrency and error handling.
- Reordered and cleaned up imports in `status_panel.rs` and formatted struct fields for readability.
- Updated VS Code launch configuration to pass `--noui` argument, enabling headless mode for debugging.
- Bumped several crate versions in `Cargo.lock` (e.g., `bitflags` to 2.10.0, `syn` to 2.0.108, `cookie` to 0.16.2) and added the new `ashpd` dependency, aligning the project with latest library releases.
Uncommented bootstrap and package_manager directories in add-req.sh to include them in build process. Refactored bootstrap module for cleaner initialization and improved component handling logic.
The warning log was removed from the error case in has_nvidia_gpu() function
as it was producing false positives. The function now silently returns false
when nvidia-smi is not available or no NVIDIA GPU is detected, which is the
expected behavior for the fallback case.
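A sketch of that quiet fallback (the exact detection command is an assumption):

```rust
// Probe nvidia-smi and return false on any failure, without emitting a warning.
use std::process::Command;

fn has_nvidia_gpu() -> bool {
    Command::new("nvidia-smi")
        .arg("-L")
        .output()
        .map(|out| out.status.success() && !out.stdout.is_empty())
        .unwrap_or(false)
}
```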
Extract progress bar rendering and warning message display from BotOrchestrator into a dedicated BotUI module. This improves code organization by separating UI concerns from core bot logic. The UI module handles both progress visualization with system metrics and warning message presentation, providing a cleaner interface for output operations.
Added new dependencies for desktop UI support including color-eyre, crossterm, and ratatui. Updated existing dependencies and modified Cargo.toml to include a new 'desktop' feature flag. Also cleaned up the contributors list and modified the add-req.sh script to focus on core bot functionality.
The desktop UI support enables better terminal-based interfaces while the dependency updates ensure compatibility and security. The script changes reflect a shift in focus areas for the project.
Update the LLM server command construction to include a new `--reasoning-format deepseek` argument, enabling explicit selection of the DeepSeek reasoning format. Replace the short `-ngl` flag with the more descriptive `--n-gpu-layers` to improve readability and consistency with other CLI options. This change enhances configurability for models requiring specific reasoning formats and clarifies GPU layer configuration.
Add `info!` statements that output the exact command used to launch the LLM server on both Windows and Unix platforms. This enhances observability and aids debugging by showing the constructed command line before the process is spawned.
- Added retrieval of `llm-server-reasoning-format` configuration in `src/llm/local.rs`.
- When the config value is non‑empty, the server start command now includes `--reasoning-format <value>`.
- Updated argument construction to conditionally append the new flag (see the sketch after this list).
- Cleaned up `src/automation/mod.rs` by removing an unused `std::sync::Arc` import, simplifying the module and eliminating a dead dependency.
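An illustrative sketch of the conditional flag, with `get_config` standing in for the project's configuration accessor:

```rust
// Append --reasoning-format <value> only when the config value is set and non-empty.
fn reasoning_format_args(get_config: impl Fn(&str) -> Option<String>) -> Vec<String> {
    match get_config("llm-server-reasoning-format") {
        Some(fmt) if !fmt.is_empty() => vec!["--reasoning-format".into(), fmt],
        _ => Vec::new(),
    }
}
```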