When llm-server=false in bot_configuration, the code now skips
attempting to start local llama-server processes. This prevents
the 60-attempt timeout error when using external LLM endpoints
or when local LLM serving is intentionally disabled.
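A minimal sketch of that guard, assuming a simple key-value view of `bot_configuration` (the helper name and logging text are illustrative, not the actual API):

```rust
use std::collections::HashMap;

// Sketch: decide whether to launch llama-server based on configuration.
// `config` stands in for the bot_configuration lookup; the "llm-server"
// key comes from the commit, everything else here is an assumption.
fn should_start_llm_server(config: &HashMap<String, String>) -> bool {
    match config.get("llm-server").map(String::as_str) {
        Some("false") => {
            eprintln!("llm-server=false; skipping local llama-server startup");
            false // no spawn, so no 60-attempt health-check timeout
        }
        _ => true, // default: start the local server as before
    }
}
```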
The commit adds a complete example environment configuration file
documenting all available settings for BotServer, including logging,
database, server, drive, LLM, Redis, email, and feature flags.
Also removes hardcoded environment variables throughout the
codebase, replacing them with configuration via config.csv or
appropriate defaults (a sketch of the pattern follows this list). This includes:
- WhatsApp, Teams, Instagram adapter configurations
- Weather API key handling
- Email and directory service configurations
- Console feature conditionally compiles monitoring code
- Improved logging configuration with library suppression
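As referenced above, the replacement pattern is roughly the following; the `config_or_default` helper and the `weather-api-key` key are hypothetical stand-ins for the config.csv lookup:

```rust
use std::collections::HashMap;

// Illustrative only: resolve a setting from the parsed config.csv with a
// fallback default, instead of reading a hardcoded environment variable.
fn config_or_default(config: &HashMap<String, String>, key: &str, default: &str) -> String {
    config.get(key).cloned().unwrap_or_else(|| default.to_string())
}

// e.g. replacing `std::env::var("WEATHER_API_KEY")` (key name hypothetical):
// let weather_key = config_or_default(&config, "weather-api-key", "");
```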
Added actix-files and its dependencies (http-range, mime_guess, unicase, v_htmlescape) to enable static file serving in the botserver. This allows serving static assets through the web server. The change includes all required transitive dependencies for proper file handling and MIME type detection.
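A minimal sketch of how actix-files is typically wired up; the `/static` mount point, `./static` directory, and bind address are assumptions, not the botserver's actual paths:

```rust
use actix_files::Files;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().service(
            // Serves files from ./static under the /static URL prefix;
            // MIME types are detected via mime_guess.
            Files::new("/static", "./static").use_last_modified(true),
        )
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```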
Removed commented-out code for deprecated LLM server arguments (n_moe, parallel, cont_batching, etc.) since these are no longer used. Also cleaned up the model arguments string by removing --jinja and --flash-attn flags which were moved to TODO comments for future config implementation. The change simplifies the server startup code while maintaining core functionality.
Added the --jinja flag to the LLM server startup arguments to enable Jinja template support. This allows for more flexible prompt formatting when using the local LLM server. The change maintains all existing functionality while adding the new feature.
Removed the redundant `--verbose` flag from the Windows command. Standardized log file names to `llm-stdout.log` and `llmembd-stdout.log` for consistency across platforms. This makes log management simpler and more predictable.
Added the `--flash-attn on` flag to the LLM server startup arguments to enable flash attention optimization. This improves performance while maintaining existing parameters (top_p, temp, repeat-penalty). A TODO was added to move these parameters to config for better maintainability.
Renamed the parameter from 'n-ctx-size' to 'ctx-size' in both the config lookup and argument formatting. This aligns with the naming convention used elsewhere in the codebase and makes the parameter name more concise. The functionality remains unchanged.
Changed the config key 'llm-server-n_ctx_size' to 'llm-server-n-ctx-size' in local.rs to maintain consistent hyphen-separated naming convention across configuration parameters. This improves code readability and aligns with existing naming patterns.
Added support for configuring the context window size (n_ctx_size) when starting the local LLM server. The parameter is read from config with a default value of 4096 if not specified. This allows for better control over the model's memory usage and performance characteristics.
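A sketch of that behavior, assuming a key-value config view; the `llm-server-ctx-size` key reflects the hyphen-separated naming adopted in the entries above and may differ from the exact key used:

```rust
use std::collections::HashMap;

// Sketch: read the context window size from config, defaulting to 4096,
// and append it to the llama-server command string.
fn push_ctx_size_arg(config: &HashMap<String, String>, cmd: &mut String) {
    let ctx_size: u32 = config
        .get("llm-server-ctx-size") // key name is an assumption
        .and_then(|v| v.parse().ok())
        .unwrap_or(4096); // default when unset or unparsable
    cmd.push_str(&format!(" --ctx-size {ctx_size}"));
}
```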
- Remove trace logs in compact_prompt.rs that were cluttering logs without adding value
- Simplify LLM server args in local.rs by removing redundant --reasoning-format parameter
- Add ID to float menu div in index.html for better DOM targeting
- Clean up code by removing unnecessary debug logging while maintaining functionality
Added the diesel_migrations crate (v2.3.0) to enable database migration functionality (typical usage is sketched after the list below). Updated Cargo.toml and Cargo.lock to include the new dependency along with its required sub-dependencies (migrations_internals and migrations_macros). Also made minor cleanups in the codebase:
- Removed unused UI code from platform README
- Cleaned up LLM server initialization code
- Added additional build dependencies in documentation
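As noted above, typical diesel_migrations 2.x usage looks like this; the `migrations` directory path is the crate's conventional default and may differ in this repository:

```rust
use diesel::pg::PgConnection;
use diesel_migrations::{embed_migrations, EmbeddedMigrations, MigrationHarness};

// Compiles the SQL files under ./migrations into the binary.
pub const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");

fn run_migrations(
    conn: &mut PgConnection,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Applies any migrations not yet recorded in __diesel_schema_migrations.
    conn.run_pending_migrations(MIGRATIONS)?;
    Ok(())
}
```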
Include model parameter in LLM provider calls across automation, bot, and keyword modules to ensure correct model selection based on configuration. This improves flexibility and consistency in LLM usage.
Removed the legacy TABLES_SERVER environment variable check and related database connection logic. Simplified the bootstrap process to always generate new credentials and write them to the .env file. Also updated the drive monitor log message to use "Drive" instead of "S3" for consistency. #464
Refactored the `compact_prompt_for_bots` function to use structured JSON messages instead of plain text formatting. Removed the unused `execute_compact_prompt` method and related code from the automation service, as the functionality is now handled elsewhere. The changes include:
- Using serde_json to structure messages for LLM
- Improved error handling and fallback mechanism
- Cleaned up obsolete compact prompt execution code
- Renamed `execute_compact_prompt` to `compact_prompt_for_bots` and simplified logic
- Removed redundant comments and empty lines in test files
- Consolidated prompt compaction threshold handling
- Cleaned up UI logging implementation by removing unnecessary whitespace
- Improved code organization in ui_tree module
The changes focus on code quality improvements, removing clutter, and making the prompt compaction logic more straightforward. Test files were cleaned up to be more concise.
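A sketch of the structured-message approach, assuming the common chat-completion shape (`role`/`content` fields); the function name and message schema here are illustrative, not the actual code:

```rust
use serde_json::json;

// Structures conversation history as JSON messages for the LLM instead of
// concatenating plain text; the (role, content) tuple is a simplification.
fn build_compact_request(history: &[(String, String)], system: &str) -> serde_json::Value {
    let mut messages = vec![json!({ "role": "system", "content": system })];
    for (role, content) in history {
        messages.push(json!({ "role": role, "content": content }));
    }
    json!({ "messages": messages })
}
```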
- Removed unused `id` and `app_state` fields from `ChatPanel`; updated the constructor to accept but ignore the state, reducing memory footprint.
- Switched database access in `ChatPanel` from a raw `Mutex` lock to a connection pool (`app_state.conn.get()`), improving concurrency and error handling (see the sketch after this list).
- Reordered and cleaned up imports in `status_panel.rs` and formatted struct fields for readability.
- Updated VS Code launch configuration to pass `--noui` argument, enabling headless mode for debugging.
- Bumped several crate versions in `Cargo.lock` (e.g., `bitflags` to 2.10.0, `syn` to 2.0.108, `cookie` to 0.16.2) and added the new `ashpd` dependency, aligning the project with latest library releases.
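As referenced in the list, the pooled-access pattern looks roughly like this, assuming a diesel r2d2 pool; the function name and surrounding types are illustrative, not `ChatPanel`'s actual definitions:

```rust
use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool};

type DbPool = Pool<ConnectionManager<PgConnection>>;

fn with_pooled_connection(pool: &DbPool) -> Result<(), Box<dyn std::error::Error>> {
    // Each caller checks out its own connection instead of contending on a
    // single Mutex-guarded one, and pool errors surface as Results.
    let mut _conn = pool.get()?;
    // ... run diesel queries with `&mut *_conn` here ...
    Ok(())
}
```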
Uncommented the bootstrap and package_manager directories in add-req.sh to include them in the build process. Refactored the bootstrap module for cleaner initialization and improved component handling logic.
Update the LLM server command construction to include a new `--reasoning-format deepseek` argument, enabling explicit selection of the DeepSeek reasoning format. Replace the short `-ngl` flag with the more descriptive `--n-gpu-layers` to improve readability and consistency with other CLI options. This change enhances configurability for models requiring specific reasoning formats and clarifies GPU layer configuration.
Add `info!` statements that output the exact command used to launch the LLM server on both Windows and Unix platforms. This enhances observability and aids debugging by showing the constructed command line before the process is spawned.
- Added retrieval of `llm-server-reasoning-format` configuration in `src/llm/local.rs`.
- When the config value is non‑empty, the server start command now includes `--reasoning-format <value>`.
- Updated argument construction to conditionally append the new flag (sketched below).
- Cleaned up `src/automation/mod.rs` by removing an unused `std::sync::Arc` import, simplifying the module and eliminating a dead dependency.
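The conditional append, sketched per the note above; the `append_reasoning_format` helper and command-string style are assumptions about the code in `src/llm/local.rs`:

```rust
// Only a non-empty config value adds the flag, e.g.
// " --reasoning-format deepseek"; an empty value leaves the command unchanged.
fn append_reasoning_format(cmd: &mut String, config_value: &str) {
    if !config_value.trim().is_empty() {
        cmd.push_str(&format!(" --reasoning-format {}", config_value.trim()));
    }
}
```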
Removed the conversation history loading logic in `BotOrchestrator` and replaced it with a placeholder string, commenting out related prompt construction and tracing. This change streamlines prompt generation while debugging and prevents unnecessary history processing.
In the local LLM server setup, eliminated the `llm-server-ctx-size` configuration and its corresponding command‑line argument, as the context size parameter is no longer required. This simplifies server initialization and avoids passing an unused flag.
Adjusted the command strings used to start the LLM and embedding servers on both Windows and Unix.
- Replaced the previous log redirection `../../../../logs/llm/stdout.log` with simpler local files (`llm-stdout.log` and `stdout.log`).
- Updated both normal and embedding server launch commands to use the new paths.
This change simplifies log management, ensures logs are correctly written regardless of the working directory, and resolves issues where the previous relative path could be invalid or inaccessible.
- Deduplicate consecutive messages with the same role in conversation history (see the sketch after this list)
- Add n_predict configuration option for LLM server
- Prevent duplicate message storage in session manager
- Update announcement schedule timing from 37 to 55 minutes
- Add default n_predict value in default bot config
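One plausible reading of the deduplication item above, treating two consecutive entries as duplicates when both role and content match; the `(role, content)` tuple is a simplification of the real message type:

```rust
// Vec::dedup_by drops an element equal to its predecessor; here "equal"
// is read as same role and same content, so only back-to-back repeats go.
fn dedup_consecutive(history: &mut Vec<(String, String)>) {
    history.dedup_by(|a, b| a.0 == b.0 && a.1 == b.1);
}
```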
- Refactor cron matching to use individual variables for each time component with additional debug logging
- Replace SETEX with atomic SET NX EX for job locking in Redis (sketched after this list)
- Add better error handling and logging for job execution tracking
- Skip execution if Redis is unavailable or job is already held
- Add verbose flag to LLM server startup command for better logging
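A sketch of the atomic locking pattern named above, using the redis crate's low-level command builder; the key format (`job-lock:<id>`), lock value, and 60-second TTL are illustrative:

```rust
// SET <key> <val> NX EX <ttl> atomically creates the lock only if the key
// does not already exist, so exactly one worker wins; the reply is nil
// when another worker already holds it.
fn try_acquire_job_lock(
    conn: &mut redis::Connection,
    job_id: &str,
) -> redis::RedisResult<bool> {
    let acquired: Option<String> = redis::cmd("SET")
        .arg(format!("job-lock:{job_id}")) // key format is an assumption
        .arg("held")
        .arg("NX")
        .arg("EX")
        .arg(60) // lock expires even if the worker dies mid-job
        .query(conn)?;
    Ok(acquired.is_some())
}
```

Unlike SETEX, which unconditionally overwrites the key, SET NX EX makes acquisition and expiry a single atomic step.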
- Added `trace!` logging in `bot_memory.rs` to record retrieved memory values for easier debugging.
- Refactored `BotOrchestrator` in `bot/mod.rs`:
- Removed duplicate session save block and consolidated message persistence.
- Replaced low‑level LLM streaming with a structured `UserMessage` and `stream_response` workflow, improving error handling and readability.
- Updated configuration loading in `config/mod.rs`:
- Imported `get_default_bot` and enhanced `get_config` to fall back to the default bot configuration when the primary query fails (a sketch follows below).
- Established a fresh DB connection for the fallback path to avoid borrowing issues.
- Updated `BootstrapManager` to use `AppConfig::from_env().expect(...)` and `AppConfig::from_database(...).expect(...)` ensuring failures are explicit rather than silently ignored.
- Refactored error propagation in bootstrap flow to use `?` where appropriate, improving reliability of configuration loading.
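A toy model of the configuration fallback described above; maps stand in for the database queries, and the comment notes the fresh-connection detail that the toy cannot show:

```rust
use std::collections::HashMap;

type BotConfig = HashMap<String, String>;

// Toy stand-in: the real code queries the database and opens a fresh DB
// connection for the fallback path to avoid borrowing issues.
fn get_config(primary: &BotConfig, default_bot: &BotConfig, key: &str) -> Option<String> {
    primary
        .get(key)
        .or_else(|| default_bot.get(key)) // fall back to the default bot
        .cloned()
}
```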
- Added import of `llm_models` in `bot` module and introduced `ConfigManager` usage to fetch the LLM model identifier at runtime.
- Integrated dynamic LLM model handler selection via `llm_models::get_handler(&model)`.
- Replaced static environment variable retrieval for embedding configuration with runtime