- Removed unused `id` and `app_state` fields from `ChatPanel`, reducing its memory footprint; the constructor still accepts the state but ignores it.
- Switched database access in `ChatPanel` from a raw `Mutex` lock to a connection pool (`app_state.conn.get()`), improving concurrency and error handling (see the sketch after this list).
- Reordered and cleaned up imports in `status_panel.rs` and formatted struct fields for readability.
- Updated VS Code launch configuration to pass `--noui` argument, enabling headless mode for debugging.
- Bumped several crate versions in `Cargo.lock` (e.g., `bitflags` to 2.10.0, `syn` to 2.0.108, `cookie` to 0.16.2) and added the new `ashpd` dependency, aligning the project with latest library releases.
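A minimal sketch of the pool-based pattern, assuming `app_state.conn` is an r2d2/Diesel pool; the `AppState` shape, function name, and SQLite backend are illustrative, not the project's exact types:

```rust
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;

// Hypothetical shape of the shared state; field and type names are illustrative.
struct AppState {
    conn: Pool<ConnectionManager<SqliteConnection>>,
}

fn handle_request(state: &AppState) -> Result<(), Box<dyn std::error::Error>> {
    // Check out a pooled connection instead of locking a global Mutex:
    // concurrent callers get separate connections, and failures (pool
    // exhausted, timeout) propagate as errors instead of panicking.
    let _conn = state.conn.get()?;
    // ... run queries on `_conn` ...
    Ok(())
}
```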
Uncommented the bootstrap and package_manager directories in `add-req.sh` to include them in the build process. Refactored the bootstrap module for cleaner initialization and improved component-handling logic.
Update the LLM server command construction to include a new `--reasoning-format deepseek` argument, enabling explicit selection of the DeepSeek reasoning format. Replace the short `-ngl` flag with the more descriptive `--n-gpu-layers` to improve readability and consistency with other CLI options. This change enhances configurability for models requiring specific reasoning formats and clarifies GPU layer configuration.
Add `info!` statements that output the exact command used to launch the LLM server on both Windows and Unix platforms. This enhances observability and aids debugging by showing the constructed command line before the process is spawned.
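As a rough illustration of both changes, here is a sketch assuming a llama.cpp-style `llama-server` binary and the `tracing` crate; the binary name, model flag, and surrounding plumbing are assumptions:

```rust
use std::process::{Child, Command};
use tracing::info;

fn spawn_llm_server(model_path: &str, gpu_layers: u32) -> std::io::Result<Child> {
    let args = [
        "-m".to_string(),
        model_path.to_string(),
        // Long-form flag in place of the terser `-ngl`.
        "--n-gpu-layers".to_string(),
        gpu_layers.to_string(),
        "--reasoning-format".to_string(),
        "deepseek".to_string(),
    ];
    // Log the exact command line before spawning so a failed launch can
    // be reproduced by hand from the logs.
    info!("starting LLM server: llama-server {}", args.join(" "));
    Command::new("llama-server").args(&args).spawn()
}
```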
- Added retrieval of `llm-server-reasoning-format` configuration in `src/llm/local.rs`.
- When the config value is non‑empty, the server start command now includes `--reasoning-format <value>`.
- Updated argument construction to conditionally append the new flag (a minimal sketch follows this list).
- Cleaned up `src/automation/mod.rs` by removing an unused `std::sync::Arc` import, simplifying the module and eliminating a dead dependency.
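A minimal sketch of the conditional append, assuming the config lookup yields a possibly empty `String`; the function name is illustrative:

```rust
// Append --reasoning-format only when the operator actually configured
// one, leaving the command line untouched for models that don't need it.
fn push_reasoning_format(args: &mut Vec<String>, configured: &str) {
    if !configured.is_empty() {
        args.push("--reasoning-format".to_string());
        args.push(configured.to_string());
    }
}
```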
Updated the 6.0.4 migration to use `http://localhost:8081/v1` for the default OpenAI model configurations (`gpt-4` and `gpt-3.5-turbo`) and the local embed service. Adjusted `OpenAIClient` to default to the same localhost base URL instead of the production OpenAI API.
Reorganized imports and module ordering in `src/main.rs` (moved `mod llm`, `mod nvidia`, and `BotOrchestrator` import), cleaned up formatting, and removed unused imports. These changes streamline development by directing LLM calls to a local server and improve code readability.
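A sketch of the client default with a hypothetical constructor; the real struct has more fields:

```rust
struct OpenAIClient {
    base_url: String,
}

impl OpenAIClient {
    // Falls back to the local server rather than the production
    // https://api.openai.com/v1 endpoint; constructor shape is illustrative.
    fn new(base_url: Option<String>) -> Self {
        Self {
            base_url: base_url
                .unwrap_or_else(|| "http://localhost:8081/v1".to_string()),
        }
    }
}
```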
Removed the conversation history loading logic in `BotOrchestrator` and replaced it with a placeholder string, commenting out related prompt construction and tracing. This change streamlines prompt generation while debugging and prevents unnecessary history processing.
In the local LLM server setup, eliminated the `llm-server-ctx-size` configuration and its corresponding command‑line argument, as the context size parameter is no longer required. This simplifies server initialization and avoids passing an unused flag.
Adjusted the command strings used to start the LLM and embedding servers on both Windows and Unix.
- Replaced the previous log redirection `../../../../logs/llm/stdout.log` with simpler local files (`llm-stdout.log` and `stdout.log`).
- Updated both normal and embedding server launch commands to use the new paths.
This change simplifies log management and resolves failures caused by the deeply nested relative path, which could be invalid or inaccessible depending on the server's working directory.
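A sketch of the Unix variant, assuming the server is launched through a shell; the binary name and flags are placeholders:

```rust
// Redirect stdout/stderr to a file in the current working directory
// instead of a deep ../../../../ relative path that may not exist.
#[cfg(unix)]
fn llm_server_shell_command(model_path: &str) -> String {
    format!("llama-server -m {model_path} > llm-stdout.log 2>&1")
}
```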
- Added initial 30s delay to compact prompt scheduler
- Implemented async LLM summarization for conversation history
- Reduced lock contention by minimizing critical sections
- Added fallback to original text if summarization fails
- Updated README with guidance for failed requirements
- Added a new `summarize` method to the `LLMProvider` trait (see the sketch after this list)
- Improved session manager query with proper DSL usage
The changes optimize the prompt compaction process by:
1. Reducing lock contention through better resource management
2. Adding LLM-based summarization for better conversation compression
3. Making the system more resilient with proper error handling
4. Improving documentation for development practices
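A sketch of the trait addition and the fallback path, assuming an async trait via the `async_trait` crate; the exact signature and error type are assumptions:

```rust
use async_trait::async_trait;

#[async_trait]
pub trait LLMProvider: Send + Sync {
    /// Condense a block of conversation history into a shorter summary.
    async fn summarize(&self, text: &str) -> anyhow::Result<String>;
}

pub async fn compact_history(provider: &dyn LLMProvider, history: String) -> String {
    // If summarization fails, keep the original text so no conversation
    // history is ever lost to a flaky LLM call.
    match provider.summarize(&history).await {
        Ok(summary) => summary,
        Err(_) => history,
    }
}
```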
- Simplified auth module by removing unused imports and code
- Cleaned up shared models by removing unused structs (Organization, User, Bot, etc.)
- Updated add-req.sh to comment out unused directories
- Modified LLM fallback strategy in README with additional notes about model behaviors
The changes focus on removing unused code and improving documentation while maintaining existing functionality. The auth module was significantly reduced by removing redundant code, and similar cleanup was applied to shared models. The build script was adjusted to reflect currently used directories.
- Reformatted `update_session_context` call for better readability.
- Moved context‑change (type 4) handling to a later stage in the processing pipeline and removed the early return, so these messages now flow through the rest of the pipeline.
- Adjusted deduplication logic formatting and clarified condition.
- Restored saving of user messages after context handling, preserving message history.
- Added detailed logging of the LLM prompt for debugging.
- Simplified JSON extraction for `message_type` (a minimal sketch follows this list) and applied minor whitespace clean‑ups.
- Overall, the refactor improves maintainability and corrects the context‑change handling behavior.
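A minimal sketch of the extraction, assuming the LLM reply is a JSON object carrying a numeric `message_type` field; that shape is an assumption:

```rust
use serde_json::Value;

// Parse the reply and pull out message_type; returns None when the reply
// isn't valid JSON or the field is missing or non-numeric.
fn extract_message_type(response: &str) -> Option<i64> {
    let value: Value = serde_json::from_str(response).ok()?;
    value.get("message_type")?.as_i64()
}
```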
- Deduplicate consecutive messages with the same role in conversation history (see the sketch after this list)
- Add n_predict configuration option for LLM server
- Prevent duplicate message storage in session manager
- Update announcement schedule timing from 37 to 55 minutes
- Add default n_predict value in default bot config
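A sketch of the deduplication over a simplified message type; whether to keep the first or last message of a run is a policy choice, and the real struct surely has more fields:

```rust
struct ChatMessage {
    role: String,
    content: String,
}

fn dedup_consecutive_roles(history: &mut Vec<ChatMessage>) {
    // Vec::dedup_by removes the later element of each pair for which the
    // closure returns true, so each run of same-role messages collapses
    // to its first message.
    history.dedup_by(|a, b| a.role == b.role);
}
```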
- Refactor cron matching to use individual variables for each time component with additional debug logging
- Replace SETEX with an atomic SET NX EX for job locking in Redis (see the sketch after this list)
- Add better error handling and logging for job execution tracking
- Skip execution if Redis is unavailable or job is already held
- Add verbose flag to LLM server startup command for better logging
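A sketch of the lock acquisition with the `redis` crate; the key prefix and value are illustrative. Unlike SETEX, which unconditionally overwrites, SET with NX and EX sets the key and its TTL atomically only when the key does not already exist:

```rust
fn try_acquire_job_lock(conn: &mut redis::Connection, job_id: &str, ttl_secs: u64) -> bool {
    let reply: redis::RedisResult<Option<String>> = redis::cmd("SET")
        .arg(format!("job-lock:{job_id}"))
        .arg("1")
        .arg("NX")
        .arg("EX")
        .arg(ttl_secs)
        .query(conn);
    // "OK" means we newly set the key; Nil means another worker holds the
    // lock. On a Redis error, skip this run rather than risk running the
    // job twice.
    matches!(reply, Ok(Some(_)))
}
```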
- Added `trace!` logging in `bot_memory.rs` to record retrieved memory values for easier debugging.
- Refactored `BotOrchestrator` in `bot/mod.rs`:
- Removed duplicate session save block and consolidated message persistence.
- Replaced low‑level LLM streaming with a structured `UserMessage` and `stream_response` workflow, improving error handling and readability.
- Updated configuration loading in `config/mod.rs`:
- Imported `get_default_bot` and enhanced `get_config` to fall back to the default bot configuration when the primary query fails (a sketch follows this list).
- Established a fresh DB connection for the fallback path to avoid borrowing issues.
- Updated `BootstrapManager` to use `AppConfig::from_env().expect(...)` and `AppConfig::from_database(...).expect(...)` ensuring failures are explicit rather than silently ignored.
- Refactored error propagation in bootstrap flow to use `?` where appropriate, improving reliability of configuration loading.
- Added import of `llm_models` in `bot` module and introduced `ConfigManager` usage to fetch the LLM model identifier at runtime.
- Integrated dynamic LLM model handler selection via `llm_models::get_handler(&model)`.
- Replaced static environment‑variable retrieval for the embedding configuration with runtime configuration lookup.
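A compact sketch of the fallback flow; the stub types and queries exist only so the example stands alone, and the real code uses the project's own pool and config types:

```rust
use anyhow::Result;
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;

// Stand-ins so the sketch is self-contained; all hypothetical.
type DbPool = Pool<ConnectionManager<SqliteConnection>>;
struct BotConfig;
fn load_bot_config(_conn: &mut SqliteConnection, _bot: i64) -> Result<BotConfig> {
    anyhow::bail!("no config row")
}
fn get_default_bot(_conn: &mut SqliteConnection) -> Result<BotConfig> {
    Ok(BotConfig)
}

fn get_config(pool: &DbPool, bot_id: i64) -> Result<BotConfig> {
    let mut conn = pool.get()?;
    match load_bot_config(&mut conn, bot_id) {
        Ok(cfg) => Ok(cfg),
        // Fall back to the default bot, on a fresh connection so the
        // fallback query doesn't fight the borrow held by `conn`.
        Err(_) => get_default_bot(&mut pool.get()?),
    }
}
```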
Introduce a new `cancel_job` method in the `LLMProvider` trait to allow cancellation of ongoing LLM tasks. Implement no-op versions for OpenAI, Anthropic, and Mock providers. Update the WebSocket handler to invoke job cancellation when a session closes, ensuring better resource management and preventing orphaned tasks. Also, fix unused variable warning in `add_suggestion.rs`.
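A sketch of the trait hook, assuming `async_trait`; the commit adds explicit no-op impls per provider, but a default method (shown here) achieves the same effect. The session-close function is illustrative of where the WebSocket handler hooks in:

```rust
use async_trait::async_trait;

#[async_trait]
pub trait LLMProvider: Send + Sync {
    /// Cancel any in-flight generation for this session. Providers with
    /// nothing to cancel (OpenAI, Anthropic, Mock) can rely on a no-op.
    async fn cancel_job(&self, _session_id: &str) {}
}

// When a WebSocket session closes, cancel its outstanding LLM work so
// no orphaned task keeps streaming into a dead connection.
pub async fn on_session_close(provider: &dyn LLMProvider, session_id: &str) {
    provider.cancel_job(session_id).await;
}
```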
- Strip content up to the `final<|message|>` token in OpenAI responses (see the sketch after this list).
- Replace the text‑based connection‑status indicator with a small flashing circle.
- Simplify `updateConnectionStatus` to take only the status argument.
- Remove special handling of the initial assistant message and streamline empty‑state removal.
- Clean up stray blank lines in the announcement template.
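A sketch of the stripping, assuming the marker appears verbatim in the raw completion text; responses without the marker pass through unchanged:

```rust
// Drop everything up to and including "final<|message|>", keeping only
// the model's final answer; responses without the marker are untouched.
fn strip_to_final_message(raw: &str) -> &str {
    const MARKER: &str = "final<|message|>";
    match raw.find(MARKER) {
        Some(idx) => &raw[idx + MARKER.len()..],
        None => raw,
    }
}
```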