Refactored the compact_prompt_for_bots function to use structured JSON messages instead of plain text formatting. Removed the unused execute_compact_prompt method and related code from the automation service, as that functionality is now handled elsewhere. The changes include:
- Using `serde_json` to structure messages for the LLM (sketched below)
- Improved error handling and fallback mechanism
- Cleaned up obsolete compact prompt execution code
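A minimal sketch of what the structured payload might look like, assuming the conventional `role`/`content` chat shape; the function name and the system instruction text are illustrative, not the project's confirmed schema:

```rust
use serde_json::{json, Value};

// Hypothetical sketch: build the JSON message array that
// compact_prompt_for_bots sends to the LLM instead of a plain
// text blob. Field names follow the common chat-completion shape.
fn build_compact_messages(history: &str) -> Value {
    json!([
        { "role": "system", "content": "Summarize the conversation below, preserving key facts." },
        { "role": "user", "content": history }
    ])
}
```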
Added parse_messages method to handle structured prompt input for OpenAI API. The method converts human/bot/compact prefixes to appropriate OpenAI roles (user/assistant/system) and properly formats multi-line messages. This enables more complex conversation structures in prompts while maintaining compatibility with the OpenAI API format.
Removed the direct prompt-to-message conversion in the generate and generate_stream methods, replacing it with the new parse_messages utility. Also reorganized the impl blocks for readability.
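A sketch of the prefix-to-role mapping described above; the exact prefix spelling (`human:`, `bot:`, `compact:`) and the return type are assumptions based on the description:

```rust
// Illustrative sketch, not the actual implementation. Lines without
// a recognized prefix are folded into the previous message, which is
// how multi-line content survives the conversion.
fn parse_messages(prompt: &str) -> Vec<(String, String)> {
    let mut messages: Vec<(String, String)> = Vec::new();
    for line in prompt.lines() {
        let (role, text) = if let Some(rest) = line.strip_prefix("human:") {
            ("user", rest)
        } else if let Some(rest) = line.strip_prefix("bot:") {
            ("assistant", rest)
        } else if let Some(rest) = line.strip_prefix("compact:") {
            ("system", rest)
        } else if let Some(last) = messages.last_mut() {
            // Continuation line: append to the previous message body.
            last.1.push('\n');
            last.1.push_str(line);
            continue;
        } else {
            ("user", line)
        };
        messages.push((role.to_string(), text.trim_start().to_string()));
    }
    messages
}
```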
- Renamed `execute_compact_prompt` to `compact_prompt_for_bots` and simplified logic
- Removed redundant comments and empty lines in test files
- Consolidated prompt compaction threshold handling (see the sketch below)
- Cleaned up UI logging implementation by removing unnecessary whitespace
- Improved code organization in ui_tree module
The changes focus on code quality improvements, removing clutter, and making the prompt compaction logic more straightforward. Test files were cleaned up to be more concise.
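The consolidated threshold check might look like the following; the constant name and character-based counting are assumptions (the real code may count tokens instead):

```rust
// Hypothetical sketch of a single, shared compaction trigger.
const COMPACT_THRESHOLD_CHARS: usize = 8_000;

fn needs_compaction(history: &str) -> bool {
    history.len() > COMPACT_THRESHOLD_CHARS
}
```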
Updated the 6.0.4 migration to use `http://localhost:8081/v1` for the default OpenAI model configurations (gpt-4 and gpt-3.5-turbo) and the local embed service. Adjusted `OpenAIClient` to default to the same localhost base URL instead of the production OpenAI API.
Reorganized imports and module ordering in `src/main.rs` (moved `mod llm`, `mod nvidia`, and `BotOrchestrator` import), cleaned up formatting, and removed unused imports. These changes streamline development by directing LLM calls to a local server and improve code readability.
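A sketch of the defaulting behavior; the struct fields and constructor are illustrative, and only the localhost base URL comes from the change above:

```rust
// Hypothetical shape of the client; only the localhost default
// reflects the actual change described above.
struct OpenAIClient {
    base_url: String,
    api_key: String,
}

impl OpenAIClient {
    fn new(api_key: String, base_url: Option<String>) -> Self {
        Self {
            // Default to the local server rather than the production
            // OpenAI endpoint.
            base_url: base_url
                .unwrap_or_else(|| "http://localhost:8081/v1".to_string()),
            api_key,
        }
    }
}
```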
- Added initial 30s delay to compact prompt scheduler
- Implemented async LLM summarization for conversation history
- Reduced lock contention by minimizing critical sections
- Added fallback to original text if summarization fails
- Updated README with guidance for failed requirements
- Added new `summarize` method to `LLMProvider` trait (sketched after this list)
- Improved session manager query with proper DSL usage
The changes optimize the prompt compaction process by:
1. Reducing lock contention through better resource management
2. Adding LLM-based summarization for better conversation compression
3. Making the system more resilient with proper error handling
4. Improving documentation for development practices
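A sketch of the `summarize` hook and the fallback path (points 2 and 3 above), using the `async_trait` crate for illustration; the exact signature and error type are assumptions:

```rust
use async_trait::async_trait;

type BoxError = Box<dyn std::error::Error + Send + Sync>;

// Assumed shape of the new trait method.
#[async_trait]
trait LLMProvider: Send + Sync {
    async fn summarize(&self, text: &str) -> Result<String, BoxError>;
}

// If summarization fails, fall back to the original text so the
// compaction pass never loses conversation history.
async fn compact_history(provider: &dyn LLMProvider, history: String) -> String {
    match provider.summarize(&history).await {
        Ok(summary) => summary,
        Err(_) => history,
    }
}
```

The 30-second initial delay from the first bullet would sit at the top of the scheduler loop, e.g. `tokio::time::sleep(Duration::from_secs(30)).await`, so the first compaction pass never races service startup.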
- Simplified auth module by removing unused imports and code
- Cleaned up shared models by removing unused structs (Organization, User, Bot, etc.)
- Updated add-req.sh to comment out unused directories
- Updated the LLM fallback strategy section of the README with additional notes about model behaviors
The changes focus on removing unused code and improving documentation while maintaining existing functionality. The auth module was significantly reduced by removing redundant code, and similar cleanup was applied to shared models. The build script was adjusted to reflect currently used directories.
- Updated `BootstrapManager` to use `AppConfig::from_env().expect(...)` and `AppConfig::from_database(...).expect(...)`, ensuring failures are explicit rather than silently ignored.
- Refactored error propagation in bootstrap flow to use `?` where appropriate, improving reliability of configuration loading.
- Added import of `llm_models` in `bot` module and introduced `ConfigManager` usage to fetch the LLM model identifier at runtime.
- Integrated dynamic LLM model handler selection via `llm_models::get_handler(&model)` (see the sketch after this list).
- Replaced static environment variable retrieval for embedding configuration with runtime configuration lookup via `ConfigManager`.
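A self-contained sketch of the dynamic selection; only `llm_models::get_handler(&model)` is named in the source, while the handler trait, the model-matching logic, and the env-var stand-in are illustrative:

```rust
mod llm_models {
    pub trait Handler {
        fn name(&self) -> &'static str;
    }

    struct OpenAIHandler;
    impl Handler for OpenAIHandler {
        fn name(&self) -> &'static str { "openai" }
    }

    struct LocalHandler;
    impl Handler for LocalHandler {
        fn name(&self) -> &'static str { "local" }
    }

    // Choose a handler implementation from the configured model id.
    pub fn get_handler(model: &str) -> Box<dyn Handler> {
        if model.starts_with("gpt-") {
            Box::new(OpenAIHandler)
        } else {
            Box::new(LocalHandler)
        }
    }
}

fn main() {
    // In the real code the model id comes from ConfigManager at
    // runtime; an env var stands in for it here.
    let model = std::env::var("LLM_MODEL").unwrap_or_else(|_| "gpt-4".into());
    let handler = llm_models::get_handler(&model);
    println!("selected handler: {}", handler.name());
}
```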
Introduce a new `cancel_job` method in the `LLMProvider` trait to allow cancellation of ongoing LLM tasks. Implement no-op versions for OpenAI, Anthropic, and Mock providers. Update the WebSocket handler to invoke job cancellation when a session closes, ensuring better resource management and preventing orphaned tasks. Also, fix unused variable warning in `add_suggestion.rs`.
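A sketch of the cancellation hook (again via `async_trait`; the job-id type and the default-method approach are assumptions, since the source only says the three providers got no-op versions):

```rust
use async_trait::async_trait;

#[async_trait]
trait LLMProvider: Send + Sync {
    // Default no-op, matching the OpenAI, Anthropic, and Mock
    // providers described above; a provider with real cancellation
    // support would override this.
    async fn cancel_job(&self, _job_id: &str) {}
}

// Invoked from the WebSocket close path so a dropped session does
// not leave orphaned LLM tasks running.
async fn on_session_close(provider: &dyn LLMProvider, job_id: &str) {
    provider.cancel_job(job_id).await;
}
```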
- Strip content up to the `final<|message|>` token in OpenAI responses (see the sketch after this list).
- Replace the text-based connection-status indicator with a small flashing circle.
- Simplify `updateConnectionStatus` to take only the status argument.
- Remove special handling of the initial assistant message and streamline empty-state removal.
- Clean up stray blank lines in the announcement template.
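For the response cleanup in the first bullet above, a minimal sketch (the marker literal comes from the source; the function name is illustrative):

```rust
// Drop everything up to and including the marker when it appears;
// otherwise return the response unchanged.
fn strip_final_marker(response: &str) -> &str {
    const MARKER: &str = "final<|message|>";
    match response.find(MARKER) {
        Some(idx) => &response[idx + MARKER.len()..],
        None => response,
    }
}
```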