Commit graph

12 commits

SHA1 Message Date
41d7377ab7 feat: implement message deduplication and LLM config improvements
- Deduplicate consecutive messages with the same role in conversation history
- Add n_predict configuration option for LLM server
- Prevent duplicate message storage in session manager
- Update announcement schedule timing from 37 to 55 minutes
- Add default n_predict value in default bot config
2025-11-03 14:13:22 -03:00
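
A minimal sketch of the consecutive-message deduplication described above, assuming a simple `Message` struct (the repository's actual types aren't shown in the log, and whether the first or last message of a run is kept is also an assumption — `dedup_by` keeps the first):

```rust
/// Hypothetical message type; the repository's actual struct isn't shown in the log.
#[derive(Debug, Clone)]
struct Message {
    role: String,
    content: String,
}

/// Drop all but the first of any run of consecutive messages sharing a role.
fn dedup_consecutive_roles(mut history: Vec<Message>) -> Vec<Message> {
    // Vec::dedup_by keeps the first element of each run of "equal" items;
    // here "equal" means same role, regardless of content.
    history.dedup_by(|a, b| a.role == b.role);
    history
}

fn main() {
    let history = vec![
        Message { role: "user".into(), content: "hi".into() },
        Message { role: "user".into(), content: "hello?".into() },
        Message { role: "assistant".into(), content: "Hello!".into() },
    ];
    let deduped = dedup_consecutive_roles(history);
    // The two consecutive "user" messages collapse into one ("hi" survives).
    assert_eq!(deduped.len(), 2);
}
```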
53b49ba616 feat(automation): improve cron matching and job locking
- Refactor cron matching to use individual variables for each time component and add debug logging
- Replace SETEX with atomic SET NX EX for job locking in Redis
- Add better error handling and logging for job execution tracking
- Skip execution if Redis is unavailable or the job lock is already held
- Add verbose flag to LLM server startup command for better logging
2025-11-03 13:52:28 -03:00
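
Why the SETEX-to-SET NX EX change matters: `SETEX` unconditionally overwrites the key, so two workers could each believe they hold the lock, while `SET ... NX EX` sets the key and its expiry in one atomic step only if nobody else holds it. A sketch using the `redis` crate; the key prefix, lock value, and TTL are illustrative assumptions:

```rust
use redis::Connection;

/// Try to acquire a per-job lock atomically. Returns Ok(true) if this worker
/// acquired the lock, Ok(false) if another worker already holds it.
fn try_acquire_job_lock(
    con: &mut Connection,
    job_id: &str,
    ttl_secs: u64,
) -> redis::RedisResult<bool> {
    let reply: Option<String> = redis::cmd("SET")
        .arg(format!("job_lock:{job_id}"))
        .arg("held")
        .arg("NX") // set only if the key does not already exist
        .arg("EX") // expire automatically so a crashed worker can't wedge the job
        .arg(ttl_secs)
        .query(con)?;
    // Redis replies "OK" when the lock was acquired and nil when it wasn't.
    Ok(reply.is_some())
}
```

When the key already exists, Redis replies nil and the worker skips the run, which lines up with the "skip execution if the job lock is already held" bullet above.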
b5e1501454 feat: add trace logging, refactor bot streaming, add config fallback
- Added `trace!` logging in `bot_memory.rs` to record retrieved memory values for easier debugging.
- Refactored `BotOrchestrator` in `bot/mod.rs`:
  - Removed duplicate session save block and consolidated message persistence.
  - Replaced low‑level LLM streaming with a structured `UserMessage` and `stream_response` workflow, improving error handling and readability.
- Updated configuration loading in `config/mod.rs`:
  - Imported `get_default_bot` and enhanced `get_config` to fall back to the default bot configuration when the primary query fails.
  - Established a fresh DB connection for the fallback path to avoid borrowing issues.
2025-11-03 10:13:39 -03:00
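
A sketch of the configuration fallback described above. Every name here is an illustrative stand-in; the commit only tells us that `get_config` falls back to the default bot, on a fresh DB connection, when the primary query fails:

```rust
// Placeholder types standing in for the repository's real DB and config types.
struct DbConnection;
#[derive(Debug)]
struct BotConfig { /* fields elided */ }

fn establish_connection() -> Result<DbConnection, String> {
    Ok(DbConnection)
}

fn query_bot_config(_conn: &mut DbConnection, _bot_id: &str) -> Result<BotConfig, String> {
    Err("primary query failed".into()) // pretend the primary lookup failed
}

fn get_default_bot(_conn: &mut DbConnection) -> Result<BotConfig, String> {
    Ok(BotConfig {})
}

/// Fall back to the default bot config when the primary query fails.
/// The commit notes a fresh connection is opened for the fallback path to
/// avoid borrow issues in the real code.
fn get_config(conn: &mut DbConnection, bot_id: &str) -> Result<BotConfig, String> {
    match query_bot_config(conn, bot_id) {
        Ok(cfg) => Ok(cfg),
        Err(_) => {
            let mut fallback_conn = establish_connection()?;
            get_default_bot(&mut fallback_conn)
        }
    }
}
```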
fea1574518 feat: enforce config load errors and add dynamic LLM model handling
- Updated `BootstrapManager` to use `AppConfig::from_env().expect(...)` and `AppConfig::from_database(...).expect(...)`, making configuration failures explicit rather than silently ignored.
- Refactored error propagation in bootstrap flow to use `?` where appropriate, improving reliability of configuration loading.
- Added import of `llm_models` in `bot` module and introduced `ConfigManager` usage to fetch the LLM model identifier at runtime.
- Integrated dynamic LLM model handler selection via `llm_models::get_handler(&model)`.
- Replaced static environment variable retrieval for embedding configuration with runtime configuration lookup.
2025-11-02 18:36:21 -03:00
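
A sketch of the dynamic handler selection. The commit names `llm_models::get_handler(&model)` but not its signature, so the trait, handler types, and matching rules below are assumptions:

```rust
/// Illustrative trait standing in for whatever `llm_models::get_handler`
/// actually returns; the real signature isn't shown in the commit.
trait LlmHandler {
    fn name(&self) -> &'static str;
}

struct OpenAiHandler;
impl LlmHandler for OpenAiHandler {
    fn name(&self) -> &'static str { "openai" }
}

struct MockHandler;
impl LlmHandler for MockHandler {
    fn name(&self) -> &'static str { "mock" }
}

/// Pick a handler from the model identifier fetched at runtime (e.g. via
/// ConfigManager), instead of hard-wiring one provider at compile time.
fn get_handler(model: &str) -> Option<Box<dyn LlmHandler>> {
    match model {
        m if m.starts_with("gpt-") => Some(Box::new(OpenAiHandler)),
        "mock" => Some(Box::new(MockHandler)),
        _ => None,
    }
}
```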
e7fa2f7bdb feat(llm): add cancel_job support and integrate session cleanup
Introduce a new `cancel_job` method in the `LLMProvider` trait to allow cancellation of in-flight LLM tasks. Implement no-op versions for the OpenAI, Anthropic, and Mock providers. Update the WebSocket handler to invoke job cancellation when a session closes, preventing orphaned tasks and improving resource cleanup. Also fix an unused-variable warning in `add_suggestion.rs`.
2025-11-02 15:13:47 -03:00
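
A sketch of the trait change, assuming the `async_trait` crate and an illustrative signature (only the method name `cancel_job` and the no-op behavior for the OpenAI, Anthropic, and Mock providers come from the commit):

```rust
use async_trait::async_trait;

/// Hypothetical job identifier; the real type isn't shown in the commit.
type JobId = String;

#[async_trait]
pub trait LLMProvider: Send + Sync {
    /// Cancel an in-flight generation job. Providers with no server-side
    /// cancellation can keep this default no-op and ignore the request.
    async fn cancel_job(&self, _job: &JobId) -> Result<(), String> {
        Ok(())
    }
}

pub struct MockProvider;

#[async_trait]
impl LLMProvider for MockProvider {} // inherits the no-op cancel_job
```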
648e7f48f9 Refactor LLM parsing and overhaul connection UI
- Strip content up to the `final<|message|>` token in OpenAI responses.
- Replace the text‑based connection‑status indicator with a small
  flashing circle.
- Simplify updateConnectionStatus to take only the status argument.
- Remove special handling of the initial assistant message and
  streamline empty‑state removal.
- Clean up stray blank lines in the announcement template.
2025-10-15 22:24:04 -03:00
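
A minimal reading of the token-stripping rule from this commit; the real parser may differ:

```rust
/// Drop everything up to and including the `final<|message|>` marker; if the
/// marker is absent, return the text unchanged.
fn strip_to_final_message(raw: &str) -> &str {
    const MARKER: &str = "final<|message|>";
    match raw.find(MARKER) {
        Some(idx) => &raw[idx + MARKER.len()..],
        None => raw,
    }
}

fn main() {
    let raw = "analysis and reasoning...final<|message|>Hello!";
    assert_eq!(strip_to_final_message(raw), "Hello!");
}
```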
712266a9f8 - New LLM provider. 2025-10-12 20:54:42 -03:00
283774aa0f - Remove all compilation errors. 2025-10-11 12:29:03 -03:00
8a9cd104d6 - Remove warnings and restore old code. 2025-10-07 07:16:03 -03:00
c0c470e9aa - Fix compilation errors. 2025-10-06 20:06:43 -03:00
66e2ed2ad1 - Just more errors to fix. 2025-10-06 19:12:13 -03:00
9749893dd0 Migration to Rust and free from Azure. 2025-10-06 10:30:17 -03:00