Retrieve the `llm-key` from configuration and pass it to LLM provider methods for secure authentication. Also refine role naming in compact prompts and remove an unused logging import.
Pass the model parameter in LLM provider calls across the automation, bot, and keyword modules so the correct model is selected from configuration. This improves flexibility and consistency in LLM usage.
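Together, these two changes mean every provider call carries both the configured key and model. A minimal sketch of what such a call might look like; the `LlmConfig` struct, endpoint, and request shape are assumptions for illustration, not the project's actual API:

```rust
use serde_json::json;

// Hypothetical configuration values; the real code reads these
// through its configuration layer.
struct LlmConfig {
    llm_key: String, // the `llm-key` configuration entry
    model: String,   // model name selected in configuration
}

async fn generate(cfg: &LlmConfig, prompt: &str) -> Result<String, reqwest::Error> {
    let client = reqwest::Client::new();
    let resp = client
        .post("http://localhost:8080/v1/completions") // placeholder endpoint
        .bearer_auth(&cfg.llm_key)                    // authenticate with the configured key
        .json(&json!({ "model": cfg.model, "prompt": prompt }))
        .send()
        .await?;
    resp.text().await
}
```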
- Removed unused `id` and `app_state` fields from `ChatPanel`; updated constructor to accept but ignore the state, reducing memory footprint.
- Switched database access in `ChatPanel` from a raw `Mutex` lock to a connection pool (`app_state.conn.get()`), improving concurrency and error handling (see the sketch after this list).
- Reordered and cleaned up imports in `status_panel.rs` and formatted struct fields for readability.
- Updated VS Code launch configuration to pass `--noui` argument, enabling headless mode for debugging.
- Bumped several crate versions in `Cargo.lock` (e.g., `bitflags` to 2.10.0, `syn` to 2.0.108, `cookie` to 0.16.2) and added the new `ashpd` dependency, aligning the project with latest library releases.
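The pool switch referenced above follows the standard Diesel/`r2d2` pattern, where each call checks a connection out of a shared pool instead of serializing all access behind one `Mutex`. A minimal sketch, assuming Postgres and a `conn` field holding the pool:

```rust
use diesel::prelude::*;
use diesel::r2d2::{ConnectionManager, Pool};

type PgPool = Pool<ConnectionManager<PgConnection>>;

struct AppState {
    conn: PgPool, // shared pool instead of Mutex<PgConnection>
}

fn run_query(state: &AppState) -> QueryResult<usize> {
    // `get()` checks a connection out of the pool, waiting up to the
    // pool's timeout before returning an error.
    let mut conn = state
        .conn
        .get()
        .expect("pool exhausted or database unreachable");
    // Placeholder query; the real ChatPanel runs its own queries here.
    diesel::sql_query("SELECT 1").execute(&mut conn)
}
```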
- Simplified `build_llm_prompt` by removing redundant formatting
- Added info logging for LLM model and processed content
- Updated README with development philosophy note
- Adjusted announcement schedule timing from 55 to 59 minutes past the hour
- Add 'keyword' to the LLM processing log message for better context
- Replace a simple string replace with a regex for removing `<think>` tags in the DeepseekR3 model (see the sketch after this list)
- The changes provide more precise logging and more robust content processing
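With the `regex` crate, the tag stripping might look like the following; `(?s)` lets `.` match newlines so multi-line reasoning blocks are removed too. The function name is illustrative:

```rust
use regex::Regex;

/// Strip `<think>...</think>` reasoning blocks from a model response.
fn strip_think_tags(raw: &str) -> String {
    // (?s) makes `.` span newlines; `.*?` is non-greedy so multiple
    // blocks are removed individually rather than merged into one match.
    let re = Regex::new(r"(?s)<think>.*?</think>").expect("valid regex");
    re.replace_all(raw, "").trim().to_string()
}

fn main() {
    let raw = "<think>chain\nof thought</think>\nFinal answer.";
    assert_eq!(strip_think_tags(raw), "Final answer.");
}
```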
- Added migration 6.0.6 to enforce a unique constraint on `(bot_id, kind, param)` in `system_automations`, preventing “no unique or exclusion constraint matching the ON CONFLICT specification” errors, and created a supporting index.
- Added migration 6.0.7 to replace the `clicks` table with a correctly defined primary key and a unique `(campaign_id, email)` constraint, satisfying Diesel's primary-key and upsert requirements.
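For context, Postgres rejects an `ON CONFLICT` target that has no matching unique constraint, which is the error migration 6.0.6 eliminates. An upsert of the shape that constraint enables might look like this; the Diesel schema and column set are assumptions:

```rust
use diesel::prelude::*;

// Assumed schema for `system_automations`; the real definition lives
// in the project's schema.rs.
diesel::table! {
    system_automations (id) {
        id -> Int4,
        bot_id -> Int4,
        kind -> Text,
        param -> Text,
        schedule -> Text,
    }
}

fn upsert_automation(conn: &mut PgConnection) -> QueryResult<usize> {
    use system_automations::dsl::*;
    diesel::insert_into(system_automations)
        .values((bot_id.eq(1), kind.eq("announce"), param.eq("hourly"), schedule.eq("59 * * * *")))
        // Requires the unique constraint on (bot_id, kind, param) from
        // migration 6.0.6; without it Postgres raises "no unique or
        // exclusion constraint matching the ON CONFLICT specification".
        .on_conflict((bot_id, kind, param))
        .do_update()
        .set(schedule.eq("59 * * * *"))
        .execute(conn)
}
```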
Added support for selecting an LLM model based on configuration and processing the raw response. The `execute_llm_generation` function now:
1. Fetches the configured model using `ConfigManager`
2. Gets the appropriate handler for the model
3. Processes the raw response through the handler before returning
This provides more flexibility in model selection and allows for model-specific response handling.
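A rough sketch of that flow; the handler trait, the model-matching rule, and the stubbed backend call are illustrative, not the project's actual signatures:

```rust
use anyhow::Result;

/// Model-specific post-processing, e.g. stripping <think> tags.
trait ModelHandler {
    fn process_response(&self, raw: &str) -> String;
}

struct Passthrough;
impl ModelHandler for Passthrough {
    fn process_response(&self, raw: &str) -> String {
        raw.trim().to_string()
    }
}

struct DeepseekR3;
impl ModelHandler for DeepseekR3 {
    fn process_response(&self, raw: &str) -> String {
        // The real code strips <think> blocks with a regex; a plain
        // split keeps this sketch short.
        match raw.split_once("</think>") {
            Some((_, rest)) => rest.trim().to_string(),
            None => raw.trim().to_string(),
        }
    }
}

fn handler_for(model: &str) -> Box<dyn ModelHandler> {
    if model.contains("deepseek") { Box::new(DeepseekR3) } else { Box::new(Passthrough) }
}

async fn execute_llm_generation(model: String, prompt: String) -> Result<String> {
    // 1. `model` is fetched via ConfigManager in the real code.
    // 2. Resolve the handler appropriate for that model.
    let handler = handler_for(&model);
    // 3. Call the backend and post-process the raw response.
    let raw = call_llm(&model, &prompt).await?;
    Ok(handler.process_response(&raw))
}

async fn call_llm(_model: &str, _prompt: &str) -> Result<String> {
    Ok("<think>reasoning</think> done".to_string()) // stub for the HTTP call
}
```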
- Remove `scripts_dir` from `AutomationService` and its constructor, as it was unused.
- Drop LLM and embedding server readiness checks; the service now only schedules periodic tasks.
- Increase the health‑check interval from 5 seconds to 15 seconds for reduced load.
- Streamline the LLM keyword prompt to a concise `"User: {}"` format, removing verbose boilerplate.
- Remove unnecessary logging and LLM cache handling code from the bot orchestrator, cleaning up unused environment variable checks and cache queries.
- Derive `bot_id` from the `BOT_GUID` env var
- Guard concurrent runs with a Redis lock (see the sketch after this list)
- Read `CACHE_URL` for the Redis connection
- Extend bot memory keyword to accept comma as separator
- Increase LLM timeouts to 180s (local and legacy)
- Update templates to use bot memory (`GET_BOT_MEMORY`/`SET_BOT_MEMORY`)
- Fix start script path to `announcements.gbai`
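The Redis guard in this set of changes is typically an atomic `SET NX EX` lock keyed on the bot; a minimal sketch with the `redis` crate, where the key layout and TTL are assumptions:

```rust
/// Try to become the single running instance for this bot.
/// Returns Ok(false) if another run already holds the lock.
fn try_acquire_run_lock() -> redis::RedisResult<bool> {
    // BOT_GUID identifies the bot; CACHE_URL supplies the Redis address.
    let bot_id = std::env::var("BOT_GUID").unwrap_or_else(|_| "default".into());
    let cache_url = std::env::var("CACHE_URL").unwrap_or_else(|_| "redis://127.0.0.1/".into());
    let client = redis::Client::open(cache_url.as_str())?;
    let mut conn = client.get_connection()?;
    let key = format!("bot:{bot_id}:run-lock"); // assumed key layout
    // SET key 1 NX EX 60: take the lock atomically only if it is free,
    // with a TTL so a crashed run cannot hold it forever.
    let acquired: bool = redis::cmd("SET")
        .arg(&key)
        .arg(1)
        .arg("NX")
        .arg("EX")
        .arg(60)
        .query(&mut conn)?;
    Ok(acquired)
}
```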
- Execute GET requests in a dedicated thread with its own Tokio runtime, adding timeout handling and clearer error messages (see the sketch after this list).
- Tighten `is_safe_path` checks and simplify HTTP/S3 logic.
- Change `llm_keyword` to accept `Arc<AppState>`, add prompt builder,
run LLM generation in an isolated thread with timeout.
- Update keyword registration call in `basic/mod.rs`.
- Convert template script to use `let` declarations and return a
boolean.
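Both the GET keyword and `llm_keyword` use the same isolation pattern: spawn a plain OS thread, build a fresh Tokio runtime inside it, and bound the work with a timeout so a hung request cannot stall the caller. A sketch of that pattern, with the 180-second bound taken from the timeout bullet above and the error strings illustrative:

```rust
use std::time::Duration;

/// Run an HTTP GET on a dedicated thread with its own Tokio runtime,
/// keeping blocking callers (e.g. the script interpreter) isolated.
fn get_in_dedicated_thread(url: String) -> Result<String, String> {
    std::thread::spawn(move || {
        // A fresh, single-purpose runtime, independent of whatever
        // runtime the caller may (or may not) be running on.
        let rt = tokio::runtime::Runtime::new().map_err(|e| format!("runtime: {e}"))?;
        rt.block_on(async {
            let request = async {
                reqwest::get(url.as_str())
                    .await
                    .map_err(|e| format!("request failed: {e}"))?
                    .text()
                    .await
                    .map_err(|e| format!("body read failed: {e}"))
            };
            // Bound the whole request so failures surface as clear errors.
            tokio::time::timeout(Duration::from_secs(180), request)
                .await
                .map_err(|_| format!("GET {url} timed out after 180s"))?
        })
    })
    .join()
    .map_err(|_| "worker thread panicked".to_string())?
}
```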
- Introduce connection‑status indicator in the web UI with styles,
automatic reconnection attempts, and proper WS/WSS handling for voice.
- Extract LLM generation into `execute_llm_generation` and simplify
keyword handling.
- Prepend system prompt and session context to LLM prompts in
`BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field (see the sketch after this list).
- Add async `get_session_context` and stop injecting Redis context into
conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
project.
- Use the existing DB pool instead of creating a separate custom
connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing,
improve streaming logic, reset streaming state on thinking end, and
remove unused test functions.
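For the WebSocket change in this group, incoming frames are parsed as JSON envelopes rather than treated as raw text; a minimal sketch with `serde_json`, where the envelope shape is an assumption:

```rust
use serde::Deserialize;

/// Assumed wire format; only `content` is used by this sketch.
#[derive(Deserialize)]
struct WsMessage {
    content: String,
}

fn extract_content(frame: &str) -> Option<String> {
    match serde_json::from_str::<WsMessage>(frame) {
        Ok(msg) => Some(msg.content),
        Err(_) => None, // non-JSON or malformed frames are ignored
    }
}

fn main() {
    let frame = r#"{"content":"hello"}"#;
    assert_eq!(extract_content(frame).as_deref(), Some("hello"));
}
```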
- Migrate core services to store `Arc<AppState>` and use locks
- Centralize state in `AppState` with `Arc`-wrapped managers
- Update handlers to pass `Arc<AppState>` via `web::Data`
- Add `Default` for `AppState` and initialize components in `main` (see the sketch after this list)
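Concretely, this follows the usual Actix Web shared-state pattern; a minimal sketch, with hypothetical manager types standing in for the project's real ones:

```rust
use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use std::sync::Arc;
use tokio::sync::Mutex;

// Hypothetical managers standing in for the real components.
#[derive(Default)]
struct ConfigManager;
#[derive(Default)]
struct SessionManager;

#[derive(Default)]
struct AppState {
    config: Arc<ConfigManager>,           // read-mostly: Arc alone suffices
    sessions: Arc<Mutex<SessionManager>>, // mutated at runtime: guarded by a lock
}

// Handlers receive the shared state via web::Data<AppState>.
async fn health(_state: web::Data<AppState>) -> impl Responder {
    HttpResponse::Ok().body("ok")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Initialize components once in main and share them across workers.
    let state = web::Data::new(AppState::default());
    HttpServer::new(move || {
        App::new()
            .app_data(state.clone())
            .route("/health", web::get().to(health))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```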
- Update `debug.json` program path from `gbserver` to `botserver`