Remove unnecessary async spawn in TALK handling and use `try_send` on
the WebSocket channel. Acquire `response_channels` with `try_lock` and
spawn an async task only when falling back to the web adapter. Clean up
debug logs and add missing `env` import. Also delete an extra blank line
in the announcement start script.
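
A minimal sketch of that TALK path, assuming `response_channels` is a `tokio::sync::Mutex` over a map of bounded `mpsc` senders (the struct shape and function names here are illustrative, not the actual code):

```rust
use std::collections::HashMap;
use tokio::sync::{mpsc, Mutex};

// Hypothetical shape of the shared state; real field and type names may differ.
struct AppState {
    response_channels: Mutex<HashMap<String, mpsc::Sender<String>>>,
}

async fn handle_talk(state: &AppState, session_id: &str, text: String) {
    // `try_lock` keeps the TALK path non-blocking; a contended lock just falls through.
    if let Ok(channels) = state.response_channels.try_lock() {
        if let Some(tx) = channels.get(session_id) {
            // `try_send` never awaits: a full or closed channel drops into the fallback.
            if tx.try_send(text.clone()).is_ok() {
                return;
            }
        }
    }
    // Only the fallback to the web adapter pays for a spawned task.
    tokio::spawn(async move {
        // hand the message to the web adapter here (omitted in this sketch)
        let _ = text;
    });
}
```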
- Strip content up to the `final<|message|>` token in OpenAI responses (see the stripping sketch after this list).
- Replace the text‑based connection‑status indicator with a small
flashing circle.
- Simplify updateConnectionStatus to take only the status argument.
- Remove special handling of the initial assistant message and
streamline empty‑state removal.
- Clean up stray blank lines in the announcement template.
- Execute GET requests in a dedicated thread with its own Tokio runtime, adding timeout handling and clearer error messages (sketched after this list).
- Tighten the `is_safe_path` checks and simplify the HTTP/S3 logic (see the path-check sketch after this list).
- Change `llm_keyword` to accept `Arc<AppState>`, add prompt builder,
run LLM generation in an isolated thread with timeout.
- Update keyword registration call in `basic/mod.rs`.
- Convert template script to use `let` declarations and return a
boolean.
- Introduce connection‑status indicator in the web UI with styles,
automatic reconnection attempts, and proper WS/WSS handling for voice.
- Replace async task spawning with `block_in_place` to simplify GET handling (see the `block_in_place` sketch after this list).
- Add detailed safety checks for file paths and organization prefixes.
- Introduce timeout and keep-alive settings for the HTTP client (builder sketch after this list).
- Improve S3 bucket access with an existence check, timeouts, and richer logging.
- Switch tracing logs to debug level and add warning logs where appropriate.
- Update announcement template to retrieve a PDF, generate a resume via
LLM, and set context for subsequent queries.
- Extract LLM generation into `execute_llm_generation` and simplify
keyword handling.
- Prepend the system prompt and session context to LLM prompts in
  `BotOrchestrator` (see the prompt-assembly sketch after this list).
- Parse incoming WebSocket messages as JSON and use the `content` field (parsing sketch after this list).
- Add async `get_session_context` and stop injecting Redis context into
conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
project.
- Use the existing DB pool instead of creating a separate custom
connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing,
improve streaming logic, reset streaming state on thinking end, and
remove unused test functions.
- Use `tokio::spawn` to run the Redis SET for `SET_CONTEXT` in a background
  task with detailed tracing (sketched after this list).
- Move context retrieval from `BotOrchestrator` to `SessionManager`,
inserting it as a system message in conversation history.
- Remove redundant Redis fetch logic from `BotOrchestrator`.
- Update `DEV.md` to install `valkey-cli`, reorder cargo tools, and
adjust apt commands.
- Add a `SET_CONTEXT "azul bolinha"` example to the announcements
template.
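
The sketches referenced above follow; all of them are hedged illustrations with hypothetical names rather than the exact code in this change. First, the `final<|message|>` stripping for OpenAI responses:

```rust
/// Returns the text after the last `final<|message|>` marker, or the input
/// unchanged when the marker is absent.
fn strip_final_marker(response: &str) -> &str {
    const MARKER: &str = "final<|message|>";
    match response.rfind(MARKER) {
        Some(idx) => &response[idx + MARKER.len()..],
        None => response,
    }
}
```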
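
The dedicated-thread GET execution could look roughly like this, assuming `reqwest` for HTTP and a 30-second cap chosen only for illustration:

```rust
use std::time::Duration;

// Runs the GET on its own thread with a private Tokio runtime so it cannot starve
// the caller's runtime; the whole request is wrapped in a timeout.
fn get_in_dedicated_thread(url: String) -> Result<String, String> {
    std::thread::spawn(move || {
        let rt = tokio::runtime::Runtime::new().map_err(|e| format!("runtime error: {e}"))?;
        rt.block_on(async {
            let request = async {
                reqwest::get(&url)
                    .await
                    .map_err(|e| format!("GET {url} failed: {e}"))?
                    .text()
                    .await
                    .map_err(|e| format!("reading body from {url} failed: {e}"))
            };
            tokio::time::timeout(Duration::from_secs(30), request)
                .await
                .map_err(|_| format!("GET {url} timed out"))?
        })
    })
    .join()
    .map_err(|_| "GET worker thread panicked".to_string())?
}
```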
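
One way the tightened `is_safe_path` check could be shaped; this is a purely lexical sketch, not the project's exact rules:

```rust
use std::path::{Component, Path};

/// Rejects absolute paths, any `..` (or other non-normal) components, and paths
/// that do not stay under the caller's organization prefix. It does not resolve
/// symlinks, so callers still need filesystem-level checks where that matters.
fn is_safe_path(path: &str, org_prefix: &str) -> bool {
    let p = Path::new(path);
    if p.is_absolute() {
        return false;
    }
    if !p.components().all(|c| matches!(c, Component::Normal(_))) {
        return false;
    }
    p.starts_with(org_prefix)
}
```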
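
The later switch to `block_in_place` drops the extra thread: the current worker is marked as allowed to block (multi-threaded runtime only) and the GET future is driven in place via `Handle::block_on`. A sketch, again with an illustrative timeout:

```rust
use std::time::Duration;
use tokio::runtime::Handle;

// Must be called from a worker of the multi-threaded runtime; `block_in_place`
// panics on the current-thread runtime.
fn fetch_in_place(url: &str) -> Result<String, String> {
    tokio::task::block_in_place(|| {
        Handle::current().block_on(async {
            let request = async {
                let resp = reqwest::get(url).await.map_err(|e| e.to_string())?;
                resp.text().await.map_err(|e| e.to_string())
            };
            tokio::time::timeout(Duration::from_secs(30), request)
                .await
                .map_err(|_| format!("GET {url} timed out"))?
        })
    })
}
```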
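
For the shared HTTP client, the timeout and keep-alive settings map to `reqwest`'s builder; the durations below are placeholders, not the values used in the change:

```rust
use std::time::Duration;

fn build_http_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .timeout(Duration::from_secs(30))          // overall per-request cap
        .connect_timeout(Duration::from_secs(10))  // fail fast on unreachable hosts
        .tcp_keepalive(Duration::from_secs(60))    // keep pooled connections alive
        .build()
}
```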
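
Prompt assembly in `BotOrchestrator` amounts to putting the system prompt first, then the session context, then the conversation history; the message type here is illustrative:

```rust
struct ChatMessage {
    role: &'static str,
    content: String,
}

fn build_prompt(
    system_prompt: &str,
    session_context: Option<&str>,
    history: Vec<ChatMessage>,
) -> Vec<ChatMessage> {
    let mut messages = Vec::with_capacity(history.len() + 2);
    messages.push(ChatMessage { role: "system", content: system_prompt.to_string() });
    if let Some(ctx) = session_context {
        messages.push(ChatMessage { role: "system", content: format!("Context: {ctx}") });
    }
    messages.extend(history);
    messages
}
```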
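
Incoming WebSocket frames are now treated as JSON; a minimal sketch of pulling out the `content` field, assuming `serde`/`serde_json`:

```rust
use serde::Deserialize;

// Assumed wire shape: a JSON object with at least a `content` field.
#[derive(Deserialize)]
struct IncomingMessage {
    content: String,
}

fn extract_content(raw: &str) -> Option<String> {
    serde_json::from_str::<IncomingMessage>(raw).map(|m| m.content).ok()
}
```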
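
Finally, the background Redis SET for `SET_CONTEXT`, sketched with the `redis` crate's async `ConnectionManager` (the key naming and log messages are illustrative):

```rust
use redis::AsyncCommands;

// Fire-and-forget write so the SET_CONTEXT handler never waits on Redis.
fn set_context_in_background(conn: redis::aio::ConnectionManager, key: String, value: String) {
    tokio::spawn(async move {
        let mut conn = conn;
        tracing::debug!(%key, "SET_CONTEXT: writing session context to Redis");
        match conn.set::<_, _, ()>(&key, &value).await {
            Ok(()) => tracing::debug!(%key, "SET_CONTEXT stored"),
            Err(e) => tracing::warn!(%key, error = %e, "SET_CONTEXT failed"),
        }
    });
}
```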