Commit graph

12 commits

SHA1 Message Date
9c36aa10fa - More automation from start to web, plus user sessions. 2025-10-20 23:32:49 -03:00
93675c28d9 Add scroll-to-bottom button and context usage indicator with styling improvements 2025-10-17 15:12:19 -03:00
648e7f48f9 Refactor LLM parsing and overhaul connection UI
- Strip content up to the “final<|message|>” token in OpenAI responses.
- Replace the text‑based connection‑status indicator with a small
  flashing circle.
- Simplify updateConnectionStatus to take only the status argument.
- Remove special handling of the initial assistant message and
  streamline empty‑state removal.
- Clean up stray blank lines in the announcement template.
2025-10-15 22:24:04 -03:00
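The token handling in 648e7f48f9 can be sketched roughly as below, assuming the model emits analysis text before a `final<|message|>` marker; the function name is illustrative, not the project's actual parser.

```rust
// Minimal sketch: keep only the content after the `final<|message|>` marker,
// falling back to the raw response when no marker is present.
fn strip_to_final_message(raw: &str) -> &str {
    const MARKER: &str = "final<|message|>";
    match raw.find(MARKER) {
        // Drop everything up to and including the marker.
        Some(idx) => &raw[idx + MARKER.len()..],
        // No marker: return the response unchanged.
        None => raw,
    }
}
```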
ff89298e61 Refactor async GET and LLM, add connection UI
- Execute GET requests in a dedicated thread with its own Tokio runtime,
  add timeout handling and clearer error messages.
- Tighten `is_safe_path` checks and simplify HTTP/S3 logic.
- Change `llm_keyword` to accept `Arc<AppState>`, add prompt builder,
  run LLM generation in an isolated thread with timeout.
- Update keyword registration call in `basic/mod.rs`.
- Convert template script to use `let` declarations and return a
  boolean.
- Introduce connection‑status indicator in the web UI with styles,
  automatic reconnection attempts, and proper WS/WSS handling for voice.
2025-10-15 21:18:01 -03:00
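A minimal sketch of the dedicated-thread GET described in ff89298e61, assuming reqwest and tokio as dependencies; the function name, timeout plumbing, and error strings are assumptions rather than the project's actual `get` keyword implementation.

```rust
use std::thread;
use std::time::Duration;

// Runs the request on its own thread with a private Tokio runtime so it never
// ties up the caller's async executor, and reports a clear timeout error.
fn blocking_get(url: String, timeout_secs: u64) -> Result<String, String> {
    thread::spawn(move || {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .map_err(|e| format!("runtime error: {e}"))?;
        rt.block_on(async {
            let request = reqwest::get(url.clone());
            match tokio::time::timeout(Duration::from_secs(timeout_secs), request).await {
                Ok(Ok(resp)) => resp.text().await.map_err(|e| format!("body error: {e}")),
                Ok(Err(e)) => Err(format!("request failed: {e}")),
                Err(_) => Err(format!("GET {url} timed out after {timeout_secs}s")),
            }
        })
    })
    .join()
    .map_err(|_| "GET worker thread panicked".to_string())?
}
```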
a293c0e083 - Fix for the web presentation. 2025-10-15 10:37:04 -03:00
f626793d77 Refactor LLM flow, add prompts, fix UI streaming
- Extract LLM generation into `execute_llm_generation` and simplify
  keyword handling.
- Prepend system prompt and session context to LLM prompts in
  `BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field.
- Add async `get_session_context` and stop injecting Redis context into
  conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
  project.
- Use the existing DB pool instead of creating a separate custom
  connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing,
  improve streaming logic, reset streaming state on thinking end, and
  remove unused test functions.
2025-10-15 01:14:37 -03:00
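Two of the changes in f626793d77, parsing WebSocket frames as JSON for their `content` field and prepending the system prompt and session context to the LLM prompt, could look roughly like this, assuming serde/serde_json; the struct and function names are illustrative, not the project's API.

```rust
use serde::Deserialize;

// Incoming WebSocket frames are parsed as JSON; only `content` is used.
#[derive(Deserialize)]
struct InboundWsMessage {
    content: String,
}

fn extract_content(raw_frame: &str) -> Option<String> {
    serde_json::from_str::<InboundWsMessage>(raw_frame)
        .ok()
        .map(|msg| msg.content)
}

// System prompt and session context are prepended before the user message.
fn build_prompt(system_prompt: &str, session_context: &str, user_message: &str) -> String {
    format!("{system_prompt}\n\n{session_context}\n\nUser: {user_message}")
}
```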
573437cb6f - Database schema added. 2025-10-14 16:34:34 -03:00
37d8fab617 Refactor session handling and auth flow 2025-10-14 14:51:49 -03:00
8f96cd1015 Refactor server code and add auth API fixes 2025-10-14 13:51:27 -03:00
3aeb3ebc70 - New features for start.bas 2025-10-13 17:43:03 -03:00
a7c74b837e Enhance streaming with events and warning API
- Introduce event-driven streaming with thinking_start, thinking_end,
  and warn events; skip sending analysis content to clients
- Add /api/warn endpoint to dispatch warnings for sessions and channels;
  web UI displays alerts
- Emit session_start/session_end events over WebSocket and instrument
  logging throughout orchestration
- Update web client: show thinking indicator and warning banners; switch
  LiveKit client URL to CDN
- Extend BotOrchestrator with send_event and send_warning, expand
  session/tool workflow
- Improve REST endpoints for sessions/history with better logging and
  error handling
- Update docs and prompts: DEV.md usage note; adjust dev build_prompt
  script
2025-10-13 00:31:08 -03:00
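The event-driven streaming in a7c74b837e suggests an event envelope along these lines, assuming serde/serde_json; the exact JSON shape (an `event` tag with inline fields) is an assumption, not the project's actual wire format.

```rust
use serde::Serialize;

// One variant per event named in the commit; serialized as e.g.
// {"event":"thinking_start"} or {"event":"warn","message":"..."}.
#[derive(Serialize)]
#[serde(tag = "event", rename_all = "snake_case")]
enum StreamEvent {
    ThinkingStart,
    ThinkingEnd,
    Warn { message: String },
    SessionStart { session_id: String },
    SessionEnd { session_id: String },
}

// Each event becomes a single JSON text frame on the WebSocket.
fn to_ws_frame(event: &StreamEvent) -> serde_json::Result<String> {
    serde_json::to_string(event)
}
```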
e14908fc0b Migrate to web/index.html and OpenAI client
- Load index.html from web/index.html instead of templates/static
- Initialize OpenAIClient with an empty key and local endpoint
- Remove the old static/index.html file
2025-10-12 17:41:41 -03:00
Renamed from static/index.html
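A minimal sketch of the relocated asset from e14908fc0b, assuming the page is read from disk at request time; the function name is hypothetical, and the serving framework and OpenAIClient initialization are omitted.

```rust
use std::fs;
use std::io;

// index.html is now read from web/ rather than the old templates/static path.
fn load_index_html() -> io::Result<String> {
    fs::read_to_string("web/index.html")
}
```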