Commit graph

29 commits

6e36384f45 feat: simplify message content in hear_talk function
Removed the redundant "I heard: " prefix from the message content in the execute_talk function. The message is now stored as-is without additional formatting, making it more straightforward and consistent with other message handling.
2025-11-02 12:28:07 -03:00
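A minimal sketch of the change above, with a hypothetical `store_message` helper standing in for the real persistence call:

```rust
// Hypothetical sketch of the execute_talk change above: the incoming text
// is stored as-is instead of being wrapped in an "I heard: " prefix.
fn execute_talk(user_id: &str, text: &str, store_message: impl Fn(&str, String)) {
    // Before: store_message(user_id, format!("I heard: {}", text));
    store_message(user_id, text.to_string());
}
```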
a5d3343c09 feat(ui): update design system with new theme and visual enhancements
- Replace Orbitron/Inter fonts with Space Grotesk/JetBrains Mono
- Implement new color scheme with primary, secondary, and accent variables
- Add radial gradient background with animated glow effects
- Introduce grain overlay for texture
- Redesign sidebar with improved transitions and blur effects
- Update empty state styling and remove neon effects
- Optimize layout spacing and shadows
- Streamline API call in chat initialization

The changes modernize the UI with a more cohesive design language, better visual hierarchy, and improved readability while maintaining functionality. The new theme features a dark color palette with vibrant accents and subtle textures for depth.
2025-11-02 11:13:40 -03:00
9673f41195 feat: add styled suggestion buttons and move them from messages to footer
Added new CSS classes for the suggestion buttons and their container, moving them from the message content to the footer. Removed inline styles in favor of CSS classes for better maintainability. The suggestions now appear at the top of the footer with consistent styling and hover effects.
2025-11-02 11:01:36 -03:00
9010396a87 feat(ui): improve suggestion handling and input clearing
Enhanced suggestion handling to include the empty state in message selection and improved the UX by:
- Clearing input field after selecting a suggestion
- Adding selected suggestion as user message to chat
- Maintaining all existing functionality while simplifying the flow

Changes ensure more intuitive user interaction with suggestions while keeping the chat interface clean.
2025-11-02 10:51:42 -03:00
888bfc859d feat: refactor Redis operations to synchronous in add_suggestion
Changed async Redis operations to synchronous ones in the add_suggestion_keyword function. Removed the unnecessary async/await and tokio::spawn since the operations are now blocking. This simplifies the code while maintaining the same functionality of storing suggestions and context state in Redis. Error handling remains robust with proper early returns.
2025-11-02 10:45:57 -03:00
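A minimal sketch of the synchronous pattern, assuming the `redis` crate; the key layout and function shape are illustrative, not the project's exact code:

```rust
use redis::Commands;

// Hypothetical sketch: blocking Redis writes with early returns, mirroring
// the synchronous style described in the commit above.
fn add_suggestion_keyword(client: &redis::Client, session_id: &str, suggestion: &str) {
    let mut conn = match client.get_connection() {
        Ok(c) => c,
        Err(e) => {
            eprintln!("redis connection failed: {e}");
            return; // early return instead of tokio::spawn + await
        }
    };
    // Key layout is an assumption for illustration only.
    if let Err(e) = conn.rpush::<_, _, ()>(format!("suggestions:{session_id}"), suggestion) {
        eprintln!("failed to store suggestion: {e}");
    }
}
```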
acca68493f feat: simplify Redis key format and clean up code
- Remove comments from launch.json configuration
- Simplify Redis key format in set_current_context_keyword by removing context_name
- Remove unused /api/start endpoint and related session start logic
- Change log level from info to trace for config sync messages
- Add trace logging to drive_monitor module

The changes focus on code cleanup and simplification, particularly around session handling and logging. The Redis key format was simplified as the context_name was deemed unnecessary for uniqueness.
2025-11-02 07:50:57 -03:00
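A small sketch of the simplified key construction; the prefix and fields are assumptions based on the description above:

```rust
// Hypothetical illustration: context_name no longer participates in the key,
// since the session id alone is considered sufficient for uniqueness.
fn current_context_key(session_id: &str) -> String {
    // Previously something like: format!("context:{session_id}:{context_name}")
    format!("context:{session_id}")
}
```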
0342e1cac9 refactor(state): rename resource clients and improve keyword syntax
Updated references from `redis_client`, `s3_client`, and `custom_conn` to unified names `cache`, `drive`, and `conn` for consistency across modules. Adjusted `add_suggestion_keyword` to use clearer parameter naming and enhanced custom syntax registration for better readability and maintainability.
2025-11-01 20:53:45 -03:00
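A sketch of the renamed fields on the shared state; the concrete client types are left generic since they are not part of this commit message:

```rust
// Hypothetical shape of the state after the rename described above:
// redis_client -> cache, s3_client -> drive, custom_conn -> conn.
struct AppState<Cache, Drive, Conn> {
    cache: Cache, // formerly redis_client
    drive: Drive, // formerly s3_client
    conn: Conn,   // formerly custom_conn
}
```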
fda0b0c9e8 feat(auth): add bot name lookup and suggestion support in bot responses
Implemented a new `bot_from_name` method in `AuthService` to retrieve a bot's UUID by its name, enabling explicit bot selection via a `bot_name` query parameter in the authentication endpoint. Updated `auth_handler` to prioritize this parameter, fall back to the first active bot when necessary, and handle cases where no active bots exist.

Extended `BotOrchestrator` to import the `Suggestion` model and fetch suggestion data from Redis for each user session. Integrated these suggestions into the `BotResponse` payload, ensuring clients receive relevant suggestions alongside bot messages.

These changes improve bot selection flexibility and enrich the response data with contextual suggestions.
2025-11-01 17:56:53 -03:00
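A sketch of the selection order described above, with hypothetical stand-ins for `AuthService::bot_from_name` and the active-bot list:

```rust
use uuid::Uuid;

// Hypothetical sketch of the bot-selection order in auth_handler:
// 1) explicit bot_name query parameter, 2) first active bot, 3) error.
fn resolve_bot_id(
    bot_name: Option<&str>,
    bot_from_name: impl Fn(&str) -> Option<Uuid>,
    active_bots: &[Uuid],
) -> Result<Uuid, String> {
    if let Some(name) = bot_name {
        if let Some(id) = bot_from_name(name) {
            return Ok(id);
        }
    }
    active_bots
        .first()
        .copied()
        .ok_or_else(|| "no active bots exist".to_string())
}
```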
6f59cdaab6 feat(auth): add suggestion support and clean up code
- Added new ADD_SUGGESTION keyword handler to support sending suggestions in responses
- Removed unused env import in hear_talk module
- Simplified bot_id assignment to use a static string
- Added suggestions field to BotResponse struct
- Improved SET_CONTEXT keyword to take both name and value parameters
- Fixed whitespace in auth handler
- Enhanced error handling for suggestion sending

The changes improve the suggestion system functionality while cleaning up unused code and standardizing response handling.
2025-10-31 20:55:13 -03:00
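A minimal sketch of the response shape with the new field, assuming serde; every field other than `suggestions` is illustrative:

```rust
use serde::Serialize;

// Hypothetical shape of BotResponse after adding the suggestions field.
#[derive(Serialize)]
struct Suggestion {
    label: String, // illustrative; the real Suggestion model may differ
}

#[derive(Serialize)]
struct BotResponse {
    bot_id: String,
    content: String,
    suggestions: Vec<Suggestion>, // new field described in the commit above
}
```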
4337bd4229 feat(suggestions): enhance suggestion handling and context management in WebSocket 2025-10-30 14:53:06 -03:00
d278b95fac feat(keywords, ui): add suggestion module and UI for context switching
Added new `add_suggestion` module to support suggestion handling logic.
Updated `index.html` to include a dynamic suggestions container that fetches and displays context suggestions.
This improves user experience by enabling quick context changes through interactive suggestion buttons.
2025-10-29 18:54:33 -03:00
23c3d7a1e0 Refactor script tags in HTML files to remove type="module" and update welcome page with authentication logic and styles 2025-10-26 08:07:14 -03:00
dfe7e4e4b6 Add About and Login pages with responsive design and user authentication
- Created a new About page (index.html) detailing the BotServer platform, its features, and technology stack.
- Developed a Login page (login.html) with sign-in and sign-up functionality, including form validation and user feedback messages.
- Removed the empty style.css file as it is no longer needed.
2025-10-26 00:02:19 -03:00
a0c367d79b Revert "Implement token-based context usage in chat UI"
This reverts commit 82aa3e8d36.
2025-10-24 11:17:22 -03:00
82aa3e8d36 Implement token-based context usage in chat UI
- Replace simple message count with token-based calculation
- Add token estimation function (4 chars ≈ 1 token)
- Set MAX_TOKENS to 5000 and MIN_DISPLAY_PERCENTAGE to 20
- Update context usage display to show token count percentage
- Track tokens for both user and assistant messages
- Handle server-provided context usage as ratio of MAX_TOKENS
2025-10-23 16:33:23 -03:00
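The estimation lives in the web client's JavaScript; below is a Rust transcription of the arithmetic for consistency with the other sketches (treating MIN_DISPLAY_PERCENTAGE as a lower clamp is an assumption):

```rust
const MAX_TOKENS: usize = 5000;
const MIN_DISPLAY_PERCENTAGE: usize = 20;

// Rough token estimate: about 4 characters per token.
fn estimate_tokens(text: &str) -> usize {
    text.chars().count() / 4
}

// Context usage shown as a percentage of MAX_TOKENS, never below the
// minimum display threshold.
fn context_usage_percentage(total_tokens: usize) -> usize {
    let pct = (total_tokens * 100 / MAX_TOKENS).min(100);
    pct.max(MIN_DISPLAY_PERCENTAGE)
}
```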
01264411ba - New web assets for 6.0.5. 2025-10-20 23:44:47 -03:00
9c36aa10fa - More automation from start to web, user sessions. 2025-10-20 23:32:49 -03:00
93675c28d9 Add scroll-to-bottom button and context usage indicator with styling improvements 2025-10-17 15:12:19 -03:00
648e7f48f9 Refactor LLM parsing and overhaul connection UI
- Strip content up to the “final<|message|>” token in OpenAI responses.
- Replace the text‑based connection‑status indicator with a small
  flashing circle.
- Simplify updateConnectionStatus to take only the status argument.
- Remove special handling of the initial assistant message and
  streamline empty‑state removal.
- Clean up stray blank lines in the announcement template.
2025-10-15 22:24:04 -03:00
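A sketch of the stripping step, assuming the marker appears literally in the response text:

```rust
// Hypothetical sketch: keep only the content after the final-channel marker,
// as described in the commit above.
fn strip_to_final(raw: &str) -> &str {
    const MARKER: &str = "final<|message|>";
    match raw.find(MARKER) {
        Some(idx) => &raw[idx + MARKER.len()..],
        None => raw, // no marker: return the text unchanged
    }
}
```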
ff89298e61 Refactor async GET and LLM, add connection UI
- Execute GET requests in a dedicated thread with its own Tokio runtime,
  add timeout handling and clearer error messages.
- Tighten `is_safe_path` checks and simplify HTTP/S3 logic.
- Change `llm_keyword` to accept `Arc<AppState>`, add prompt builder,
  run LLM generation in an isolated thread with timeout.
- Update keyword registration call in `basic/mod.rs`.
- Convert template script to use `let` declarations and return a
  boolean.
- Introduce connection‑status indicator in the web UI with styles,
  automatic reconnection attempts, and proper WS/WSS handling for voice.
2025-10-15 21:18:01 -03:00
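A sketch of the dedicated-thread pattern with its own runtime and a timeout, assuming reqwest for the GET; the URL handling and duration are placeholders:

```rust
use std::time::Duration;

// Hypothetical sketch: run a GET on a dedicated thread with its own Tokio
// runtime and a hard timeout, as described in the commit above.
fn blocking_get(url: String) -> Result<String, String> {
    std::thread::spawn(move || {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .map_err(|e| format!("runtime error: {e}"))?;
        rt.block_on(async {
            let resp = tokio::time::timeout(Duration::from_secs(30), reqwest::get(url.as_str()))
                .await
                .map_err(|_| "GET timed out".to_string())? // timeout elapsed
                .map_err(|e| format!("GET failed: {e}"))?; // request error
            resp.text()
                .await
                .map_err(|e| format!("failed to read body: {e}"))
        })
    })
    .join()
    .map_err(|_| "worker thread panicked".to_string())?
}
```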
a293c0e083 - Fix on the web presentation. 2025-10-15 10:37:04 -03:00
f626793d77 Refactor LLM flow, add prompts, fix UI streaming
- Extract LLM generation into `execute_llm_generation` and simplify
  keyword handling.
- Prepend system prompt and session context to LLM prompts in
  `BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field.
- Add async `get_session_context` and stop injecting Redis context into
  conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
  project.
- Use the existing DB pool instead of creating a separate custom
  connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing,
  improve streaming logic, reset streaming state on thinking end, and
  remove unused test functions.
2025-10-15 01:14:37 -03:00
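A sketch of the WebSocket parsing step described above, using serde_json; the payload shape is an assumption:

```rust
// Hypothetical sketch: parse an incoming WebSocket frame as JSON and use its
// "content" field, falling back to the raw text if parsing fails.
fn extract_content(frame: &str) -> String {
    match serde_json::from_str::<serde_json::Value>(frame) {
        Ok(value) => value
            .get("content")
            .and_then(|c| c.as_str())
            .unwrap_or(frame)
            .to_string(),
        Err(_) => frame.to_string(),
    }
}
```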
573437cb6f - Database schema added. 2025-10-14 16:34:34 -03:00
37d8fab617 Refactor session handling and auth flow 2025-10-14 14:51:49 -03:00
51c9aea63b Add botserver prompt, auth template, and CSS 2025-10-14 13:51:54 -03:00
8f96cd1015 Refactor server code and add auth API fixes 2025-10-14 13:51:27 -03:00
3aeb3ebc70 - New features for start.bas 2025-10-13 17:43:03 -03:00
a7c74b837e Enhance streaming with events and warning API
- Introduce event-driven streaming with thinking_start, thinking_end,
  and warn events; skip sending analysis content to clients
- Add /api/warn endpoint to dispatch warnings for sessions and channels;
  web UI displays alerts
- Emit session_start/session_end events over WebSocket and instrument
  logging throughout orchestration
- Update web client: show thinking indicator and warning banners; switch
  LiveKit client URL to CDN
- Extend BotOrchestrator with send_event and send_warning, expand
  session/tool workflow
- Improve REST endpoints for sessions/history with better logging and
  error handling
- Update docs and prompts: DEV.md usage note; adjust dev build_prompt
  script
2025-10-13 00:31:08 -03:00
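A sketch of the streaming event payloads, assuming serde; the event names follow the commit above, while the envelope shape is illustrative:

```rust
use serde::Serialize;

// Hypothetical envelope for the events described above; serialized once and
// pushed over the WebSocket by the orchestrator.
#[derive(Serialize)]
#[serde(tag = "event", rename_all = "snake_case")]
enum StreamEvent {
    ThinkingStart,
    ThinkingEnd,
    Warn { message: String },
    SessionStart { session_id: String },
    SessionEnd { session_id: String },
}

fn to_ws_frame(event: &StreamEvent) -> String {
    serde_json::to_string(event).unwrap_or_default()
}
```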
e14908fc0b Migrate to web/index.html and OpenAI client
- Load index.html from web/index.html instead of templates/static
- Initialize OpenAIClient with an empty key and local endpoint
- Remove the old static/index.html file
2025-10-12 17:41:41 -03:00