- Increased schedule field size from bpchar(12) to bpchar(20) in database schema
- Reduced task checking interval from 60s to 5s for more responsive automation
- Improved error handling for schedule parsing and execution
- Added proper error logging for automation failures
- Changed automation execution to use bot_id instead of the nil UUID
- Enhanced HEAR keyword functionality (partial diff shown)
Use `print!` with a stdout flush for smoother in-place GPU/CPU/token progress updates in the bot module. Simplify the context indicator logic in the web UI by always removing the visibility class to streamline behavior.
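A minimal sketch of the in-place progress pattern (labels and field names are illustrative, not the bot module's actual ones):

```rust
use std::io::{self, Write};

// Overwrite the same terminal line instead of printing a new line per update.
// `\r` returns the cursor to the start of the line; flushing is needed because
// stdout is line-buffered and the partial line would otherwise stay hidden.
fn report_progress(gpu_pct: f32, cpu_pct: f32, tokens: usize) {
    print!("\rGPU {gpu_pct:>5.1}% | CPU {cpu_pct:>5.1}% | tokens {tokens}");
    io::stdout().flush().ok();
}
```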
Add the `cron` crate (v0.15.0) to Cargo.toml and Cargo.lock to enable scheduling capabilities.
Introduce a new `broadcast_theme_change` helper in `src/automation/mod.rs` that parses CSV theme data and pushes JSON theme update events to all active response channels.
Clean up unused imports in the automation module and add `ConfigManager` import for future configuration handling.
Update `add-req.sh` to adjust the list of processed directories (comment out `auth`, enable `basic`, `config`, `context`, and `drive_monitor`).
These changes lay groundwork for scheduled tasks and dynamic theme updates across the application.
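As a rough illustration of what the new `cron` dependency enables (the actual schedule expressions used by the automations are not shown here), the crate parses cron expressions and yields upcoming fire times:

```rust
use std::str::FromStr;
use chrono::Utc;
use cron::Schedule;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // cron's syntax includes a seconds field:
    // sec  min  hour  day-of-month  month  day-of-week  year
    let schedule = Schedule::from_str("0 37 * * * * *")?; // hypothetical: every hour at :37
    for next in schedule.upcoming(Utc).take(3) {
        println!("next run: {next}");
    }
    Ok(())
}
```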
Removed `IF NOT EXISTS` from the unique constraint in `system_automations` to ensure proper enforcement, and deleted the unused `floatLogo` click event listener to clean up UI behavior.
Added new configuration options for theme colors (green, yellow) and a custom logo URL to enhance branding and visual customization in announcement templates.
Updated the regex pattern in DeepseekR3Handler to use the (?s) flag for dot-matches-newline behavior when removing `<think>` tags. Added a test case that verifies the handler correctly processes content with multiline think tags. Also made styling changes to the web interface, though the full diff was truncated.
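A minimal sketch of the pattern change, assuming the `regex` crate:

```rust
use regex::Regex;

fn strip_think_tags(content: &str) -> String {
    // Without (?s), `.` does not match newlines, so multiline <think> blocks
    // were left in the output; (?s) makes the dot match newlines as well.
    let re = Regex::new(r"(?s)<think>.*?</think>").expect("valid regex");
    re.replace_all(content, "").trim().to_string()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn removes_multiline_think_block() {
        let input = "<think>line one\nline two</think>Hello";
        assert_eq!(strip_think_tags(input), "Hello");
    }
}
```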
Added interaction count tracking for sessions, backed by Redis with an in-memory fallback (sketched after the list below). Implemented conversation history replacement to compact and update message history. The changes include:
- New AtomicUsize counter in SessionManager for interaction tracking
- increment_and_get_interaction_count method with Redis support
- replace_conversation_history to update and compact message history
- Maintains existing functionality while adding new features
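A minimal sketch of the counter with its in-memory fallback (the Redis key format and struct fields are assumptions, not the exact SessionManager layout):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use redis::Commands;

struct SessionManager {
    cache: Option<redis::Client>,     // None => use the local counter only
    interaction_count: AtomicUsize,
}

impl SessionManager {
    fn increment_and_get_interaction_count(&self, session_id: &str) -> usize {
        if let Some(client) = &self.cache {
            if let Ok(mut conn) = client.get_connection() {
                // INCR is atomic on the Redis side, so concurrent sessions stay consistent.
                let key = format!("session:{session_id}:interactions"); // hypothetical key format
                if let Ok(n) = conn.incr::<_, _, usize>(&key, 1usize) {
                    return n;
                }
            }
        }
        // In-memory fallback when Redis is unavailable.
        self.interaction_count.fetch_add(1, Ordering::SeqCst) + 1
    }
}
```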
- Added duplicate filtering for suggestions in both backend (Rust) and frontend (JavaScript)
- Changed announcement summary schedule from every 15 minutes to hourly at :37
- Simplified LLM prompt for document summarization
- Updated LLM server configuration with reduced GPU layers and increased context size
- Removed memory mapping and lock settings for LLM server
- Improved HTML formatting and added missing newline at EOF
- Added `trace!` logging in `bot_memory.rs` to record retrieved memory values for easier debugging.
- Refactored `BotOrchestrator` in `bot/mod.rs`:
  - Removed duplicate session save block and consolidated message persistence.
  - Replaced low-level LLM streaming with a structured `UserMessage` and `stream_response` workflow, improving error handling and readability.
- Updated configuration loading in `config/mod.rs`:
  - Imported `get_default_bot` and enhanced `get_config` to fall back to the default bot configuration when the primary query fails.
  - Established a fresh DB connection for the fallback path to avoid borrowing issues.
- Remove redundant background and filter properties from logo-icon and assistant-avatar
- Remove redundant filter property from light theme logo-icon
- Invert assistant-avatar image in light theme for better visibility
- Move theme-toggle button further right (60px to 160px) to prevent overlap
- Added LLM server readiness check in AutomationService before starting tasks
- Renamed `user` parameter to `user_session` in execute_talk for clarity
- Updated BotResponse fields to use user_session data instead of hardcoded values
- Improved Redis key generation in execute_talk to use user_session fields
- Removed commented Redis code in set_current_context_keyword
The changes ensure proper initialization of automation tasks by checking LLM server availability first, and improve code clarity by using more descriptive variable names for user session data.
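A rough sketch of such a readiness gate (the health endpoint and retry policy are assumptions):

```rust
use std::time::Duration;

// Poll the LLM server until it answers, so scheduled automations don't fire
// against a backend that is still loading its model.
async fn wait_for_llm_ready(base_url: &str, max_attempts: u32) -> bool {
    let client = reqwest::Client::new();
    for attempt in 1..=max_attempts {
        match client
            .get(format!("{base_url}/health")) // hypothetical endpoint
            .timeout(Duration::from_secs(5))
            .send()
            .await
        {
            Ok(resp) if resp.status().is_success() => return true,
            _ => {
                log::warn!("LLM server not ready (attempt {attempt}/{max_attempts})");
                tokio::time::sleep(Duration::from_secs(2)).await;
            }
        }
    }
    false
}
```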
- Added UNIQUE constraint on system_automations (bot_id, kind, param) to prevent duplicate automations
- Refactored execute_action to accept the full Automation struct instead of just param (see the sketch after this list)
- Simplified bot name resolution by using automation.bot_id directly
- Improved error handling in action execution with proper error propagation
- Removed redundant bot name lookup logic by leveraging existing bot_id
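A minimal sketch of the refactored signature, with hypothetical field names on `Automation`:

```rust
use uuid::Uuid;

// Hypothetical shape; the real struct mirrors the system_automations table.
pub struct Automation {
    pub id: Uuid,
    pub bot_id: Uuid,
    pub kind: String,
    pub param: String,
    pub schedule: String,
}

// Accepting the whole record means the action has bot_id (and schedule)
// available directly, instead of re-resolving the bot by name.
pub async fn execute_action(automation: &Automation) -> anyhow::Result<()> {
    match automation.kind.as_str() {
        "talk" => println!("talk for bot {}: {}", automation.bot_id, automation.param),
        other => anyhow::bail!("unknown automation kind: {other}"),
    }
    Ok(())
}
```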
Removed the redundant "I heard: " prefix from the message content in the execute_talk function. The message is now stored as-is without additional formatting, making it more straightforward and consistent with other message handling.
- Replace Orbitron/Inter fonts with Space Grotesk/JetBrains Mono
- Implement new color scheme with primary, secondary, and accent variables
- Add radial gradient background with animated glow effects
- Introduce grain overlay for texture
- Redesign sidebar with improved transitions and blur effects
- Update empty state styling and remove neon effects
- Optimize layout spacing and shadows
- Streamline API call in chat initialization
The changes modernize the UI with a more cohesive design language, better visual hierarchy, and improved readability while maintaining functionality. The new theme features a dark color palette with vibrant accents and subtle textures for depth.
Added new CSS classes for suggestion buttons and container, moving them from message content to footer. Removed inline styles in favor of CSS classes for better maintainability. The suggestions now appear at the top of the footer with consistent styling and hover effects.
Enhanced suggestion handling to include empty state in message selection and improved UX by:
- Clearing input field after selecting a suggestion
- Adding selected suggestion as user message to chat
- Maintaining all existing functionality while simplifying the flow
Changes ensure more intuitive user interaction with suggestions while keeping the chat interface clean.
Changed async Redis operations to synchronous in the add_suggestion_keyword function. Removed the unnecessary async/await and tokio::spawn since the operations are now blocking. This simplifies the code while maintaining the same functionality of storing suggestions and context state in Redis. Error handling remains robust with proper early returns.
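A minimal sketch of the synchronous pattern using the `redis` crate's blocking connection (the key format is illustrative):

```rust
use redis::Commands;

fn add_suggestion_keyword(cache: &redis::Client, session_id: &str, suggestion: &str) {
    // Blocking calls: no async/await or tokio::spawn needed.
    let mut conn = match cache.get_connection() {
        Ok(conn) => conn,
        Err(e) => {
            log::error!("redis connection failed: {e}");
            return; // early return keeps error handling simple
        }
    };
    let key = format!("session:{session_id}:suggestions"); // hypothetical key
    if let Err(e) = conn.rpush::<_, _, ()>(&key, suggestion) {
        log::error!("failed to store suggestion: {e}");
    }
}
```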
- Remove comments from launch.json configuration
- Simplify Redis key format in set_current_context_keyword by removing context_name
- Remove unused /api/start endpoint and related session start logic
- Change log level from info to trace for config sync messages
- Add trace logging to drive_monitor module
The changes focus on code cleanup and simplification, particularly around session handling and logging. The Redis key format was simplified as the context_name was deemed unnecessary for uniqueness.
Updated references from `redis_client`, `s3_client`, and `custom_conn` to unified names `cache`, `drive`, and `conn` for consistency across modules. Adjusted `add_suggestion_keyword` to use clearer parameter naming and enhanced custom syntax registration for better readability and maintainability.
Implemented a new `bot_from_name` method in `AuthService` to retrieve a bot's UUID by its name, enabling explicit bot selection via a `bot_name` query parameter in the authentication endpoint. Updated `auth_handler` to prioritize this parameter, fall back to the first active bot when necessary, and handle cases where no active bots exist.
Extended `BotOrchestrator` to import the `Suggestion` model and fetch suggestion data from Redis for each user session. Integrated these suggestions into the `BotResponse` payload, ensuring clients receive relevant suggestions alongside bot messages.
These changes improve bot selection flexibility and enrich the response data with contextual suggestions.
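A rough sketch of the name lookup, assuming an sqlx/Postgres backend with the `uuid` feature (table and column names are guesses):

```rust
use sqlx::PgPool;
use uuid::Uuid;

// Resolve a bot's UUID from its name so auth can honor a ?bot_name=... parameter.
async fn bot_from_name(pool: &PgPool, name: &str) -> Result<Option<Uuid>, sqlx::Error> {
    let row: Option<(Uuid,)> =
        sqlx::query_as("SELECT id FROM bots WHERE name = $1 AND is_active = true")
            .bind(name)
            .fetch_optional(pool)
            .await?;
    Ok(row.map(|(id,)| id))
}
```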
- Added new ADD_SUGGESTION keyword handler to support sending suggestions in responses
- Removed unused env import in hear_talk module
- Simplified bot_id assignment to use static string
- Added suggestions field to BotResponse struct
- Improved SET_CONTEXT keyword to take both name and value parameters
- Fixed whitespace in auth handler
- Enhanced error handling for suggestion sending
The changes improve the suggestion system functionality while cleaning up unused code and standardizing response handling.
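A minimal sketch of the response shape with the new field (all field names other than `suggestions` are illustrative):

```rust
use serde::Serialize;

#[derive(Serialize)]
struct Suggestion {
    label: String,   // hypothetical fields
    context: String,
}

#[derive(Serialize)]
struct BotResponse {
    bot_id: String,
    content: String,
    // Populated from Redis per user session; empty when nothing is stored.
    suggestions: Vec<Suggestion>,
}
```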
Added new `add_suggestion` module to support suggestion handling logic.
Updated `index.html` to include a dynamic suggestions container that fetches and displays context suggestions.
This improves user experience by enabling quick context changes through interactive suggestion buttons.
- Created a new About page (index.html) detailing the BotServer platform, its features, and technology stack.
- Developed a Login page (login.html) with sign-in and sign-up functionality, including form validation and user feedback messages.
- Removed the empty style.css file as it is no longer needed.
- Replace simple message count with token-based calculation
- Add token estimation function (4 chars ≈ 1 token; sketched below)
- Set MAX_TOKENS to 5000 and MIN_DISPLAY_PERCENTAGE to 20
- Update context usage display to show token count percentage
- Track tokens for both user and assistant messages
- Handle server-provided context usage as a ratio of MAX_TOKENS
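The change itself lives in the web client's JavaScript; as an illustration of the arithmetic only (and interpreting MIN_DISPLAY_PERCENTAGE as a display floor, which is an assumption), a Rust sketch:

```rust
const MAX_TOKENS: usize = 5_000;
const MIN_DISPLAY_PERCENTAGE: usize = 20;

// Rough heuristic: about 4 characters per token, rounded up.
fn estimate_tokens(text: &str) -> usize {
    (text.chars().count() + 3) / 4
}

// Context usage shown to the user, floored so the indicator never looks empty.
fn context_usage_percentage(total_tokens: usize) -> usize {
    let pct = (total_tokens * 100) / MAX_TOKENS;
    pct.clamp(MIN_DISPLAY_PERCENTAGE, 100)
}
```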
- Strip content up to the `final<|message|>` token in OpenAI responses.
- Replace the text‑based connection‑status indicator with a small
flashing circle.
- Simplify updateConnectionStatus to take only the status argument.
- Remove special handling of the initial assistant message and
streamline empty‑state removal.
- Clean up stray blank lines in the announcement template.
- Execute GET requests in a dedicated thread with its own Tokio runtime,
add timeout handling and clearer error messages.
- Tighten `is_safe_path` checks and simplify HTTP/S3 logic.
- Change `llm_keyword` to accept `Arc<AppState>`, add prompt builder,
run LLM generation in an isolated thread with timeout.
- Update keyword registration call in `basic/mod.rs`.
- Convert template script to use `let` declarations and return a
boolean.
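A minimal sketch of the isolated-runtime pattern for the GET requests (URL handling and the 30s timeout are illustrative):

```rust
use std::time::Duration;

// Run a GET on its own OS thread with a dedicated Tokio runtime, so it cannot
// block or be starved by the main async executor, and give it a hard timeout.
fn blocking_get(url: String) -> Result<String, String> {
    std::thread::spawn(move || {
        let rt = tokio::runtime::Runtime::new().map_err(|e| e.to_string())?;
        rt.block_on(async {
            let fut = async {
                reqwest::get(url.as_str())
                    .await
                    .map_err(|e| format!("request failed: {e}"))?
                    .text()
                    .await
                    .map_err(|e| format!("body read failed: {e}"))
            };
            tokio::time::timeout(Duration::from_secs(30), fut)
                .await
                .map_err(|_| format!("GET {url} timed out after 30s"))?
        })
    })
    .join()
    .map_err(|_| "worker thread panicked".to_string())?
}
```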
- Introduce connection‑status indicator in the web UI with styles,
automatic reconnection attempts, and proper WS/WSS handling for voice.
- Extract LLM generation into `execute_llm_generation` and simplify
keyword handling.
- Prepend system prompt and session context to LLM prompts in
`BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field (sketched below).
- Add async `get_session_context` and stop injecting Redis context into
conversation history.
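A minimal sketch of the incoming-message parsing (any envelope field beyond `content`, and the raw-text fallback, are assumptions):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ClientMessage {
    content: String,            // the text actually handed to the orchestrator
    #[serde(default)]
    session_id: Option<String>, // hypothetical extra field
}

// Accept a JSON envelope; fall back to treating the frame as raw text.
fn parse_ws_message(raw: &str) -> String {
    match serde_json::from_str::<ClientMessage>(raw) {
        Ok(msg) => msg.content,
        Err(_) => raw.to_string(),
    }
}
```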
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
project.
- Use the existing DB pool instead of creating a separate custom
connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing,
improve streaming logic, reset streaming state on thinking end, and
remove unused test functions.
- Introduce event-driven streaming with thinking_start, thinking_end,
and warn events; skip sending analysis content to clients
- Add /api/warn endpoint to dispatch warnings for sessions and channels;
web UI displays alerts
- Emit session_start/session_end events over WebSocket and instrument
logging throughout orchestration
- Update web client: show thinking indicator and warning banners; switch
LiveKit client URL to CDN
- Extend BotOrchestrator with send_event and send_warning, expand
session/tool workflow
- Improve REST endpoints for sessions/history with better logging and
error handling
- Update docs and prompts: DEV.md usage note; adjust dev build_prompt
script
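A rough sketch of how `send_event` might serialize events onto a session's outbound channel (the event shape and channel type are assumptions):

```rust
use serde_json::json;
use tokio::sync::mpsc;

// Hypothetical: one outbound channel per connected session.
async fn send_event(
    tx: &mpsc::Sender<String>,
    event: &str,            // e.g. "thinking_start", "thinking_end", "warn"
    data: serde_json::Value,
) -> Result<(), mpsc::error::SendError<String>> {
    let payload = json!({ "event": event, "data": data }).to_string();
    tx.send(payload).await
}
```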
- Load index.html from web/index.html instead of templates/static
- Initialize OpenAIClient with an empty key and local endpoint
- Remove the old static/index.html file