The large `process_message` function was deleted from `src/bot/mod.rs`. Its responsibilities have been migrated to newer, more modular handlers, eliminating dead code and simplifying the BotOrchestrator. This refactor reduces complexity, improves maintainability, and aligns the codebase with the updated message processing architecture.
- Reformatted `update_session_context` call for better readability.
- Moved context‑change (type 4) handling to a later stage in the processing pipeline and removed the early return, so the remaining pipeline stages still run after a context change.
- Adjusted deduplication logic formatting and clarified condition.
- Restored saving of user messages after context handling, preserving message history.
- Added detailed logging of the LLM prompt for debugging.
- Simplified JSON extraction for `message_type` and applied minor whitespace clean‑ups.
- Overall code refactor improves maintainability and corrects context‑change handling behavior.
- Deduplicate consecutive messages with the same role in conversation history (see the sketch after this list)
- Add n_predict configuration option for LLM server
- Prevent duplicate message storage in session manager
- Update announcement schedule timing from 37 to 55 minutes
- Add default n_predict value in default bot config
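
A minimal sketch of the role-based deduplication mentioned in the first bullet above, assuming a simple `ChatMessage { role, content }` shape for history entries; the real type and the keep-latest policy shown here are assumptions:

```rust
#[derive(Clone, Debug)]
struct ChatMessage {
    role: String,    // e.g. "user", "assistant", "system"
    content: String,
}

/// Collapse consecutive history entries that share the same role so the
/// conversation sent to the LLM alternates cleanly between roles.
fn dedup_consecutive_roles(history: &[ChatMessage]) -> Vec<ChatMessage> {
    let mut out: Vec<ChatMessage> = Vec::with_capacity(history.len());
    for msg in history {
        match out.last_mut() {
            Some(prev) if prev.role == msg.role => {
                // Same role twice in a row: keep only the latest message.
                // (Keeping the first would be an equally valid policy; the
                // exact rule used by the codebase is an assumption here.)
                *prev = msg.clone();
            }
            _ => out.push(msg.clone()),
        }
    }
    out
}
```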
- Added duplicate filtering for suggestions in both backend (Rust) and frontend (JavaScript)
- Changed announcement summary schedule from every 15 minutes to hourly at :37
- Simplified LLM prompt for document summarization
- Updated LLM server configuration with reduced GPU layers and increased context size
- Removed memory mapping and lock settings for LLM server
- Improved HTML formatting and added missing newline at EOF
Enhances BotOrchestrator by integrating optional semantic caching via LangCache for faster LLM responses. Also refactors message saving to occur before and after direct mode handling, and simplifies context change logic for better clarity and flow.
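
A rough illustration of how an optional semantic cache lookup can short-circuit the LLM call. The `SemanticCache` trait and its `lookup`/`store` methods are hypothetical stand-ins, not the actual LangCache API:

```rust
use std::sync::Arc;

// Hypothetical interface standing in for the LangCache integration.
#[async_trait::async_trait]
trait SemanticCache: Send + Sync {
    async fn lookup(&self, prompt: &str) -> Option<String>;
    async fn store(&self, prompt: &str, response: &str);
}

struct BotOrchestrator {
    cache: Option<Arc<dyn SemanticCache>>, // None when caching is disabled
}

impl BotOrchestrator {
    async fn answer(&self, prompt: &str) -> String {
        // Serve a semantically similar cached answer when one exists.
        if let Some(cache) = &self.cache {
            if let Some(hit) = cache.lookup(prompt).await {
                return hit;
            }
        }
        let response = self.call_llm(prompt).await;
        if let Some(cache) = &self.cache {
            cache.store(prompt, &response).await;
        }
        response
    }

    async fn call_llm(&self, _prompt: &str) -> String {
        // Placeholder for the real streaming LLM call.
        String::from("...")
    }
}
```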
- Added `trace!` logging in `bot_memory.rs` to record retrieved memory values for easier debugging.
- Refactored `BotOrchestrator` in `bot/mod.rs`:
- Removed duplicate session save block and consolidated message persistence.
- Replaced low‑level LLM streaming with a structured `UserMessage` and `stream_response` workflow, improving error handling and readability.
- Updated configuration loading in `config/mod.rs`:
- Imported `get_default_bot` and enhanced `get_config` to fall back to the default bot configuration when the primary query fails.
- Established a fresh DB connection for the fallback path to avoid borrowing issues.
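
A simplified sketch of the fallback path described in the last two bullets; `load_bot_config`, `establish_connection`, and the types below are hypothetical helpers used to keep the example self-contained:

```rust
// Placeholder types for illustration only.
struct BotConfig;
struct DbConnection;

fn establish_connection() -> Result<DbConnection, String> {
    // Open a fresh database connection for the fallback path, avoiding
    // borrow conflicts with the connection already held by the caller.
    Ok(DbConnection)
}

fn load_bot_config(_conn: &mut DbConnection, _bot_id: &str) -> Result<BotConfig, String> {
    Err("not found".into()) // placeholder for the primary query
}

fn get_default_bot(_conn: &mut DbConnection) -> Result<BotConfig, String> {
    Ok(BotConfig) // placeholder for the default-bot lookup
}

fn get_config(conn: &mut DbConnection, bot_id: &str) -> Result<BotConfig, String> {
    match load_bot_config(conn, bot_id) {
        Ok(cfg) => Ok(cfg),
        Err(_) => {
            // Primary query failed: fall back to the default bot
            // configuration using a fresh connection.
            let mut fallback_conn = establish_connection()?;
            get_default_bot(&mut fallback_conn)
        }
    }
}
```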
- Remove `scripts_dir` from `AutomationService` and its constructor, as it was unused.
- Drop LLM and embedding server readiness checks; the service now only schedules periodic tasks.
- Increase the health‑check interval from 5 seconds to 15 seconds for reduced load.
- Streamline the LLM keyword prompt to a concise `"User: {}"` format, removing verbose boilerplate.
- Remove unnecessary logging and LLM cache handling code from the bot orchestrator, cleaning up unused environment variable checks and cache queries.
- Added LLM server readiness check in AutomationService before starting tasks
- Renamed `user` parameter to `user_session` in execute_talk for clarity
- Updated BotResponse fields to use user_session data instead of hardcoded values
- Improved Redis key generation in execute_talk to use user_session fields
- Removed commented Redis code in set_current_context_keyword
The changes ensure proper initialization of automation tasks by checking LLM server availability first, and improve code clarity by using more descriptive variable names for user session data.
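
One way such a readiness check might look, polling an assumed health endpoint with `reqwest` before scheduling tasks; the endpoint path, retry count, and delay are assumptions:

```rust
use std::time::Duration;

/// Poll the LLM server until it responds, or give up after `max_attempts`.
async fn wait_for_llm_server(base_url: &str, max_attempts: u32) -> bool {
    // "/health" is an assumed endpoint; adjust to whatever the server exposes.
    let url = format!("{base_url}/health");
    for attempt in 1..=max_attempts {
        match reqwest::get(&url).await {
            Ok(resp) if resp.status().is_success() => return true,
            _ => {
                tracing::debug!("LLM server not ready (attempt {}), retrying...", attempt);
                tokio::time::sleep(Duration::from_secs(5)).await;
            }
        }
    }
    false
}
```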
- Updated `BootstrapManager` to use `AppConfig::from_env().expect(...)` and `AppConfig::from_database(...).expect(...)`, so configuration failures surface immediately instead of being silently ignored.
- Refactored error propagation in bootstrap flow to use `?` where appropriate, improving reliability of configuration loading.
- Added import of `llm_models` in `bot` module and introduced `ConfigManager` usage to fetch the LLM model identifier at runtime.
- Integrated dynamic LLM model handler selection via `llm_models::get_handler(&model)`.
- Replaced static environment variable retrieval for embedding configuration with runtime configuration lookup.
Introduce a new `cancel_job` method in the `LLMProvider` trait to allow cancellation of ongoing LLM tasks. Implement no-op versions for OpenAI, Anthropic, and Mock providers. Update the WebSocket handler to invoke job cancellation when a session closes, ensuring better resource management and preventing orphaned tasks. Also, fix unused variable warning in `add_suggestion.rs`.
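
A sketch of the shape such a trait extension might take. The commit adds explicit no-op implementations per provider; a default method (shown here) achieves the same effect. Names beyond `LLMProvider` and `cancel_job` are illustrative:

```rust
#[async_trait::async_trait]
trait LLMProvider: Send + Sync {
    async fn generate(&self, prompt: &str) -> Result<String, String>;

    /// Cancel any in-flight generation for the given job.
    /// Defaults to a no-op, matching the behavior of providers without
    /// cancellation support (OpenAI, Anthropic, Mock in this codebase).
    async fn cancel_job(&self, _job_id: &str) -> Result<(), String> {
        Ok(())
    }
}

struct MockProvider;

#[async_trait::async_trait]
impl LLMProvider for MockProvider {
    async fn generate(&self, _prompt: &str) -> Result<String, String> {
        Ok("mock response".into())
    }
    // cancel_job falls back to the default no-op.
}
```

The WebSocket handler can then call `provider.cancel_job(&job_id)` when a session closes so abandoned generations do not keep consuming resources.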
- Updated RUST_LOG environment variable in launch.json to include more detailed debug logging configuration
- Changed trace! logs to debug! in AutomationService for better visibility
- Replaced environment variable usage with get_default_bot helper function
- Improved bot path resolution by using bot name from database
- Added error handling for bot name query
- Simplified S3 bucket name generation using get_default_bot
- Removed unused imports and environment variable dependencies
Add `context_length` and `context_max_length` fields to the `BotResponse` struct across the codebase.
Initialize these fields with default values (0) for existing response constructions and populate them using `ConfigManager` to retrieve the configured maximum context size for each bot.
Import `ConfigManager` in `bot/mod.rs` to access configuration.
These changes enable tracking of the current and maximum context sizes in bot responses, supporting future features that rely on context management.
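
Illustrative shape of the extended struct; the surrounding field is a placeholder for whatever `BotResponse` already carries:

```rust
#[derive(Debug, Clone, serde::Serialize)]
struct BotResponse {
    message: String, // placeholder for existing fields
    // New fields: current context size and the configured maximum,
    // defaulting to 0 where the values are not yet populated.
    context_length: usize,
    context_max_length: usize,
}

impl BotResponse {
    fn new(message: String) -> Self {
        Self {
            message,
            context_length: 0,
            context_max_length: 0, // later filled from ConfigManager
        }
    }
}
```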
Changed async Redis operations to synchronous in add_suggestion_keyword function. Removed unnecessary async/await and tokio::spawn since the operations are now blocking. This simplifies the code while maintaining the same functionality of storing suggestions and context state in Redis. Error handling remains robust with proper early returns.
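
A condensed sketch of the synchronous pattern, using the blocking `redis` crate API with early returns on failure; the key format is made up for illustration:

```rust
use redis::Commands;

fn add_suggestion_keyword(redis_url: &str, session_id: &str, suggestion: &str) {
    // Open a blocking connection; bail out early if Redis is unavailable.
    let client = match redis::Client::open(redis_url) {
        Ok(c) => c,
        Err(e) => {
            tracing::error!("failed to create Redis client: {}", e);
            return;
        }
    };
    let mut conn = match client.get_connection() {
        Ok(c) => c,
        Err(e) => {
            tracing::error!("failed to connect to Redis: {}", e);
            return;
        }
    };

    // Store the suggestion synchronously; no async/await or tokio::spawn needed.
    let key = format!("suggestion:{session_id}"); // illustrative key format
    if let Err(e) = conn.set::<_, _, ()>(&key, suggestion) {
        tracing::error!("failed to store suggestion: {}", e);
    }
}
```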
- Remove comments from launch.json configuration
- Simplify Redis key format in set_current_context_keyword by removing context_name
- Remove unused /api/start endpoint and related session start logic
- Change log level from info to trace for config sync messages
- Add trace logging to drive_monitor module
The changes focus on code cleanup and simplification, particularly around session handling and logging. The Redis key format was simplified as the context_name was deemed unnecessary for uniqueness.
Updated references from `redis_client`, `s3_client`, and `custom_conn` to unified names `cache`, `drive`, and `conn` for consistency across modules. Adjusted `add_suggestion_keyword` to use clearer parameter naming and enhanced custom syntax registration for better readability and maintainability.
Implemented a new `bot_from_name` method in `AuthService` to retrieve a bot's UUID by its name, enabling explicit bot selection via a `bot_name` query parameter in the authentication endpoint. Updated `auth_handler` to prioritize this parameter, fall back to the first active bot when necessary, and handle cases where no active bots exist.
Extended `BotOrchestrator` to import the `Suggestion` model and fetch suggestion data from Redis for each user session. Integrated these suggestions into the `BotResponse` payload, ensuring clients receive relevant suggestions alongside bot messages.
These changes improve bot selection flexibility and enrich the response data with contextual suggestions.
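
A rough sketch of what a `bot_from_name` lookup might look like with Diesel; the `bots` table and its columns are assumptions about the schema:

```rust
use diesel::prelude::*;
use uuid::Uuid;

// Assumed schema; the real table and column names may differ.
diesel::table! {
    bots (id) {
        id -> Uuid,
        name -> Text,
        is_active -> Bool,
    }
}

/// Return the UUID of the active bot with the given name, if one exists.
fn bot_from_name(conn: &mut PgConnection, bot_name: &str) -> QueryResult<Option<Uuid>> {
    bots::table
        .filter(bots::name.eq(bot_name))
        .filter(bots::is_active.eq(true))
        .select(bots::id)
        .first::<Uuid>(conn)
        .optional()
}
```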
- Reordered and deduplicated `use` statements, adding missing imports for `langcache`, `DriveMonitor`, `generate_embeddings`, `Qdrant` client, and `chrono::Utc`.
- Moved Diesel prelude import into the function scope where it is used to limit its visibility.
- Refactored async task spawning to handle errors more clearly and consistently.
- Enhanced string formatting for bucket names and log messages, introducing multiline `warn!` and `error!` calls for better readability.
- Applied consistent code style (spacing, line breaks) across the module to improve maintainability.
- Added bot_id parameter to DriveMonitor::new() and pass it through BotOrchestrator
- Modified DriveMonitor to store bot_id as a field
- Simplified ConfigManager::sync_gbot_config() to accept content directly instead of file path
- Removed file reading logic from sync_gbot_config
- Cleaned up logging messages in config sync
- Improved error handling and logging in DriveMonitor's gbot check
These changes better associate monitoring with specific bots and make the config sync more flexible by accepting content directly rather than requiring a file path.
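
The signature change amounts to pushing file I/O out to the caller; a hedged before/after sketch with placeholder bodies:

```rust
use std::fs;

struct ConfigManager;

impl ConfigManager {
    // Before (roughly): the manager read the file itself.
    fn sync_gbot_config_from_path(&self, path: &str) -> Result<(), String> {
        let content = fs::read_to_string(path).map_err(|e| e.to_string())?;
        self.sync_gbot_config(&content)
    }

    // After: callers such as DriveMonitor pass the content directly, so
    // config can arrive from Drive/S3 without touching the filesystem.
    fn sync_gbot_config(&self, content: &str) -> Result<(), String> {
        // Placeholder: parse and apply the .gbot configuration here.
        let _ = content;
        Ok(())
    }
}
```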
- Added new ADD_SUGGESTION keyword handler to support sending suggestions in responses
- Removed unused env import in hear_talk module
- Simplified bot_id assignment to use a static string
- Added suggestions field to BotResponse struct
- Improved SET_CONTEXT keyword to take both name and value parameters
- Fixed whitespace in auth handler
- Enhanced error handling for suggestion sending
The changes improve the suggestion system functionality while cleaning up unused code and standardizing response handling.
- Changed RUST_LOG level from 'trace' to 'debug' in VSCode launch config
- Simplified auth handler by removing direct bot ID lookup logic
- Added HttpRequest parameter to auth handler
- Implemented bot_from_url helper for bot identification (see the sketch after this block)
- Updated auth script path to use bot-specific directory structure
- Removed unused warn import
- Improved error handling for auth script reading and execution
The changes streamline the authentication process by:
1. Moving bot identification to a dedicated helper function
2. Making the auth script path dynamic based on bot name
3. Reducing log verbosity in development
4. Cleaning up unused imports and code
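
A hypothetical sketch of the `bot_from_url` helper; how the bot name is actually encoded in the request (host, path segment, or query string) is an assumption here:

```rust
use actix_web::HttpRequest;

/// Extract the bot name from the incoming request, falling back to a default.
/// This sketch assumes a `bot` query parameter; the real helper may parse the
/// path or hostname instead.
fn bot_from_url(req: &HttpRequest) -> String {
    req.query_string()
        .split('&')
        .filter_map(|pair| pair.split_once('='))
        .find(|(key, _)| *key == "bot")
        .map(|(_, value)| value.to_string())
        .unwrap_or_else(|| "default".to_string())
}
```

The resulting name can then be used to build the bot-specific auth script path mentioned above.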
Introduce bot mounting logic in `BotOrchestrator` to load and manage active bots from the database. Enhance `BootstrapManager` by refining Diesel query usage and connection handling for better reliability and maintainability.
- Derive bot_id from BOT_GUID env var
- Guard concurrent runs with a Redis lock (see the sketch after this list)
- Read CACHE_URL for Redis connection
- Extend bot memory keyword to accept comma as separator
- Increase LLM timeouts to 180s (local and legacy)
- Update templates to use bot memory (GET_BOT_MEMORY/SET_BOT_MEMORY)
- Fix start script path to announcements.gbai
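
A minimal sketch of the Redis-based run guard, using `SET NX EX` so a crashed run releases the lock once the TTL expires; the key name and TTL are illustrative:

```rust
/// Returns true if this process acquired the run lock, false if another run holds it.
fn acquire_run_lock(conn: &mut redis::Connection, bot_id: &str) -> redis::RedisResult<bool> {
    let key = format!("botserver:run-lock:{bot_id}"); // illustrative key
    // SET key value NX EX 300: only succeeds when the key does not yet exist.
    let result: Option<String> = redis::cmd("SET")
        .arg(&key)
        .arg("locked")
        .arg("NX")
        .arg("EX")
        .arg(300)
        .query(conn)?;
    Ok(result.is_some()) // Some("OK") when acquired, None when already held
}
```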
- Extract LLM generation into `execute_llm_generation` and simplify keyword handling.
- Prepend system prompt and session context to LLM prompts in `BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field (see the sketch after this list).
- Add async `get_session_context` and stop injecting Redis context into conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the project.
- Use the existing DB pool instead of creating a separate custom connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing, improve streaming logic, reset streaming state on thinking end, and remove unused test functions.
- Use `tokio::spawn` to run Redis SET for `SET_CONTEXT` in a background task with detailed tracing.
- Move context retrieval from `BotOrchestrator` to `SessionManager`, inserting it as a system message in conversation history.
- Remove redundant Redis fetch logic from `BotOrchestrator`.
- Update `DEV.md` to install `valkey-cli`, reorder cargo tools, and adjust apt commands.
- Add a `SET_CONTEXT "azul bolinha"` example to the announcements template.
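
A small sketch of the WebSocket payload parsing mentioned above, using `serde_json`; the message shape beyond the `content` field and the fallback behavior are assumptions:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct IncomingWsMessage {
    content: String,
    // Other fields (message type, session id, etc.) are ignored here.
}

/// Parse a raw WebSocket text frame and return its `content` field,
/// falling back to the raw text when the frame is not valid JSON
/// (the real handler may reject such frames instead).
fn extract_content(raw: &str) -> String {
    match serde_json::from_str::<IncomingWsMessage>(raw) {
        Ok(msg) => msg.content,
        Err(_) => raw.to_string(),
    }
}
```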
- Introduce event-driven streaming with thinking_start, thinking_end, and warn events; skip sending analysis content to clients
- Add /api/warn endpoint to dispatch warnings for sessions and channels; web UI displays alerts
- Emit session_start/session_end events over WebSocket and instrument logging throughout orchestration
- Update web client: show thinking indicator and warning banners; switch LiveKit client URL to CDN
- Extend BotOrchestrator with send_event and send_warning, expand session/tool workflow
- Improve REST endpoints for sessions/history with better logging and error handling
- Update docs and prompts: DEV.md usage note; adjust dev build_prompt script
- Migrate core services to store Arc<AppState> and use locks
- Centralize state in AppState with Arc-wrapped managers
- Update handlers to pass Arc<AppState> via web::Data
- Add Default for AppState and initialize components in main
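
A condensed sketch of the centralized-state pattern these bullets describe, with placeholder manager types; the real `AppState` will carry more fields:

```rust
use std::sync::Arc;
use actix_web::{web, HttpResponse, Responder};
use tokio::sync::Mutex;

#[derive(Default)]
struct SessionManager; // placeholder
#[derive(Default)]
struct ConfigManager;  // placeholder

// Central application state: managers are Arc-wrapped and lock-protected so
// concurrent handlers can share and mutate them safely.
#[derive(Default)]
struct AppState {
    sessions: Arc<Mutex<SessionManager>>,
    config: Arc<Mutex<ConfigManager>>,
}

// Handlers receive the shared state via web::Data, as described above.
async fn history(state: web::Data<Arc<AppState>>) -> impl Responder {
    let _sessions = state.sessions.lock().await;
    HttpResponse::Ok().finish()
}

// At startup (in main), wrap the state once and register it on the App:
// let state = Arc::new(AppState::default());
// App::new()
//     .app_data(web::Data::new(state.clone()))
//     .route("/api/history", web::get().to(history));
```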
- Update debug.json program path from gbserver to botserver
- Load index.html from web/index.html instead of templates/static
- Initialize OpenAIClient with an empty key and local endpoint
- Remove the old static/index.html file