Add `context_length` and `context_max_length` fields to the `BotResponse` struct across the codebase.
Initialize these fields with default values (0) for existing response constructions and populate them using `ConfigManager` to retrieve the configured maximum context size for each bot.
Import `ConfigManager` in `bot/mod.rs` to access configuration.
These changes enable tracking of the current and maximum context sizes in bot responses, supporting future features that rely on context management.
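A minimal sketch of how the new fields could be wired up. The `max_context_length` accessor, the field types, and the stub `ConfigManager` are assumptions for illustration, not the project's actual API:

```rust
// Minimal sketch; the real ConfigManager in the codebase may expose a different accessor.
struct ConfigManager;

impl ConfigManager {
    // Hypothetical lookup of the configured maximum context size for a bot.
    fn max_context_length(&self, _bot_id: &str) -> Option<usize> {
        Some(8192)
    }
}

#[derive(Default)]
struct BotResponse {
    text: String,
    context_length: usize,     // current context size, defaults to 0
    context_max_length: usize, // configured maximum, defaults to 0
}

fn build_response(text: String, used: usize, cfg: &ConfigManager, bot_id: &str) -> BotResponse {
    BotResponse {
        text,
        context_length: used,
        // Existing call sites keep the 0 default; configured bots get their real limit here.
        context_max_length: cfg.max_context_length(bot_id).unwrap_or(0),
    }
}
```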
Removed the redundant "I heard: " prefix from the message content in the execute_talk function. The message is now stored as-is without additional formatting, making it more straightforward and consistent with other message handling.
Changed async Redis operations to synchronous in add_suggestion_keyword function. Removed unnecessary async/await and tokio::spawn since the operations are now blocking. This simplifies the code while maintaining the same functionality of storing suggestions and context state in Redis. Error handling remains robust with proper early returns.
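A hedged sketch of the synchronous pattern using the `redis` crate's blocking API; the key names and stored values are illustrative, not the project's actual schema:

```rust
use redis::Commands;

fn add_suggestion(cache_url: &str, session_id: &str, suggestion: &str) -> redis::RedisResult<()> {
    let client = redis::Client::open(cache_url)?;
    let mut con = client.get_connection()?;

    // Blocking calls replace the previous async/await + tokio::spawn combination.
    let _: i64 = con.rpush(format!("suggestions:{session_id}"), suggestion)?;
    let _: () = con.set(format!("context_state:{session_id}"), "pending")?;
    Ok(())
}
```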
- Remove comments from launch.json configuration
- Simplify Redis key format in set_current_context_keyword by removing context_name
- Remove unused /api/start endpoint and related session start logic
- Change log level from info to trace for config sync messages
- Add trace logging to drive_monitor module
The changes focus on code cleanup and simplification, particularly around session handling and logging. The Redis key format was simplified as the context_name was deemed unnecessary for uniqueness.
Updated references from `redis_client`, `s3_client`, and `custom_conn` to unified names `cache`, `drive`, and `conn` for consistency across modules. Adjusted `add_suggestion_keyword` to use clearer parameter naming and enhanced custom syntax registration for better readability and maintainability.
- Split AIConfig into separate LLMConfig and embedding config structs
- Update create_site.rs to use config.llm instead of config.ai
- Improve config loading comments in bootstrap manager
- Add new LLM-related environment variables with defaults
- Maintain backward compatibility with existing config loading
- Clean up unused AIConfig struct and related code
The change better organizes the AI-related configuration by separating LLM and embedding configurations, making the code more maintainable and flexible for future AI service integrations.
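A hedged sketch of the resulting shape. The struct and field names, the environment variable names, and their defaults are illustrative assumptions; only the split itself (separate LLM and embedding configs, `config.llm` at call sites) comes from the change:

```rust
use std::env;

#[derive(Clone, Debug)]
pub struct LlmConfig {
    pub url: String,
    pub model: String,
    pub timeout_secs: u64,
}

#[derive(Clone, Debug)]
pub struct EmbeddingConfig {
    pub url: String,
    pub model: String,
}

#[derive(Clone, Debug)]
pub struct AppConfig {
    pub llm: LlmConfig,            // call sites such as create_site.rs now read config.llm
    pub embedding: EmbeddingConfig,
}

impl LlmConfig {
    // Hypothetical loader mirroring "LLM-related environment variables with defaults".
    pub fn from_env() -> Self {
        Self {
            url: env::var("LLM_URL").unwrap_or_else(|_| "http://localhost:8080".into()),
            model: env::var("LLM_MODEL").unwrap_or_else(|_| "default".into()),
            timeout_secs: env::var("LLM_TIMEOUT_SECS")
                .ok()
                .and_then(|v| v.parse().ok())
                .unwrap_or(180),
        }
    }
}
```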
- Added new ADD_SUGGESTION keyword handler to support sending suggestions in responses
- Removed unused env import in hear_talk module
- Simplified bot_id assignment to use static string
- Added suggestions field to BotResponse struct
- Improved SET_CONTEXT keyword to take both name and value parameters
- Fixed whitespace in auth handler
- Enhanced error handling for suggestion sending
The changes improve the suggestion system functionality while cleaning up unused code and standardizing response handling.
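A sketch of what the new `suggestions` field and the ADD_SUGGESTION handler might look like; the field layout and helper name are stand-ins, not the actual implementation:

```rust
// Sketch only: field and helper names are illustrative.
#[derive(Default)]
struct BotResponse {
    text: String,
    suggestions: Vec<String>, // new field carrying ADD_SUGGESTION entries
}

fn handle_add_suggestion(response: &mut BotResponse, suggestion: &str) {
    // Each ADD_SUGGESTION keyword call appends one entry to be sent with the response.
    response.suggestions.push(suggestion.to_string());
}
```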
Changed the Redis RPUSH command to return the new list length (`Result<i64, RedisError>`) and added a debug log that includes this length. Updated the HSET command to capture its result (`Result<i64, RedisError>`), logging the number of fields added on success and handling errors explicitly. Removed unnecessary `unwrap_or_else` and added comments to clarify behavior, improving observability and error handling for suggestion storage.
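A hedged sketch of that logging pattern with the `redis` crate; key names and field names are illustrative:

```rust
use redis::Commands;
use tracing::{debug, error};

fn store_suggestion(con: &mut redis::Connection, session_id: &str, suggestion: &str) {
    // RPUSH returns the new length of the list, which is logged for observability.
    match con.rpush::<_, _, i64>(format!("suggestions:{session_id}"), suggestion) {
        Ok(len) => debug!("suggestion stored, list length is now {len}"),
        Err(e) => {
            error!("failed to RPUSH suggestion: {e}");
            return; // early return instead of unwrap_or_else
        }
    }

    // HSET returns the number of fields that were newly added (0 if the field was overwritten).
    match con.hset::<_, _, _, i64>(format!("session:{session_id}"), "has_suggestions", "true") {
        Ok(added) => debug!("HSET added {added} new field(s)"),
        Err(e) => error!("failed to HSET session flag: {e}"),
    }
}
```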
- Added AWS SDK S3 dependencies including aws-config, aws-sdk-s3, and related crates
- Removed opendal dependency and replaced with AWS SDK S3 client
- Implemented new get_file_content helper function using AWS SDK
- Updated MinIOHandler to use AWS SDK client instead of opendal Operator
- Modified file change detection to work with AWS SDK's S3 client
The change was made to standardize on AWS's official SDK for S3 operations, which provides better maintenance and feature support compared to the opendal crate. This also aligns with AWS best practices for interacting with S3 services.
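A hedged sketch of a `get_file_content`-style helper with the AWS SDK; bucket and key handling, error shaping, and the MinIO endpoint override are assumptions rather than the project's exact code:

```rust
use aws_config::BehaviorVersion;
use aws_sdk_s3::Client;

async fn make_client() -> Client {
    // For MinIO, the endpoint URL and path-style addressing are typically overridden as well.
    let config = aws_config::defaults(BehaviorVersion::latest()).load().await;
    Client::new(&config)
}

async fn get_file_content(
    client: &Client,
    bucket: &str,
    key: &str,
) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    let resp = client.get_object().bucket(bucket).key(key).send().await?;
    // Collect the streaming body into memory; large objects may warrant streaming instead.
    let data = resp.body.collect().await?;
    Ok(data.into_bytes().to_vec())
}
```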
Added new `add_suggestion` module to support suggestion handling logic.
Updated `index.html` to include a dynamic suggestions container that fetches and displays context suggestions.
This improves user experience by enabling quick context changes through interactive suggestion buttons.
Eliminates the `s3_bucket` field from `AppConfig` and deletes related initialization code in `main.rs`. This simplifies configuration management since S3 bucket handling is no longer required or used in the application.
- Derive bot_id from BOT_GUID env var
- Guard concurrent runs with a Redis lock (see the sketch after this list)
- Read CACHE_URL for Redis connection
- Extend bot memory keyword to accept comma as separator
- Increase LLM timeouts to 180s (local and legacy)
- Update templates to use bot memory (GET_BOT_MEMORY/SET_BOT_MEMORY)
- Fix start script path to announcements.gbai
- Rename script_name to param in automation flow and DB schema
- Add BotMemory model and bot_memories table
- Remove script_name field from automation
- Enable sqlite support via rusqlite and related crates (optional)
- Update prompts and queries to use param instead of script_name
- Remove deprecated announcements .gbai templates and align add-req.sh
- Refactor main to initialize automation service and simplify startup
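A sketch of the single-instance guard, assuming the `redis` crate; the lock key, TTL, and lock value are illustrative:

```rust
fn acquire_run_lock(cache_url: &str, bot_id: &str) -> redis::RedisResult<bool> {
    let client = redis::Client::open(cache_url)?; // cache_url read from CACHE_URL
    let mut con = client.get_connection()?;

    // SET key value NX EX 60 succeeds only if no other instance already holds the lock.
    let acquired: bool = redis::cmd("SET")
        .arg(format!("bot_lock:{bot_id}"))
        .arg(std::process::id())
        .arg("NX")
        .arg("EX")
        .arg(60)
        .query(&mut con)?;
    Ok(acquired)
}
```

In this sketch `bot_id` would come from the BOT_GUID environment variable, and a second instance that fails to acquire the lock would exit instead of starting.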
Remove unnecessary async spawn in TALK handling and use `try_send` on
the WebSocket channel. Acquire `response_channels` with `try_lock` and
spawn an async task only when falling back to the web adapter. Clean up
debug logs and add missing `env` import. Also delete an extra blank line
in the announcement start script.
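A hedged sketch of the non-blocking delivery path; the channel map, message type, and the `send_via_web_adapter` fallback are simplified stand-ins for the real types:

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::{mpsc, Mutex};

// Hypothetical fallback; the real web adapter call lives elsewhere in the codebase.
async fn send_via_web_adapter(_session_id: String, _message: String) {}

async fn deliver(
    response_channels: Arc<Mutex<HashMap<String, mpsc::Sender<String>>>>,
    session_id: String,
    message: String,
) {
    // try_lock avoids blocking the TALK path when the map is contended.
    if let Ok(channels) = response_channels.try_lock() {
        if let Some(tx) = channels.get(&session_id) {
            // try_send never awaits; a full channel is treated as a delivery failure.
            let _ = tx.try_send(message);
            return;
        }
    }
    // Spawn an async task only when falling back to the web adapter.
    tokio::spawn(async move {
        send_via_web_adapter(session_id, message).await;
    });
}
```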
- Execute GET requests in a dedicated thread with its own Tokio runtime,
add timeout handling and clearer error messages (sketched after this list).
- Tighten `is_safe_path` checks and simplify HTTP/S3 logic.
- Change `llm_keyword` to accept `Arc<AppState>`, add prompt builder,
run LLM generation in an isolated thread with timeout.
- Update keyword registration call in `basic/mod.rs`.
- Convert template script to use `let` declarations and return a
boolean.
- Introduce connection‑status indicator in the web UI with styles,
automatic reconnection attempts, and proper WS/WSS handling for voice.
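A sketch of the isolated-runtime pattern for GET requests; the URL handling, 30-second timeout, and error messages are illustrative assumptions:

```rust
use std::time::Duration;

fn blocking_get(url: String) -> Result<String, String> {
    std::thread::spawn(move || {
        // A dedicated current-thread runtime keeps the request off the main executor.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .map_err(|e| format!("runtime error: {e}"))?;
        rt.block_on(async {
            let fut = reqwest::get(url.as_str());
            match tokio::time::timeout(Duration::from_secs(30), fut).await {
                Ok(Ok(resp)) => resp.text().await.map_err(|e| format!("body error: {e}")),
                Ok(Err(e)) => Err(format!("request failed: {e}")),
                Err(_) => Err("GET timed out after 30s".to_string()),
            }
        })
    })
    .join()
    .map_err(|_| "worker thread panicked".to_string())?
}
```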
- Replace async task spawning with `block_in_place` to simplify GET
handling (see the sketch after this list)
- Add detailed safety checks for file paths and organization prefixes
- Introduce timeout and keep‑alive settings for HTTP client
- Improve S3 bucket access with existence check, timeouts, and richer
logging
- Switch tracing logs to debug and add warning logs where appropriate
- Update announcement template to retrieve a PDF, generate a resume via
LLM, and set context for subsequent queries.
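A hedged sketch combining `block_in_place` with a timeout- and keep-alive-configured `reqwest` client; the durations and the helper names are illustrative:

```rust
use std::time::Duration;

fn build_http_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .timeout(Duration::from_secs(30))       // overall request timeout
        .tcp_keepalive(Duration::from_secs(60)) // keep idle connections alive
        .build()
}

fn fetch_blocking(client: &reqwest::Client, url: &str) -> Result<String, reqwest::Error> {
    // Called from synchronous keyword-handler code running inside the multi-threaded
    // Tokio runtime; block_in_place avoids spawning a dedicated task or thread.
    tokio::task::block_in_place(|| {
        tokio::runtime::Handle::current().block_on(async {
            client.get(url).send().await?.text().await
        })
    })
}
```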
- Extract LLM generation into `execute_llm_generation` and simplify
keyword handling.
- Prepend system prompt and session context to LLM prompts in
`BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field (sketched after this list).
- Add async `get_session_context` and stop injecting Redis context into
conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
project.
- Use the existing DB pool instead of creating a separate custom
connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing,
improve streaming logic, reset streaming state on thinking end, and
remove unused test functions.
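A sketch of the incoming-message parsing with `serde_json`; the exact payload shape beyond the `content` field is an assumption:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct IncomingMessage {
    content: String,
}

fn extract_content(raw: &str) -> Option<String> {
    // Messages arrive as JSON such as {"content": "hello"}; anything else is ignored.
    serde_json::from_str::<IncomingMessage>(raw)
        .ok()
        .map(|m| m.content)
}
```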
- Use `tokio::spawn` to run Redis SET for `SET_CONTEXT` in a background
task with detailed tracing (sketched after this list).
- Move context retrieval from `BotOrchestrator` to `SessionManager`,
inserting it as a system message in conversation history.
- Remove redundant Redis fetch logic from `BotOrchestrator`.
- Update `DEV.md` to install `valkey-cli`, reorder cargo tools, and
adjust apt commands.
- Add a `SET_CONTEXT "azul bolinha"` example to the announcements
template.
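A hedged sketch of the background SET, assuming the `redis` crate's async API (tokio-comp feature); the key naming and log messages are illustrative:

```rust
use redis::AsyncCommands;
use tracing::{debug, error};

fn set_context_in_background(client: redis::Client, session_id: String, value: String) {
    tokio::spawn(async move {
        debug!(%session_id, "writing context to Redis");
        match client.get_multiplexed_async_connection().await {
            Ok(mut con) => {
                if let Err(e) = con.set::<_, _, ()>(format!("context:{session_id}"), value).await {
                    error!("failed to SET context: {e}");
                } else {
                    debug!("context stored");
                }
            }
            Err(e) => error!("failed to open Redis connection: {e}"),
        }
    });
}
```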
- Migrate core services to store Arc<AppState> and use locks
- Centralize state in AppState with Arc-wrapped managers
- Update handlers to pass Arc<AppState> via web::Data (see the sketch after this list)
- Add Default for AppState and initialize components in main
- Update debug.json program path from gbserver to botserver
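An illustrative sketch of the shared-state wiring with actix-web; the manager type, route, and lock choice are stand-ins for the real components:

```rust
use std::sync::Arc;

use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use tokio::sync::Mutex;

#[derive(Default)]
struct SessionManager;

#[derive(Default)]
struct AppState {
    sessions: Arc<Mutex<SessionManager>>, // Arc-wrapped managers live behind async locks
}

async fn status(state: web::Data<Arc<AppState>>) -> impl Responder {
    let _sessions = state.sessions.lock().await;
    HttpResponse::Ok().body("ok")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let state = Arc::new(AppState::default());
    HttpServer::new(move || {
        App::new()
            // web::Data clones the Arc for each worker, so every handler sees the same state.
            .app_data(web::Data::new(state.clone()))
            .route("/status", web::get().to(status))
    })
    .bind(("0.0.0.0", 8080))?
    .run()
    .await
}
```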