Added new configuration options for theme colors (green, yellow) and a custom logo URL to enhance branding and visual customization in announcement templates.
- Simplified build_llm_prompt by removing redundant formatting
- Added info logging for LLM model and processed content
- Updated README with development philosophy note
- Adjusted announcement schedule timing from 55 to 59 minutes past the hour
- Update prompt formatting in BotOrchestrator to use clearer labels (SYSTEM/CONTEXT) with emphasis markers (sketched below)
- Remove unused token_ratio field from SystemMetrics struct
- Increase default context size (2048->4096) and prediction length (512->1024) in config
- Clean up metrics calculation by removing redundant token ratio computation
The changes improve readability of system prompts and simplify metrics collection while increasing default model capacity.
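As a rough illustration, the relabeled prompt assembly might look like the sketch below; only the SYSTEM/CONTEXT labels come from the change itself, while the `###` emphasis markers and exact layout are assumptions.

```rust
/// Minimal sketch of a prompt builder using the SYSTEM/CONTEXT labels
/// described above. The `###` markers and layout are illustrative.
fn build_llm_prompt(system: &str, context: &str, user: &str) -> String {
    format!(
        "### SYSTEM ###\n{system}\n\n### CONTEXT ###\n{context}\n\nUser: {user}"
    )
}
```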
- Simplified auth module by removing unused imports and code
- Cleaned up shared models by removing unused structs (Organization, User, Bot, etc.)
- Updated add-req.sh to comment out unused directories
- Modified LLM fallback strategy in README with additional notes about model behaviors
The changes focus on removing unused code and improving documentation while maintaining existing functionality. The auth module was significantly reduced by removing redundant code, and similar cleanup was applied to shared models. The build script was adjusted to reflect currently used directories.
Added interaction-count tracking for sessions, backed by Redis with an in-memory fallback. Implemented conversation-history replacement to compact and update message history. The changes include:
- New AtomicUsize counter in SessionManager for interaction tracking
- increment_and_get_interaction_count method with Redis support
- replace_conversation_history to update and compact message history
- Maintains existing functionality while adding new features
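A minimal sketch of the counter described above; the Redis key format and struct fields are illustrative rather than the project's actual names.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

use redis::AsyncCommands;

/// Try Redis INCR first; fall back to a process-local AtomicUsize
/// when Redis is unavailable.
pub struct SessionManager {
    redis: Option<redis::Client>,
    local_count: AtomicUsize,
}

impl SessionManager {
    pub async fn increment_and_get_interaction_count(&self, session_id: &str) -> usize {
        if let Some(client) = &self.redis {
            if let Ok(mut conn) = client.get_multiplexed_async_connection().await {
                let key = format!("session:{session_id}:interactions");
                if let Ok(n) = conn.incr::<_, _, usize>(key, 1).await {
                    return n;
                }
            }
        }
        // In-memory fallback: fetch_add returns the previous value.
        self.local_count.fetch_add(1, Ordering::SeqCst) + 1
    }
}
```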
- Reformatted `update_session_context` call for better readability.
- Moved context‑change (type 4) handling to a later stage in the processing pipeline and removed early return, ensuring proper flow.
- Adjusted deduplication logic formatting and clarified condition.
- Restored saving of user messages after context handling, preserving message history.
- Added detailed logging of the LLM prompt for debugging.
- Simplified JSON extraction for `message_type` and applied minor whitespace clean‑ups.
- Overall code refactor improves maintainability and corrects context‑change handling behavior.
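The simplified `message_type` extraction might reduce to something like this sketch; the default value and error handling are assumptions.

```rust
use serde_json::Value;

/// Parse the payload and read `message_type` directly,
/// defaulting to 0 when the field is absent or malformed.
fn extract_message_type(payload: &str) -> i64 {
    serde_json::from_str::<Value>(payload)
        .ok()
        .and_then(|v| v.get("message_type").and_then(Value::as_i64))
        .unwrap_or(0)
}
```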
- Deduplicate consecutive messages with same role in conversation history (see the sketch after this list)
- Add n_predict configuration option for LLM server
- Prevent duplicate message storage in session manager
- Update announcement schedule timing from 37 to 55 minutes past the hour
- Add default n_predict value in default bot config
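The consecutive-role deduplication could be as small as a `dedup_by` pass; whether the real code drops or merges the duplicate message is not stated, so this sketch simply keeps the first occurrence.

```rust
struct ChatMessage {
    role: String,
    content: String,
}

/// Drop a message when it repeats the previous entry's role,
/// keeping the first occurrence of each run.
fn dedup_consecutive_roles(history: &mut Vec<ChatMessage>) {
    history.dedup_by(|current, previous| current.role == previous.role);
}
```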
- Refactor cron matching to use individual variables for each time component with additional debug logging
- Replace SETEX with atomic SET NX EX for job locking in Redis (sketched below)
- Add better error handling and logging for job execution tracking
- Skip execution if Redis is unavailable or job is already held
- Add verbose flag to LLM server startup command for better logging
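A sketch of the atomic lock, using a raw SET with NX/EX so creation and expiry happen in one round trip; the key format and TTL handling are illustrative.

```rust
use redis::aio::MultiplexedConnection;

/// SET key value NX EX ttl: acquire the lock only if no other
/// worker holds it, with expiry set atomically.
async fn try_acquire_job_lock(
    conn: &mut MultiplexedConnection,
    job_id: &str,
    ttl_secs: u64,
) -> redis::RedisResult<bool> {
    let reply: Option<String> = redis::cmd("SET")
        .arg(format!("job:{job_id}:lock"))
        .arg("held")
        .arg("NX")
        .arg("EX")
        .arg(ttl_secs)
        .query_async(conn)
        .await?;
    // "OK" means we own the lock; nil means it is already held.
    Ok(reply.is_some())
}
```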
- Added duplicate filtering for suggestions in both backend (Rust) and frontend (JavaScript)
- Changed announcement summary schedule from every 15 minutes to hourly at :37
- Simplified LLM prompt for document summarization
- Updated LLM server configuration with reduced GPU layers and increased context size
- Removed memory mapping and lock settings for LLM server
- Improved HTML formatting and added missing newline at EOF
Added SET_SCHEDULE directive to run the update-summary script every 15 minutes. This ensures the announcements summary stays current by regularly fetching and processing the latest news document.
Enhances BotOrchestrator by integrating optional semantic caching via LangCache for faster LLM responses. Also refactors message saving to occur before and after direct mode handling, and simplifies context change logic for better clarity and flow.
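Since LangCache's client API isn't shown in this log, the sketch below only captures the shape of the optional lookup, with a hypothetical `SemanticCache` trait standing in for the actual client.

```rust
/// Hypothetical stand-in for the LangCache client.
trait SemanticCache {
    fn lookup(&self, prompt: &str) -> Option<String>;
    fn store(&self, prompt: &str, response: &str);
}

/// Check the cache for a semantically similar prompt before calling
/// the LLM, and store fresh responses afterwards.
fn answer(
    cache: Option<&dyn SemanticCache>,
    prompt: &str,
    call_llm: impl Fn(&str) -> String,
) -> String {
    if let Some(cache) = cache {
        if let Some(hit) = cache.lookup(prompt) {
            return hit; // semantic hit: skip the LLM entirely
        }
    }
    let response = call_llm(prompt);
    if let Some(cache) = cache {
        cache.store(prompt, &response);
    }
    response
}
```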
- Added `trace!` logging in `bot_memory.rs` to record retrieved memory values for easier debugging.
- Refactored `BotOrchestrator` in `bot/mod.rs`:
- Removed duplicate session save block and consolidated message persistence.
- Replaced low‑level LLM streaming with a structured `UserMessage` and `stream_response` workflow, improving error handling and readability.
- Updated configuration loading in `config/mod.rs`:
- Imported `get_default_bot` and enhanced `get_config` to fall back to the default bot configuration when the primary query fails.
- Established a fresh DB connection for the fallback path to avoid borrowing issues.
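A rough shape of that fallback, assuming an illustrative `bot_configs` table; `establish_pg_connection` and `get_default_bot` are the helpers named elsewhere in this log, with assumed signatures.

```rust
use diesel::prelude::*;

diesel::table! {
    bot_configs (id) {
        id -> Int4,
        bot_id -> Int4,
        name -> Varchar,
        value -> Text,
    }
}

fn get_config(conn: &mut PgConnection, bot: i32, key: &str) -> QueryResult<String> {
    use self::bot_configs::dsl::*;

    // Primary lookup: this bot's own configuration row.
    let primary = bot_configs
        .filter(bot_id.eq(bot))
        .filter(name.eq(key))
        .select(value)
        .first::<String>(conn);

    primary.or_else(|_| {
        // Fallback: a fresh connection (the first borrow has ended here)
        // and the default bot's configuration instead.
        let mut fallback = establish_pg_connection();
        let default_id = get_default_bot(&mut fallback);
        bot_configs
            .filter(bot_id.eq(default_id))
            .filter(name.eq(key))
            .select(value)
            .first::<String>(&mut fallback)
    })
}
```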
- Remove `scripts_dir` from `AutomationService` and its constructor, as it was unused.
- Drop LLM and embedding server readiness checks; the service now only schedules periodic tasks.
- Increase the health‑check interval from 5 seconds to 15 seconds for reduced load.
- Streamline the LLM keyword prompt to a concise `"User: {}"` format, removing verbose boilerplate.
- Remove unnecessary logging and LLM cache handling code from the bot orchestrator, cleaning up unused environment variable checks and cache queries.
- Introduced `bot_id` column in `system_automations` table (migration 6.0.0.sql) and updated the Diesel schema/model to include it.
- Adjusted migrations to remove the hard‑coded “Update Summary” automation and only create an index on the `name` column.
- Extended the `SET_SCHEDULE` keyword:
- Added a second string argument for the script name.
- Passed the invoking user's `bot_id` to the database layer.
- Updated function signature to accept a full `UserSession` instead of discarding it.
- Modified `execute_set_schedule` to store `bot_id`, script name, and activation flag; added conflict handling on `(bot_id, param)` to update schedule and reset trigger state.
- Updated imports and logging to reflect new parameters.
These changes enable per‑bot automation management, allow specifying the script to run, and improve idempotent schedule updates.
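The conflict handling likely amounts to a Diesel upsert along these lines; columns other than `bot_id` and `param` are illustrative guesses.

```rust
use diesel::prelude::*;
use diesel::upsert::excluded;

diesel::table! {
    system_automations (id) {
        id -> Int4,
        bot_id -> Int4,
        param -> Varchar,
        schedule -> Varchar,
        is_active -> Bool,
        triggered -> Bool,
    }
}

/// Insert the automation row; on a (bot_id, param) conflict, update the
/// schedule and reset the trigger state so the new schedule fires cleanly.
fn execute_set_schedule(
    conn: &mut PgConnection,
    bot: i32,
    script: &str,
    cron: &str,
    active: bool,
) -> QueryResult<usize> {
    use self::system_automations::dsl::*;

    diesel::insert_into(system_automations)
        .values((
            bot_id.eq(bot),
            param.eq(script),
            schedule.eq(cron),
            is_active.eq(active),
            triggered.eq(false),
        ))
        .on_conflict((bot_id, param))
        .do_update()
        .set((schedule.eq(excluded(schedule)), triggered.eq(false)))
        .execute(conn)
}
```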
Added new CLEAR_SUGGESTIONS keyword command that clears user suggestions from Redis cache. Implemented the command in the announcements dialog start script to prevent duplicate suggestions. The command handles Redis connection errors gracefully and logs appropriate debug information.
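A best-effort sketch of the keyword's effect, with an illustrative key format: Redis failures are logged rather than propagated.

```rust
use redis::Commands;
use tracing::debug;

/// Delete the cached suggestions for a user, logging instead of
/// returning errors when Redis is unreachable.
fn clear_suggestions(client: &redis::Client, user_id: &str) {
    match client.get_connection() {
        Ok(mut conn) => {
            let key = format!("suggestions:{user_id}");
            match conn.del::<_, usize>(&key) {
                Ok(n) => debug!("CLEAR_SUGGESTIONS removed {n} key(s) for {user_id}"),
                Err(e) => debug!("CLEAR_SUGGESTIONS: DEL failed: {e}"),
            }
        }
        Err(e) => debug!("CLEAR_SUGGESTIONS: no Redis connection: {e}"),
    }
}
```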
Updated references from `redis_client`, `s3_client`, and `custom_conn` to unified names `cache`, `drive`, and `conn` for consistency across modules. Adjusted `add_suggestion_keyword` to use clearer parameter naming and enhanced custom syntax registration for better readability and maintainability.
Add logic to detect changes in LLM-related configuration properties when syncing gbot configs. If any `llm-`-prefixed keys differ from their stored values, the LLaMA servers are restarted automatically to apply the updates. This keeps the model servers in sync with configuration changes while avoiding unnecessary restarts.
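The detection itself can be a simple prefix-filtered comparison; the map types here are illustrative.

```rust
use std::collections::HashMap;

/// Compare incoming `llm-`-prefixed keys against stored values and
/// report whether a server restart is needed.
fn llm_config_changed(
    stored: &HashMap<String, String>,
    incoming: &HashMap<String, String>,
) -> bool {
    incoming
        .iter()
        .filter(|(k, _)| k.starts_with("llm-"))
        .any(|(k, v)| stored.get(k) != Some(v))
}
```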
Updated `ConfigManager::get_config` to use Diesel query builder instead of raw SQL for improved safety and maintainability. Adjusted parameter naming and integrated schema references. Also refactored `ensure_llama_servers_running` to fetch configuration from the database using `AppState` and `ConfigManager`. Removed unused imports in bootstrap module.
- Split AIConfig into separate LLMConfig and embedding config structs
- Update create_site.rs to use config.llm instead of config.ai
- Improve config loading comments in bootstrap manager
- Add new LLM-related environment variables with defaults
- Maintain backward compatibility with existing config loading
- Clean up unused AIConfig struct and related code
The change better organizes the AI-related configuration by separating LLM and embedding configurations, making the code more maintainable and flexible for future AI service integrations.
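An illustrative shape of the split; field names and environment variables are assumptions, with the 4096/1024 defaults taken from the earlier config change.

```rust
/// LLM settings, now separate from embedding settings.
#[derive(Debug, Clone)]
pub struct LLMConfig {
    pub url: String,
    pub n_ctx: u32,
    pub n_predict: u32,
}

#[derive(Debug, Clone)]
pub struct EmbeddingConfig {
    pub url: String,
}

impl LLMConfig {
    /// Read each setting from the environment, falling back to defaults.
    pub fn from_env() -> Self {
        let var = |k: &str, d: &str| std::env::var(k).unwrap_or_else(|_| d.to_string());
        Self {
            url: var("LLM_URL", "http://localhost:8080"),
            n_ctx: var("LLM_N_CTX", "4096").parse().unwrap_or(4096),
            n_predict: var("LLM_N_PREDICT", "1024").parse().unwrap_or(1024),
        }
    }
}
```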
- Replace direct database connection establishment with shared `establish_pg_connection` utility (sketched after this list)
- Add "llm" to required components list in bootstrap manager
- Lower default RUST_LOG level from debug to info in VSCode config
- Clean up imports and connection error messages
- Remove hardcoded database URL strings in favor of centralized connection handling
The changes improve code maintainability by centralizing database connection logic and adding support for the new LLM component in the bootstrap process.
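The shared utility is presumably the canonical Diesel helper; panic-on-failure here is an assumption, and the real helper may return a `Result` instead.

```rust
use diesel::pg::PgConnection;
use diesel::Connection;

/// One place reads DATABASE_URL and opens the connection,
/// so call sites stop hard-coding URLs.
pub fn establish_pg_connection() -> PgConnection {
    let url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    PgConnection::establish(&url)
        .unwrap_or_else(|e| panic!("failed to connect to {url}: {e}"))
}
```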
Add logic to automatically upload an initial `config.csv` to the default S3 bucket during bootstrap. Enhance S3 operator creation by ensuring endpoint formatting, setting default bucket and region, and enabling path style. Improve template upload flow by validating bucket existence and creating it if missing. Also include debug logging for better traceability.
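A sketch of the operator setup using opendal's chained builder (method names shift between opendal releases); the endpoint, bucket, and region values are placeholders.

```rust
use opendal::{services::S3, Operator};

fn build_drive() -> opendal::Result<Operator> {
    // Ensure the endpoint carries an explicit scheme before handing it over.
    let endpoint = "http://localhost:9000";
    let builder = S3::default()
        .endpoint(endpoint)
        .region("us-east-1") // default region when none is configured
        .bucket("default"); // default bucket, created later if missing
    Ok(Operator::new(builder)?.finish())
}
```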
Eliminates the `s3_bucket` field from `AppConfig` and deletes related initialization code in `main.rs`. This simplifies configuration management since S3 bucket handling is no longer required or used in the application.
- Removed unused Tauri dependencies and replaced aws-sdk-s3 with opendal for S3 services.
- Cleaned up feature flags in Cargo.toml.
- Simplified the welcome message logic in start.bas and removed redundant comments.
- Introduced AWS SDK dependencies for S3 and CSV handling.
- Implemented logic to check and update the default bot configuration in S3 after component installation.
- Added a new configuration CSV template for bot settings.
- Refactored package manager to register cache component with updated download URL and binary name.
- Updated README and Cargo files to reflect new dependencies and configuration options.
- Derive bot_id from BOT_GUID env var
- Guard concurrent runs with Redis
- Read CACHE_URL for Redis connection
- Extend bot memory keyword to accept comma as separator
- Increase LLM timeouts to 180s (local and legacy)
- Update templates to use bot memory (GET_BOT_MEMORY/SET_BOT_MEMORY)
- Fix start script path to announcements.gbai
- Rename script_name to param in automation flow and DB schema
- Add BotMemory model and bot_memories table (see the sketch after this list)
- Remove script_name field from automation
- Enable sqlite support via rusqlite and related crates (optional)
- Update prompts and queries to use param instead of script_name
- Remove deprecated announcements .gbai templates and align add-req.sh
- Refactor main to initialize automation service and simplify startup
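An illustrative Diesel schema and model for the new table; real column names and types may differ.

```rust
use diesel::prelude::*;

diesel::table! {
    bot_memories (id) {
        id -> Int4,
        bot_id -> Int4,
        key -> Varchar,
        value -> Text,
    }
}

/// Per-bot key/value memory row backing GET_BOT_MEMORY/SET_BOT_MEMORY.
#[derive(Queryable, Insertable)]
#[diesel(table_name = bot_memories)]
pub struct BotMemory {
    pub id: i32,
    pub bot_id: i32,
    pub key: String,
    pub value: String,
}
```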
Remove unnecessary async spawn in TALK handling and use `try_send` on
the WebSocket channel. Acquire `response_channels` with `try_lock` and
spawn an async task only when falling back to the web adapter. Clean up
debug logs and add missing `env` import. Also delete an extra blank line
in the announcement start script.
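The fast-path/slow-path split might look like this sketch; the map type and the fallback body are simplified stand-ins.

```rust
use std::sync::Arc;
use tokio::sync::{mpsc, Mutex};

/// Try the lock and channel synchronously; only spawn a task
/// when falling back (e.g. to the web adapter).
fn deliver(
    channels: Arc<Mutex<std::collections::HashMap<String, mpsc::Sender<String>>>>,
    session_id: String,
    text: String,
) {
    if let Ok(map) = channels.try_lock() {
        if let Some(tx) = map.get(&session_id) {
            if tx.try_send(text.clone()).is_ok() {
                return; // fast path: no await, no spawn
            }
        }
    }
    // Slow path only: hand off to an async task.
    tokio::spawn(async move {
        // fallback delivery would go here
        let _ = (session_id, text);
    });
}
```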
- Strip content up to the “final<|message|>” token in OpenAI responses (see the sketch after this list).
- Replace the text‑based connection‑status indicator with a small
flashing circle.
- Simplify updateConnectionStatus to take only the status argument.
- Remove special handling of the initial assistant message and
streamline empty‑state removal.
- Clean up stray blank lines in the announcement template.
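The stripping step, sketched; passing the text through unchanged when the marker is absent is an assumption.

```rust
/// Keep only what follows the marker; return the input as-is
/// when the marker does not occur.
fn strip_channel_prefix(s: &str) -> &str {
    const MARKER: &str = "final<|message|>";
    match s.find(MARKER) {
        Some(i) => &s[i + MARKER.len()..],
        None => s,
    }
}
```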
- Execute GET requests in a dedicated thread with its own Tokio runtime, add timeout handling and clearer error messages (sketched after this list).
- Tighten `is_safe_path` checks and simplify HTTP/S3 logic.
- Change `llm_keyword` to accept `Arc<AppState>`, add prompt builder,
run LLM generation in an isolated thread with timeout.
- Update keyword registration call in `basic/mod.rs`.
- Convert template script to use `let` declarations and return a
boolean.
- Introduce connection‑status indicator in the web UI with styles,
automatic reconnection attempts, and proper WS/WSS handling for voice.
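A sketch of the isolated GET: a dedicated OS thread running its own current-thread runtime so the request cannot interfere with the main runtime, plus an explicit timeout (the 30 s value is illustrative).

```rust
use std::time::Duration;

fn get_in_dedicated_thread(url: String) -> Result<String, String> {
    std::thread::spawn(move || {
        // Fresh single-threaded runtime, independent of the main one.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .map_err(|e| format!("runtime: {e}"))?;
        rt.block_on(async {
            let resp = tokio::time::timeout(Duration::from_secs(30), reqwest::get(&url))
                .await
                .map_err(|_| "GET timed out".to_string())?
                .map_err(|e| format!("GET failed: {e}"))?;
            resp.text().await.map_err(|e| format!("body: {e}"))
        })
    })
    .join()
    .map_err(|_| "worker thread panicked".to_string())?
}
```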
- Replace async task spawning with `block_in_place` to simplify GET handling (see the sketch after this list)
- Add detailed safety checks for file paths and organization prefixes
- Introduce timeout and keep‑alive settings for HTTP client
- Improve S3 bucket access with existence check, timeouts, and richer
logging
- Switch tracing logs to debug and add warning logs where appropriate
- Update announcement template to retrieve a PDF, generate a resume via
LLM, and set context for subsequent queries.
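A sketch of the `block_in_place` approach; note it requires Tokio's multi-threaded runtime.

```rust
use tokio::runtime::Handle;

/// block_in_place tells the runtime this worker may block, and
/// Handle::block_on drives the request to completion on the spot
/// instead of spawning a task.
fn get_blocking(url: &str) -> reqwest::Result<String> {
    tokio::task::block_in_place(|| {
        Handle::current().block_on(async { reqwest::get(url).await?.text().await })
    })
}
```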
- Extract LLM generation into `execute_llm_generation` and simplify
keyword handling.
- Prepend system prompt and session context to LLM prompts in
`BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field.
- Add async `get_session_context` and stop injecting Redis context into
conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
project.
- Use the existing DB pool instead of creating a separate custom
connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing,
improve streaming logic, reset streaming state on thinking end, and
remove unused test functions.
- Use `tokio::spawn` to run Redis SET for `SET_CONTEXT` in a background task with detailed tracing (sketched after this list).
- Move context retrieval from `BotOrchestrator` to `SessionManager`,
inserting it as a system message in conversation history.
- Remove redundant Redis fetch logic from `BotOrchestrator`.
- Update `DEV.md` to install `valkey-cli`, reorder cargo tools, and
adjust apt commands.
- Add a `SET_CONTEXT "azul bolinha"` example to the announcements
template.
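The background write might look like this sketch; key naming and the exact trace fields are illustrative.

```rust
use redis::AsyncCommands;
use tracing::{debug, warn};

/// SET_CONTEXT returns immediately; the spawned task performs the
/// Redis SET and traces the outcome.
fn set_context_async(client: redis::Client, session_id: String, context: String) {
    tokio::spawn(async move {
        let key = format!("context:{session_id}");
        match client.get_multiplexed_async_connection().await {
            Ok(mut conn) => match conn.set::<_, _, ()>(&key, &context).await {
                Ok(()) => debug!(%key, "SET_CONTEXT stored"),
                Err(e) => warn!(%key, error = %e, "SET_CONTEXT failed"),
            },
            Err(e) => warn!(error = %e, "SET_CONTEXT: no Redis connection"),
        }
    });
}
```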