Commit graph

11 commits

SHA1 Message Date
2be85773ab Add trace logging to AutomationService and increase timeout values in LLM commands 2025-10-17 13:11:49 -03:00
09a9c8f3cd Enhance bot memory and Redis guards
- Derive bot_id from BOT_GUID env var
- Guard concurrent runs with Redis
- Read CACHE_URL for Redis connection
- Extend bot memory keyword to accept comma as separator
- Increase LLM timeouts to 180s (local and legacy)
- Update templates to use bot memory (GET_BOT_MEMORY/SET_BOT_MEMORY)
- Fix start script path to announcements.gbai
2025-10-16 14:22:28 -03:00
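The bot-id and memory-keyword changes above might be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: the function names, the fallback id, and the `';'` legacy separator are assumptions; only the `BOT_GUID` variable and the newly accepted comma separator come from the commit message.

```rust
use std::env;

/// Derive the bot id from the BOT_GUID environment variable
/// (the "unknown-bot" fallback is an assumption for this sketch).
fn bot_id_from_env() -> String {
    env::var("BOT_GUID").unwrap_or_else(|_| "unknown-bot".to_string())
}

/// Split bot-memory keyword arguments, accepting ',' as a separator
/// in addition to an assumed pre-existing ';' separator.
fn parse_memory_args(raw: &str) -> Vec<String> {
    raw.split(|c| c == ',' || c == ';')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}

fn main() {
    println!("bot id: {}", bot_id_from_env());
    println!("{:?}", parse_memory_args("name, city ; plan"));
}
```

With this shape, `SET_BOT_MEMORY name, city` and `SET_BOT_MEMORY name; city` would parse to the same argument list.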
bb9c733fd5 - GET keyword for buckets. 2025-10-15 12:45:15 -03:00
e77362e09a Refactor LLM flow, add prompts, fix UI streaming
- Extract LLM generation into `execute_llm_generation` and simplify
  keyword handling.
- Prepend system prompt and session context to LLM prompts in
  `BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field.
- Add async `get_session_context` and stop injecting Redis context into
  conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
  project.
- Use the existing DB pool instead of creating a separate custom
  connection.
- Update `start.bas` to call LLM and set a new context string.
- Refactor web client message handling: separate event processing,
  improve streaming logic, reset streaming state on thinking end, and
  remove unused test functions.
2025-10-15 01:14:37 -03:00
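The prompt assembly described in that commit (system prompt plus session context prepended to the user prompt in `BotOrchestrator`) might look roughly like this. The function name and the blank-line separator are assumptions for illustration, not the project's actual code:

```rust
// Hypothetical sketch: prepend the system prompt and the session context
// (as returned by an async get_session_context) to the user's prompt
// before sending it to the LLM endpoint, skipping any empty parts.
fn build_llm_prompt(system: &str, session_context: &str, user: &str) -> String {
    let mut parts = Vec::new();
    if !system.is_empty() {
        parts.push(system.trim());
    }
    if !session_context.is_empty() {
        parts.push(session_context.trim());
    }
    parts.push(user.trim());
    parts.join("\n\n")
}

fn main() {
    let prompt = build_llm_prompt("You are a helpful bot.", "User prefers PT-BR.", "Hello");
    println!("{prompt}");
}
```

Building the full prompt per request like this is what lets the commit stop injecting Redis context into the stored conversation history.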
a88a1613a1 - Fixing ngl param. 2025-10-14 17:02:11 -03:00
9e9db27bca - Fixing parameters for llama.cpp. 2025-10-14 17:00:00 -03:00
3f85f95af4 - Fine-tuning GPT-OSS 20B. 2025-10-14 16:57:50 -03:00
a16d9affe7 - main.rs is compiling again. 2025-10-11 20:02:14 -03:00
a1dd7b5826 - Remove all compilation errors. 2025-10-11 12:29:03 -03:00
2f77b68294 - Remove warnings and restore old code. 2025-10-07 07:16:03 -03:00
959f67aa83 - Fixing compilation errors. 2025-10-06 20:06:43 -03:00
Renamed from src/llm/llm_local.rs