Commit graph

8 commits

SHA1 Message Date
f626793d77 Refactor LLM flow, add prompts, fix UI streaming
- Extract LLM generation into `execute_llm_generation` and simplify
  keyword handling.
- Prepend the system prompt and session context to LLM prompts in
  `BotOrchestrator` (see the first sketch after this entry).
- Parse incoming WebSocket messages as JSON and use the `content`
  field (see the second sketch below).
- Add async `get_session_context` and stop injecting Redis context into
  conversation history.
- Change default LLM URL to `http://48.217.66.81:8080` throughout the
  project.
- Use the existing DB pool instead of creating a separate custom
  connection (see the pool sketch below).
- Update `start.bas` to call the LLM and set a new context string.
- Refactor web client message handling: separate event processing,
  improve streaming logic, reset streaming state on thinking end, and
  remove unused test functions.
2025-10-15 01:14:37 -03:00
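
The prompt-assembly bullets above describe prepending the system prompt and session context ahead of the user message. A minimal sketch of that flow, assuming plain string concatenation; `build_prompt`, the prompt layout, and the sample strings are illustrative assumptions, not the actual `BotOrchestrator` code:

```rust
/// Illustrative sketch only: prepend the system prompt and session
/// context to the user message, as the commit describes. All names
/// except `get_session_context` (mentioned in the commit) are assumed.
fn build_prompt(system_prompt: &str, session_context: &str, user_message: &str) -> String {
    format!("{system_prompt}\n\n{session_context}\n\nUser: {user_message}")
}

fn main() {
    // `session_context` stands in for whatever the async
    // `get_session_context(session_id).await` would return.
    let prompt = build_prompt(
        "You are a helpful assistant.",
        "Session context: user prefers short answers.",
        "Summarize the last deployment.",
    );
    println!("{prompt}");
}
```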
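For the WebSocket change, a hedged sketch of parsing an incoming text frame as JSON and reading its `content` field. Only the field name comes from the commit; the frame shape, the helper, and the use of the `serde_json` crate are assumptions:

```rust
use serde_json::Value;

/// Parse a raw WebSocket text frame as JSON and return its `content`
/// field. Returns None for non-JSON frames or frames without a
/// string-valued `content` field.
fn extract_content(raw: &str) -> Option<String> {
    let msg: Value = serde_json::from_str(raw).ok()?;
    msg.get("content")?.as_str().map(str::to_owned)
}

fn main() {
    let frame = r#"{"content": "hello bot", "type": "user_message"}"#;
    assert_eq!(extract_content(frame).as_deref(), Some("hello bot"));
    assert_eq!(extract_content("not json"), None);
}
```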
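And the pool change: a sketch of a handler borrowing from one shared pool instead of opening its own connection. The commit does not name the driver; `sqlx` with Postgres, and the query itself, are assumptions here:

```rust
use sqlx::PgPool;

/// Run a query against the shared pool rather than opening a
/// separate custom connection. The table and column are hypothetical.
async fn fetch_bot_name(pool: &PgPool, bot_id: i64) -> Result<String, sqlx::Error> {
    let (name,): (String,) = sqlx::query_as("SELECT name FROM bots WHERE id = $1")
        .bind(bot_id)
        .fetch_one(pool) // borrows a connection, returns it when done
        .await?;
    Ok(name)
}
```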
d3c486094f - Fixing the `ngl` (GPU layer offload) parameter. 2025-10-14 17:02:11 -03:00
552fb56f54 - Fixing parameters for llama.cpp. 2025-10-14 17:00:00 -03:00
277f21ab18 - Fine-tuning GPT OSS 20B. 2025-10-14 16:57:50 -03:00
147d12b7c0 - `main.rs` is compiling again. 2025-10-11 20:02:14 -03:00
283774aa0f - Remove all compilation errors. 2025-10-11 12:29:03 -03:00
8a9cd104d6 - Warning removal and restore of old code. 2025-10-07 07:16:03 -03:00
c0c470e9aa - Fixing compilation errors. 2025-10-06 20:06:43 -03:00
Renamed from `src/llm/llm_local.rs`