- Extract LLM generation into `execute_llm_generation` and simplify keyword handling.
- Prepend the system prompt and session context to LLM prompts in `BotOrchestrator`.
- Parse incoming WebSocket messages as JSON and use the `content` field.
- Add an async `get_session_context` and stop injecting Redis context into the conversation history.
- Change the default LLM URL to `http://48.217.66.81:8080` throughout the project.
- Use the existing DB pool instead of creating a separate custom connection.
- Update `start.bas` to call the LLM and set a new context string.
- Refactor web client message handling: separate event processing, improve the streaming logic, reset streaming state when thinking ends, and remove unused test functions.
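The WebSocket change above could look like the following minimal sketch, assuming a helper that falls back to the raw frame when it is not JSON (the function name `parse_ws_message` and the fallback behavior are illustrative assumptions, not the project's actual code):

```python
import json

def parse_ws_message(raw: str) -> str:
    """Parse an incoming WebSocket frame as JSON and return its
    'content' field; fall back to the raw text for plain frames.
    (Hypothetical helper; the real handler lives in the server.)"""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return raw  # not JSON: treat the whole frame as the message text
    if isinstance(payload, dict) and "content" in payload:
        return str(payload["content"])
    return raw  # JSON without a 'content' field: keep the original frame
```

With this shape, a client sending `{"content": "hi"}` and a client sending bare text both resolve to a plain message string before reaching the orchestrator.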
`start.bas` (QBasic, 8 lines, 294 B):
```qbasic
TALK "Hello, I am preparing a summary for you."

LET x = LLM "What is 5+5?"

TALK x

SET_CONTEXT "This is the document you should use to answer questions: The sky is blue."

REM text = GET "default.pdf"
REM resume = LLM "Say Hello and present a resume from " + text
REM TALK resume
```
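Tying the script to the orchestrator change: the context string set by `SET_CONTEXT` is what the async `get_session_context` would later retrieve, and the final LLM prompt is built by prepending the system prompt and that session context to the user's message. A minimal sketch under those assumptions (the function name `build_llm_prompt`, the `store` parameter, and the joining format are hypothetical; the real logic lives in `BotOrchestrator`):

```python
async def build_llm_prompt(session_id: str, user_message: str,
                           system_prompt: str, store) -> str:
    """Assemble the prompt sent to the LLM: system prompt first, then
    the session context (if any), then the user's message.
    Hypothetical sketch of the orchestrator's prompt assembly."""
    context = await store.get_session_context(session_id)  # "" when unset
    parts = [system_prompt]
    if context:
        parts.append(context)
    parts.append(user_message)
    return "\n\n".join(parts)
```

So with the script's context ("The sky is blue.") stored for the session, a later question about the sky would reach the model with that document already in the prompt.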