- Export handleMentionInput, handleMentionKeydown, hideMentionDropdown to window object
- Fix chat-init.js to use window.handleMentionInput with proper checks
- Prevents ReferenceError when chat initializes
- Ensures suggestion buttons and switchers work correctly
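The guarded export/lookup pattern can be sketched as follows; the handler body and the `safeMentionInput` wrapper are illustrative, not the actual botui code:

```javascript
// Module side: define the handler and attach it to window so chat-init.js
// and inline handlers can find it (handler body is illustrative).
function handleMentionInput(value) {
  // Extract the token after '@' as a mention query; null when not a mention.
  return value.startsWith('@') ? value.slice(1) : null;
}
if (typeof window !== 'undefined') {
  window.handleMentionInput = handleMentionInput;
}

// chat-init.js side: check before calling to avoid a ReferenceError
// (safeMentionInput is a hypothetical wrapper for this sketch).
function safeMentionInput(value) {
  return typeof window !== 'undefined' &&
    typeof window.handleMentionInput === 'function'
    ? window.handleMentionInput(value)
    : handleMentionInput(value); // local fallback in this sketch
}
```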
- Remove hasHtmlTags logic that injected raw HTML
- Always use escapeHtml to display content as text
- Fixes HTML tags appearing on the page
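A typical escapeHtml helper looks like the following; the actual helper in botui may differ in detail:

```javascript
// Escape the five HTML-significant characters so content renders as text.
// '&' must be replaced first to avoid double-escaping the other entities.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```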
- Remove LLM chunk streaming; accumulate the full response before sending
- Fix 'action' variable to 'actionData' in the suggestions click handler
- Add window.sendMessage() fallback when the WebSocket is not open
- Add DOMContentLoaded guard in chat-init.js
- Add cache-busting (?v=4) in chat.html
Impact:
- start.bas runs correctly when the WebSocket connects
- HTML is no longer truncated (tags close correctly)
- Suggestions execute tool invocations via WebSocket
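The accumulate-before-send idea behind the chunk-streaming removal can be sketched as (names are illustrative, not the actual server API):

```javascript
// Collect LLM chunks into one buffer; deliver the complete response only
// when the stream finishes, instead of forwarding each chunk.
function createAccumulator(onComplete) {
  let buffer = '';
  return {
    push(chunk) { buffer += chunk; },        // called per LLM chunk
    finish() {                               // called at end of stream
      onComplete(buffer);
      return buffer;
    },
  };
}
```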
- botui/chat-messages.js: HTML chunks are now accumulated without rendering,
  showing only a loading indicator; when is_complete=true, the full HTML is
  rendered at once. Text/markdown continues to stream normally.
- botserver/mod.rs: Remove unused html_buffer variable
- drive_monitor/monitor.rs: Change CHECK_INTERVAL_SECS from 1 to 2
- CI workflow: Fix paths to use target/fast/ instead of target/debug/
and target/release/
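The chat-messages.js behavior above can be sketched as follows; is_complete comes from the commit text, while is_html, chunk, and the render callback are assumptions of this sketch:

```javascript
// Buffer HTML chunks, report loading until is_complete, then render the
// full HTML at once. Non-HTML messages stream straight through.
function createHtmlChunkHandler(render) {
  let html = '';
  return function onMessage(msg) {
    if (msg.is_html) {
      html += msg.chunk || '';
      if (msg.is_complete) {
        render(html);       // full HTML rendered in one shot
        html = '';
        return 'rendered';
      }
      return 'loading';     // only a loading indicator is shown meanwhile
    }
    render(msg.chunk);      // text/markdown still streams normally
    return 'streamed';
  };
}
```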
- botserver: implemented tag-aware streaming to prevent broken HTML chunks
- botserver: disabled automatic HTML-to-Markdown conversion to preserve rich design
- botserver/llm: added Claude 3.7 thinking/reasoning support
- botui: fixed chat-messages.js to allow rich HTML rendering and stop tag stripping
- botui: updated CI/CD to build botui in release mode with embedded UI
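Tag-aware streaming boils down to flushing the buffer only when no tag is left open or cut off mid-way. The real implementation is in Rust; this JS sketch, which ignores void elements like `<br>` for brevity, illustrates the check:

```javascript
// A buffered chunk may be flushed only when every opened tag has a matching
// close and no tag is truncated at the end of the buffer.
function canFlush(buffer) {
  const opens = (buffer.match(/<[a-zA-Z][^>]*>/g) || [])
    .filter((t) => !t.endsWith('/>')).length;          // skip self-closing
  const closes = (buffer.match(/<\/[a-zA-Z][^>]*>/g) || []).length;
  const partialTag = /<[^>]*$/.test(buffer);           // '<' with no '>'
  return opens === closes && !partialTag;
}
```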
- Backend: Add strip_html_tags() function to remove HTML from XLSX cells
- Frontend: Strip HTML tags before displaying bot messages
- Prompt: Update PROMPT.md to instruct GPT not to show raw HTML
Fixes issue where XLSX cell content with HTML formatting
was being displayed as raw HTML tags in chat responses.
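On the frontend side, stripping can be a simple regex pass; the backend strip_html_tags() is Rust, so this JS analog is only illustrative:

```javascript
// Remove anything that looks like an HTML tag, keeping only the text content.
function stripHtmlTags(text) {
  return String(text).replace(/<[^>]*>/g, '');
}
```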
- Added SWITCHER_TOGGLE message type (8) for reprocessing last user message with active switchers
- Backend: Handler fetches last user question from DB, mutates message in-place, injects switcher prompts into system_prompt
- Backend: Switcher replays skip message_history save to avoid duplication
- Frontend: toggleSwitcher() sends SWITCHER_TOGGLE when input empty, sendMessage() when text present
- Frontend: Added TOOL_EXEC and SWITCHER_TOGGLE to MessageType constants
- Fixed session_id shadowing bug in DB query (used session_id_for_query)
- Preserves conversation history for LLM context when reprocessing with switchers
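The toggle routing described above can be sketched as follows; only SWITCHER_TOGGLE = 8 is stated in the commit, so the TOOL_EXEC value and the payload shape are assumptions:

```javascript
const MessageType = { TOOL_EXEC: 7, SWITCHER_TOGGLE: 8 }; // TOOL_EXEC value assumed

// Empty input: replay the last question with the active switchers applied.
// Non-empty input: send it as a normal message carrying the switcher list.
function buildToggleMessage(inputText, activeSwitchers) {
  if (inputText.trim() === '') {
    return { type: MessageType.SWITCHER_TOGGLE, active_switchers: activeSwitchers };
  }
  return { type: 'message', text: inputText, active_switchers: activeSwitchers };
}
```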
- Split partials/chat.html (1513→70 lines) into 8 JS modules:
chat-state.js, chat-switchers.js, chat-mentions.js,
chat-messages.js, chat-suggestions.js, chat-theme.js,
chat-websocket.js, chat-init.js
- Centralized state in ChatState global object
- Switcher chips auto-activate on switch_context suggestion action
- active_switchers sent in every WS message payload
- Removed old chat-main.js (merged into modules)
- Split vibe.html into vibe/ module directory with CSS extraction
- Updated standalone chat/chat.html to use same modules
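A minimal sketch of a centralized state object shared across the split modules; the commit only names ChatState, so the fields and methods here are illustrative:

```javascript
// Single shared state object: switcher toggles mutate it, and every
// WebSocket payload reads active_switchers from the same place.
const ChatState = {
  activeSwitchers: new Set(),
  toggleSwitcher(name) {
    this.activeSwitchers.has(name)
      ? this.activeSwitchers.delete(name)
      : this.activeSwitchers.add(name);
    return [...this.activeSwitchers];
  },
  wsPayload(text) {
    return { text, active_switchers: [...this.activeSwitchers] };
  },
};
```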
- Event listeners were lost when renderSwitchers() re-created DOM
- Now using event delegation on parent container
- Listener attached once, persists across re-renders
- Added logging to verify active_switchers payload
Fixes switcher toggle not persisting and LLM modifier not being sent.
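The delegation fix can be sketched as follows, assuming each chip carries a data-switcher attribute (the chip markup is an assumption of this sketch):

```javascript
// One listener on the stable parent container; clicks on any current or
// future chip bubble up to it, so re-rendering children loses nothing.
function attachSwitcherDelegation(container, onToggle) {
  container.addEventListener('click', (event) => {
    const chip = event.target.closest('[data-switcher]');
    if (chip) onToggle(chip.dataset.switcher);
  });
}
```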
- chat.html reduced from 1623 lines to 59 lines
- Created chat-switchers.js for switcher state management
- Created chat-messages.js for message rendering
- Created chat-main.js for initialization and coordination
- Added console logging to debug switcher toggle functionality
- Follows AGENTS.md 450-line limit rule
- Detect HTML content (starts with <) in streaming messages and
bypass marked.parse() to render directly as innerHTML
- marked.parse() was corrupting the LLM's raw HTML output by
treating it as Markdown (escaping tags, wrapping in <p>, etc.)
- Updated PROMPT.md for Salesianos to be more explicit about
returning ramal data directly from KB context without asking
for unnecessary clarification
- Fixed ramais.bas tool (removed invalid BEGIN/END syntax)
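The HTML-detection branch from the chat-messages.js fix can be sketched as (parseMarkdown stands in for marked.parse; the return shape is illustrative):

```javascript
// Content starting with '<' is treated as raw HTML and bypasses the
// Markdown parser, which would otherwise escape tags and wrap them in <p>.
function renderContent(raw, parseMarkdown) {
  const trimmed = raw.trimStart();
  if (trimmed.startsWith('<')) {
    return { mode: 'html', html: trimmed };          // set as innerHTML directly
  }
  return { mode: 'markdown', html: parseMarkdown(raw) };
}
```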