- Add token-aware text truncation utility in core/shared/utils.rs
- Fix embedding generators to use a 600-token limit (safely under 768)
- Fix LLM context-limit detection for local models (768 vs 4096)
- Prevent "exceed context size" errors for both embeddings and chat
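The truncation utility described above might look something like the sketch below. This is a hypothetical reconstruction, not the actual contents of core/shared/utils.rs: the function names (`estimate_tokens`, `truncate_to_tokens`) and the ~4-characters-per-token heuristic are assumptions, since the real tokenizer used by the embedding model is not shown in the commit. The 600-token cap matches the limit named in the commit message.

```rust
/// Assumed cap from the commit message: 600 tokens, safely under the
/// 768-token context of the local embedding model.
const MAX_EMBEDDING_TOKENS: usize = 600;

/// Rough token estimate using the common ~4 chars/token BPE heuristic.
/// (Assumption: the real code may call the model's actual tokenizer.)
fn estimate_tokens(text: &str) -> usize {
    (text.chars().count() + 3) / 4
}

/// Truncate `text` so its estimated token count stays within `max_tokens`.
/// Slices on a char boundary so we never split a UTF-8 sequence.
fn truncate_to_tokens(text: &str, max_tokens: usize) -> &str {
    let max_chars = max_tokens * 4;
    match text.char_indices().nth(max_chars) {
        Some((byte_idx, _)) => &text[..byte_idx],
        None => text, // already short enough
    }
}

fn main() {
    let long = "a".repeat(4000);
    let cut = truncate_to_tokens(&long, MAX_EMBEDDING_TOKENS);
    assert!(estimate_tokens(cut) <= MAX_EMBEDDING_TOKENS);
    assert_eq!(truncate_to_tokens("short", MAX_EMBEDDING_TOKENS), "short");
    println!("kept {} chars", cut.chars().count());
}
```

A char-count heuristic like this over-truncates for some scripts and under-truncates for others, which is why capping at 600 rather than the full 768 leaves headroom.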
4 lines
33 B
Bash
Executable file
```bash
#!/usr/bin/env bash
# Stop the UI process, then force-kill the server.
pkill botui
pkill -9 botserver
```