gb/reset.sh
Rodrigo Rodriguez (Pragmatismo) 3befc141e5 Fix token limits for local llama.cpp server
- Add a token-aware text truncation utility in core/shared/utils.rs (sketched below)
- Fix embedding generators to use a 600-token limit (safely under the model's 768-token context)
- Fix LLM context-limit detection for local models (768 vs 4096 tokens)
- Prevent 'exceed context size' errors for both embeddings and chat
2026-02-02 11:56:13 -03:00
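
The truncation utility itself is not shown on this page. As a rough illustration of the approach the commit describes, here is a minimal Rust sketch of a token-aware truncation helper, assuming a crude 4-characters-per-token heuristic; the constant and function names are hypothetical, and the real utility in core/shared/utils.rs would presumably count tokens with the model's actual tokenizer.

// Token budget for the local embedding model; the commit keeps this
// safely under the llama.cpp server's 768-token context window.
// All names and the chars-per-token heuristic below are illustrative.
const EMBEDDING_TOKEN_LIMIT: usize = 600;

// Crude approximation; a real implementation would count tokens with
// the model's own tokenizer instead of estimating from characters.
const CHARS_PER_TOKEN: usize = 4;

// Truncate `text` so its estimated token count stays within `max_tokens`,
// cutting on a char boundary so the result remains valid UTF-8.
fn truncate_to_token_limit(text: &str, max_tokens: usize) -> &str {
    let max_chars = max_tokens.saturating_mul(CHARS_PER_TOKEN);
    match text.char_indices().nth(max_chars) {
        Some((byte_idx, _)) => &text[..byte_idx],
        None => text, // already within the budget
    }
}

fn main() {
    let long_input = "lorem ipsum ".repeat(1_000); // far beyond 600 tokens
    let clipped = truncate_to_token_limit(&long_input, EMBEDDING_TOKEN_LIMIT);
    println!("kept {} of {} chars", clipped.chars().count(), long_input.chars().count());
}

Applying the same guard to chat prompts, with the detected 768-token context for local models in place of the 4096-token default, is what prevents the 'exceed context size' errors the commit mentions.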

#!/usr/bin/env bash
rm -rf botserver-stack/ ./work/ .env