GeneralBots / gb
Commit 92f2c012f0
gb / reset.sh (2 lines, 38 B, Bash)
Fix token limits for local llama.cpp server

- Add a token-aware text truncation utility in core/shared/utils.rs
- Fix embedding generators to use a 600-token limit (safely under 768)
- Fix LLM context-limit detection for local models (768 vs. 4096)
- Prevent "exceed context size" errors for both embeddings and chat
2026-02-02 11:56:13 -03:00
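The token-aware truncation described in the commit message can be sketched as below. This is a minimal illustration only: it assumes a rough 4-characters-per-token heuristic, whereas the actual utility in core/shared/utils.rs may use the model's real tokenizer. The function names (`approx_token_count`, `truncate_to_tokens`) are hypothetical.

```rust
/// Rough token estimate assuming ~4 characters per token
/// (a common heuristic; NOT the model's real tokenizer).
fn approx_token_count(text: &str) -> usize {
    (text.chars().count() + 3) / 4
}

/// Truncate `text` so its approximate token count stays at or below
/// `max_tokens`, e.g. 600 to stay safely under a 768-token
/// embedding context.
fn truncate_to_tokens(text: &str, max_tokens: usize) -> String {
    let max_chars = max_tokens * 4;
    if text.chars().count() <= max_chars {
        return text.to_string();
    }
    // Take whole characters, never splitting a multi-byte code point.
    text.chars().take(max_chars).collect()
}

fn main() {
    let long = "word ".repeat(1000); // 5000 chars, ~1250 tokens by this heuristic
    let cut = truncate_to_tokens(&long, 600);
    assert!(approx_token_count(&cut) <= 600);
    println!("kept {} chars", cut.chars().count());
}
```

Capping embeddings at 600 tokens rather than the full 768 leaves headroom for any special tokens the embedding model prepends or appends, which is presumably why the commit chose a limit well under the context size.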
```sh
rm -rf botserver-stack/ ./work/ .env
```