GeneralBots / gb
gb / stop.sh at commit 8c3f51a49d (5 lines, 33 B, Bash)
stop.sh, with the last commit touching each line:

pkill botui
    Update workspace configuration and submodules (2026-01-30 23:25:02 -03:00)

pkill botserver -9
    Fix token limits for local llama.cpp server (2026-02-02 11:56:13 -03:00):
      - Add token-aware text truncation utility in core/shared/utils.rs
      - Fix embedding generators to use 600 token limit (safe under 768)
      - Fix LLM context limit detection for local models (768 vs 4096)
      - Prevent "exceed context size" errors for both embeddings and chat