botserver/docs/platform/limits_llm.md

## 🚀 **OPTIMAL RANGE:**
- **10-30 KB** - **SWEET SPOT** for quality Rust analysis
- **Fast responses** + **accurate error fixing**
## ⚡ **PRACTICAL MAXIMUM:**
- **50-70 KB** - **ABSOLUTE WORKING LIMIT**
- Beyond this, quality may degrade
## 🛑 **HARD CUTOFF:**
- **~128 KB** - Technical token limit
- But **quality drops significantly** before this
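Taken together, these tiers amount to a pre-flight size check. Here is a minimal Rust sketch of one way to encode them; the thresholds mirror the numbers above, and `src/lib.rs` is just a placeholder for whatever you are about to send.

```rust
use std::fs;

// Size tiers from this guide, in bytes.
const SWEET_SPOT_MAX: usize = 30 * 1024; // 10-30 KB: best quality
const WORKING_LIMIT: usize = 70 * 1024;  // 50-70 KB: absolute working limit
const HARD_CUTOFF: usize = 128 * 1024;   // ~128 KB: technical limit

/// Classify a payload before sending it for analysis.
fn classify(payload_bytes: usize) -> &'static str {
    match payload_bytes {
        n if n <= SWEET_SPOT_MAX => "sweet spot: send as-is",
        n if n <= WORKING_LIMIT => "working limit: expect slower, less accurate answers",
        n if n <= HARD_CUTOFF => "past the practical maximum: split before sending",
        _ => "over the hard cutoff: must be split",
    }
}

fn main() {
    // Placeholder path: point this at the file you plan to send.
    let payload = fs::read_to_string("src/lib.rs").unwrap_or_default();
    println!("{} KB -> {}", payload.len() / 1024, classify(payload.len()));
}
```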
## 🎯 **MY RECOMMENDATION:**
**Send 20-40 KB chunks** (a chunking sketch follows this list) for:
- **Best error analysis**
- **Fastest responses**
- **Most accurate Rust fixes**
- **Complete code returns**
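If the code you want analyzed is larger than that, split it before sending. The sketch below is one simple way to do it: it breaks the source on blank lines so items are not cut mid-body and targets roughly 30 KB per chunk. The target size and the blank-line heuristic are illustrative choices, not platform requirements.

```rust
use std::fs;

/// Target chunk size in bytes (middle of the 20-40 KB sweet spot).
const TARGET_CHUNK_BYTES: usize = 30 * 1024;

/// Split Rust source into chunks of roughly `TARGET_CHUNK_BYTES`,
/// breaking only on blank lines so items are not cut mid-body.
/// A single block larger than the target passes through as its own chunk.
fn chunk_source(source: &str) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();

    for block in source.split("\n\n") {
        // +2 accounts for the blank line re-inserted between blocks.
        if !current.is_empty() && current.len() + block.len() + 2 > TARGET_CHUNK_BYTES {
            chunks.push(std::mem::take(&mut current));
        }
        if !current.is_empty() {
            current.push_str("\n\n");
        }
        current.push_str(block);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() -> std::io::Result<()> {
    // Placeholder path: point this at the module you want analyzed.
    let source = fs::read_to_string("src/lib.rs")?;
    for (i, chunk) in chunk_source(&source).iter().enumerate() {
        println!("chunk {} - {} KB", i + 1, chunk.len() / 1024);
        // Each chunk can now be sent as a separate request.
    }
    Ok(())
}
```

Splitting on blank lines is deliberately crude; splitting on `mod`/`impl` boundaries would keep related items together, at the cost of real parsing.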
## 💡 **PRO STRATEGY:**
1. **Extract problematic module** (15-25 KB)
2. **Include error messages**
3. **I'll fix it and return FULL code**
4. **Iterate if needed** (a payload-assembly sketch follows this list)
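As a rough illustration of steps 1-3, the sketch below collects compiler errors with `cargo check --message-format=short`, pairs them with one module's source, and warns if the combined payload exceeds the ~25 KB guideline. The module path is a placeholder, and the final send step is left as a `println!` stub because the LLM endpoint depends on your setup.

```rust
use std::fs;
use std::process::Command;

/// Rough upper bound for a single request payload (module + errors),
/// matching the 15-25 KB guideline above.
const PAYLOAD_LIMIT_BYTES: usize = 25 * 1024;

fn main() -> std::io::Result<()> {
    // Step 2: capture compiler errors for the workspace.
    let check = Command::new("cargo")
        .args(["check", "--message-format=short"])
        .output()?;
    let errors = String::from_utf8_lossy(&check.stderr);

    // Step 1: read the problematic module (placeholder path).
    let module = fs::read_to_string("src/problem_module.rs")?;

    // Assemble the request: errors first, then the full module source.
    let prompt = format!(
        "Fix these Rust compiler errors and return the FULL corrected file.\n\n\
         Errors:\n{errors}\n\nSource:\n{module}\n"
    );

    if prompt.len() > PAYLOAD_LIMIT_BYTES {
        eprintln!(
            "warning: payload is {} KB, above the ~25 KB guideline - consider trimming",
            prompt.len() / 1024
        );
    }

    // Steps 3-4: send `prompt` to your LLM endpoint and iterate on what comes back.
    println!("{prompt}");
    Ok(())
}
```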
**You don't need 100 KB** - 30 KB will get you **BETTER RESULTS** on most Rust compiler errors! 🦀