botserver/prompts/dev
Rodrigo Rodriguez (Pragmatismo) 4ce06daf75 feat(automation): improve prompt compaction with async LLM summarization
- Added initial 30s delay to compact prompt scheduler
- Implemented async LLM summarization for conversation history
- Reduced lock contention by minimizing critical sections
- Added fallback to original text if summarization fails
- Updated README with guidance on handling failed requirements
- Added new `summarize` method to the LLMProvider trait (see the sketch after this list)
- Improved session manager query with proper DSL usage
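
A minimal Rust sketch of the new trait method and its fallback behaviour, assuming the `async_trait` crate is in use; only the `LLMProvider` name and the `summarize` method come from this change, the error alias and the `compact_chunk` helper are hypothetical:

```rust
use async_trait::async_trait;

type BoxError = Box<dyn std::error::Error + Send + Sync>;

#[async_trait]
pub trait LLMProvider: Send + Sync {
    /// Condense a block of conversation history into a shorter summary.
    async fn summarize(&self, text: &str) -> Result<String, BoxError>;
}

/// Compact one chunk of history, falling back to the original text if the
/// LLM call fails or returns an empty summary, so compaction never loses data.
pub async fn compact_chunk(provider: &dyn LLMProvider, history: &str) -> String {
    match provider.summarize(history).await {
        Ok(summary) if !summary.trim().is_empty() => summary,
        _ => history.to_string(), // fallback: keep the original text
    }
}
```

Keeping the fallback in the caller means a failed or empty summary never drops conversation history.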

The changes optimize the prompt compaction process by:
1. Reducing lock contention by keeping critical sections short (see the scheduler sketch after this list)
2. Adding LLM-based summarization for better conversation compression
3. Making the system more resilient with proper error handling
4. Improving documentation for development practices
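
A rough sketch of the scheduler shape, assuming a tokio runtime; the shared `Mutex<Vec<String>>` history store, the 60s tick, and the helper names are illustrative, only the 30s initial delay and summarizing outside the lock reflect this change:

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;

pub async fn run_compaction_scheduler(history: Arc<Mutex<Vec<String>>>) {
    // Initial 30s delay so startup work finishes before the first pass.
    tokio::time::sleep(Duration::from_secs(30)).await;

    let mut ticker = tokio::time::interval(Duration::from_secs(60));
    loop {
        ticker.tick().await;

        // Keep the critical section small: copy the data out and release the
        // lock before doing the slow LLM summarization.
        let snapshot = { history.lock().await.clone() };
        let compacted = summarize_history(snapshot).await;

        // Re-acquire the lock only long enough to swap in the result.
        *history.lock().await = compacted;
    }
}

// Stand-in for the real async LLM call (see the `summarize` sketch above);
// on failure the caller keeps the original text.
async fn summarize_history(lines: Vec<String>) -> Vec<String> {
    lines
}
```

Cloning the history out and re-locking only to store the result is what keeps the critical sections short.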
2025-11-06 17:07:12 -03:00
basic Enhance bot memory and Redis guards 2025-10-16 14:22:28 -03:00
docs docs: expand session management and add authentication section 2025-11-03 20:42:38 -03:00
platform feat(automation): improve prompt compaction with async LLM summarization 2025-11-06 17:07:12 -03:00