- Added an initial 30s delay to the compact prompt scheduler
- Implemented async LLM summarization of conversation history
- Reduced lock contention by minimizing critical sections
- Added a fallback to the original text when summarization fails
- Updated README with guidance for failed requirements
- Added a new `summarize` method to the `LLMProvider` trait
- Improved the session manager query with proper DSL usage

These changes optimize the prompt compaction process by:

1. Reducing lock contention through better resource management
2. Adding LLM-based summarization for better conversation compression
3. Making the system more resilient with proper error handling
4. Improving documentation for development practices