When a conversation grows long, its accumulated history can exceed the LLM’s context-window token limit. **Context compaction** reduces the stored history while preserving essential information.
## Strategies
1. **Summarization** – Periodically run `TALK FORMAT` with a summarization prompt and replace older messages with the summary.
2. **Memory Pruning** – Use `SET_BOT_MEMORY` to store only key facts (e.g., user name, preferences) and discard raw chat logs.
3. **Chunk Rotation** – Keep a sliding window of the most recent *N* messages (configurable via `context_window` in `.gbot/config.csv`).
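These strategies compose naturally: rotate a sliding window of recent messages and fold evicted ones into a running summary. Below is a minimal Python sketch of that combination, not the bot's actual implementation; `summarize` is a hypothetical placeholder standing in for an LLM summarization call.

```python
from collections import deque


class CompactingHistory:
    """Chunk rotation with summarization of evicted messages.

    Keeps the most recent `window_size` messages verbatim; older
    messages are folded into a running summary before eviction.
    `summarize(summary, message)` is a placeholder for an LLM call.
    """

    def __init__(self, window_size, summarize):
        self.window = deque(maxlen=window_size)
        self.summarize = summarize
        self.summary = ""

    def add(self, message):
        # If the window is full, the oldest message is about to rotate
        # out on append, so fold it into the summary first.
        if len(self.window) == self.window.maxlen:
            evicted = self.window[0]
            self.summary = self.summarize(self.summary, evicted)
        self.window.append(message)

    def context(self):
        """Compacted context: the summary (if any) plus recent messages."""
        parts = [f"[summary] {self.summary}"] if self.summary else []
        return parts + list(self.window)


# Usage with a trivial stand-in summarizer that concatenates text:
history = CompactingHistory(3, lambda s, m: (s + " " + m).strip())
for msg in ["hi", "my name is Ana", "I like jazz", "play something"]:
    history.add(msg)
print(history.context())
```

A real deployment would replace the lambda with a summarization prompt and persist `self.summary` via something like `SET_BOT_MEMORY`, so key facts survive even after the raw messages are discarded.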