- Removed unused `id` and `app_state` fields from `ChatPanel`; the constructor still accepts the state but ignores it, reducing the struct's memory footprint.
- Switched database access in `ChatPanel` from a raw `Mutex` lock to a connection pool (`app_state.conn.get()`), improving concurrency and error handling.
- Reordered and cleaned up imports in `status_panel.rs` and formatted struct fields for readability.
- Updated the VS Code launch configuration to pass the `--noui` argument, enabling headless mode for debugging.
- Bumped several crate versions in `Cargo.lock` (e.g., `bitflags` to 2.10.0, `syn` to 2.0.108, `cookie` to 0.16.2) and added the new `ashpd` dependency, aligning the project with the latest library releases.
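The Mutex-to-pool change above can be sketched with a toy pool. This is a minimal illustration of why a pool improves concurrency, not the project's actual code: real code would use a crate such as `r2d2` with `rusqlite`, and the `Conn`/`Pool` types here are stand-ins.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for a database connection (real code: a rusqlite::Connection).
struct Conn(u32);

// Toy pool: each thread checks out its own connection, so queries run in
// parallel, unlike a single Mutex<Connection> which serializes all access.
struct Pool {
    conns: Mutex<Vec<Conn>>,
}

impl Pool {
    fn new(n: u32) -> Self {
        Pool { conns: Mutex::new((0..n).map(Conn).collect()) }
    }
    // The lock is held only long enough to pop a connection out,
    // not for the whole duration of the query.
    fn get(&self) -> Option<Conn> {
        self.conns.lock().unwrap().pop()
    }
    fn put(&self, c: Conn) {
        self.conns.lock().unwrap().push(c);
    }
}

fn main() {
    let pool = Arc::new(Pool::new(4));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let p = Arc::clone(&pool);
            thread::spawn(move || {
                let c = p.get().expect("pool exhausted");
                // ... run a query on `c` here ...
                p.put(c);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // All connections were returned to the pool.
    assert_eq!(pool.conns.lock().unwrap().len(), 4);
}
```

An `r2d2`-style `get()` additionally returns a `Result`, which is where the improved error handling comes from: pool exhaustion or a broken connection surfaces as an error value instead of a blocked lock.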
Files in this directory:
- add-keyword.md
- add-model.md
- add-service.md
- botserver.md
- cli.md
- doc-guide-topic.md
- fix-errors.md
- ide.md
- README.md
- shared.md
# LLM Strategy & Workflow

## Fallback Strategy (after 3 attempts / 10 minutes)

When the initial attempts fail, try these LLMs in order:
- DeepSeek-V3-0324: a good architect; adventurous and reliable, though it leaves small errors to be cleaned up later by the gpt-* models.
- gpt-5-chat: slower, and tends to leave warnings unresolved.
- gpt-oss-120b
- Claude (Web): copy over only the problem statement; use it to create unit tests and to create or extend the UI.
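The fallback chain above can be sketched as a retry-then-fallback loop. The model names are copied from the list; `solve` and the `try_model` callback are hypothetical stand-ins for a real LLM call, used only to show the control flow:

```rust
use std::time::{Duration, Instant};

// Fallback order from the strategy notes above.
const FALLBACKS: [&str; 3] = ["DeepSeek-V3-0324", "gpt-5-chat", "gpt-oss-120b"];

/// Try the primary model up to 3 times (or until a 10-minute deadline),
/// then walk the fallback chain in order. `try_model` stands in for an
/// actual LLM call and reports whether the attempt succeeded.
fn solve<F: FnMut(&str) -> bool>(mut try_model: F) -> Option<&'static str> {
    let deadline = Instant::now() + Duration::from_secs(600); // 10 minutes
    for _attempt in 0..3 {
        if Instant::now() > deadline {
            break;
        }
        if try_model("primary") {
            return Some("primary");
        }
    }
    // After 3 failed attempts (or timeout), fall back sequentially.
    for model in FALLBACKS {
        if try_model(model) {
            return Some(model);
        }
    }
    None
}

fn main() {
    // Simulate a run where only gpt-oss-120b succeeds.
    let result = solve(|model| model == "gpt-oss-120b");
    assert_eq!(result, Some("gpt-oss-120b"));
}
```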
## Development Workflow
- One requirement at a time with sequential commits
- On an unresolved error: stop, run add-req.sh, and consult Claude for guidance; also try DeepSeek with DeepThink and Web search enabled.
- Change progression: Start with DeepSeek, conclude with gpt-oss-120b
- If a big requirement fails, point the model at a @code file with a similar pattern, or at a sample from the official docs.
- Final validation: Use prompt "cargo check" with gpt-oss-120b
- Be humble: one requirement, one commit. But sometimes the freedom of chaos is welcome, when no deadlines are set.
- Fix the code manually when the trouble is dangerous.
- Keep only deployed and tested code in the main codebase; no lab/experimental code in the main project. At minimum, use optional features to introduce new behaviour gradually in PRODUCTION.
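The optional-features rule above can be sketched with Cargo feature gating. The `experimental_chat` feature name is hypothetical; the point is that the new code path is off by default, so production builds are unchanged until the feature is explicitly enabled:

```rust
// In Cargo.toml, declare an off-by-default feature:
//
// [features]
// default = []
// experimental_chat = []

// The new behaviour compiles in only with
// `cargo build --features experimental_chat`.
#[cfg(feature = "experimental_chat")]
fn chat_backend() -> &'static str {
    "experimental" // new, still-in-lab code path
}

#[cfg(not(feature = "experimental_chat"))]
fn chat_backend() -> &'static str {
    "stable" // deployed, tested behaviour
}

fn main() {
    // Without the feature flag, this prints "stable".
    println!("{}", chat_backend());
}
```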
- Transform good articles into prompts for the coder.
- Prefer libraries with LLM affinity, i.e., ones the models already know well.
- Remember to send 'continue' to LLMs: they can hit EOF and claim to be done while still having more to output.