Instead of downloading pre-built binaries (which may require AVX2), compile llama.cpp from source during installation. This ensures:

- Works on older CPUs without AVX2 (Sandy Bridge, Ivy Bridge, etc.)
- Uses GGML_NATIVE=ON to optimize for the current CPU
- Binary path updated to build/bin/llama-server

Reverts the AVX2 detection that was incorrectly disabling LLM.
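For illustration, here is a minimal Rust sketch of what the from-source build step might look like, assuming the installer shells out to CMake. `build_llama_cpp` and the `vendor/llama.cpp` checkout path are hypothetical, not code or paths from this repository; `GGML_NATIVE` and the `llama-server` target are llama.cpp's own CMake names.

```rust
use std::path::{Path, PathBuf};
use std::process::Command;

/// Hypothetical helper sketching the from-source build: configure with
/// GGML_NATIVE=ON so the compiler targets the host CPU (no hard AVX2
/// requirement), build the llama-server target, and return the binary path.
fn build_llama_cpp(src_dir: &Path) -> std::io::Result<PathBuf> {
    // Configure step. GGML_NATIVE=ON tunes codegen for the machine doing
    // the build, so the result runs on pre-AVX2 CPUs and still uses
    // AVX2/AVX-512 where the host supports them.
    let status = Command::new("cmake")
        .args(["-B", "build", "-DGGML_NATIVE=ON", "-DCMAKE_BUILD_TYPE=Release"])
        .current_dir(src_dir)
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "cmake configure failed",
        ));
    }

    // Build only the server target.
    let status = Command::new("cmake")
        .args(["--build", "build", "--target", "llama-server"])
        .current_dir(src_dir)
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "cmake build failed",
        ));
    }

    // CMake places the binary under build/bin/, the path the commit
    // switches the launcher to.
    Ok(src_dir.join("build/bin/llama-server"))
}

fn main() -> std::io::Result<()> {
    // "vendor/llama.cpp" is an assumed location for the cloned sources.
    let bin = build_llama_cpp(Path::new("vendor/llama.cpp"))?;
    println!("built {}", bin.display());
    Ok(())
}
```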