botserver/src/llm
Rodrigo Rodriguez (Pragmatismo) 051c8f720c fix(llm): Compile llama.cpp from source for CPU compatibility
Instead of downloading pre-built binaries (which may require AVX2),
compile llama.cpp from source during installation. This ensures:
- Compatibility with older CPUs (Sandy Bridge, Haswell, etc.)
- Optimization for the current CPU via GGML_NATIVE=ON

The binary path is updated to build/bin/llama-server.

Reverts the AVX2 detection that was incorrectly disabling LLM.
2025-12-10 08:43:27 -03:00
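The from-source build described in the commit could look roughly like the sketch below. This is an assumption about the installer's steps, not the actual script from this repository; the clone URL, flags, and job count are illustrative.

```shell
# Hypothetical sketch of the from-source build the commit describes.
# Not the repository's actual install script.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# GGML_NATIVE=ON lets ggml tune for the host CPU at compile time,
# so the binary does not assume AVX2 on older chips
# (Sandy Bridge, Haswell, ...).
cmake -B build -DGGML_NATIVE=ON
cmake --build build --config Release -j

# The resulting server binary lives under build/bin/,
# matching the updated path mentioned in the commit:
./build/bin/llama-server --help
```

Because GGML_NATIVE tunes for the build machine, a binary compiled this way is optimized for, and only guaranteed to run on, the CPU it was built on, which is why the compile happens during installation rather than being distributed pre-built.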
context Add SQLx dependencies for calendar feature 2025-11-27 23:10:43 -03:00
llm_models Remove unused sqlx dependency and related code 2025-11-28 09:27:29 -03:00
prompt_manager - Refactor folder as features. 2025-11-22 22:55:35 -03:00
cache.rs Remove unused sqlx dependency and related code 2025-11-28 09:27:29 -03:00
cache_test.rs - New stuff, 6.1. 2025-11-21 23:23:53 -03:00
episodic_memory.rs feat(email): implement email read tracking with pixel support 2025-12-04 18:15:09 -03:00
local.rs fix(llm): Compile llama.cpp from source for CPU compatibility 2025-12-10 08:43:27 -03:00
mod.rs - New templates. 2025-12-03 07:15:54 -03:00
observability.rs - Split into botui. 2025-12-02 21:09:43 -03:00