botserver/src/core
Rodrigo Rodriguez (Pragmatismo) e443470295 fix(llm): Compile llama.cpp from source for CPU compatibility
Instead of downloading pre-built binaries (which may require AVX2),
compile llama.cpp from source during installation. This change:
- Works on older CPUs (Sandy Bridge, Haswell, etc.)
- Builds with GGML_NATIVE=ON to optimize for the current CPU
- Updates the binary path to build/bin/llama-server

Reverts the AVX2 detection that was incorrectly disabling the LLM.
2025-12-10 08:43:27 -03:00
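
The commit says llama.cpp is now compiled from source during installation with GGML_NATIVE=ON and that the resulting server binary lives at build/bin/llama-server. Below is a minimal Rust sketch of what such a build step could look like inside the package_manager module; the function name, CMake invocation details, and error handling are assumptions for illustration, not the actual botserver code.

```rust
use std::path::{Path, PathBuf};
use std::process::Command;

/// Build llama.cpp from a local source checkout instead of downloading a
/// pre-built (possibly AVX2-only) binary. GGML_NATIVE=ON lets the compiler
/// target whatever instruction set the host CPU actually supports.
/// NOTE: illustrative sketch only; not the real botserver implementation.
fn build_llama_cpp(src_dir: &Path) -> Result<PathBuf, String> {
    let build_dir = src_dir.join("build");

    // Configure step: cmake -B build -DGGML_NATIVE=ON
    let configure = Command::new("cmake")
        .arg("-B")
        .arg(&build_dir)
        .arg("-DGGML_NATIVE=ON")
        .current_dir(src_dir)
        .status()
        .map_err(|e| format!("failed to run cmake: {e}"))?;
    if !configure.success() {
        return Err("cmake configure step failed".into());
    }

    // Compile the llama-server target in release mode.
    let compile = Command::new("cmake")
        .arg("--build")
        .arg(&build_dir)
        .args(["--config", "Release", "--target", "llama-server"])
        .status()
        .map_err(|e| format!("failed to run cmake --build: {e}"))?;
    if !compile.success() {
        return Err("llama.cpp compilation failed".into());
    }

    // Per the commit message, the binary ends up under build/bin/.
    let binary = build_dir.join("bin").join("llama-server");
    if binary.exists() {
        Ok(binary)
    } else {
        Err(format!("expected binary not found at {}", binary.display()))
    }
}
```

Building on the host rather than shipping a generic binary trades a longer first install for a binary matched to the local instruction set, so machines without AVX2 (e.g. Sandy Bridge) are no longer excluded.
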
automation        - New templates. 2025-12-03 07:15:54 -03:00
bootstrap         fix(bootstrap): Improve Vault startup diagnostics and error handling 2025-12-10 08:30:49 -03:00
bot               Implement real database functions, remove TODOs and placeholders 2025-12-03 22:23:30 -03:00
config            fix(bootstrap): Improve Vault startup diagnostics and error handling 2025-12-10 08:30:49 -03:00
directory         - Split into botui. 2025-12-02 21:09:43 -03:00
dns               - Split into botui. 2025-12-02 21:09:43 -03:00
kb                Implement real database functions, remove TODOs and placeholders 2025-12-03 22:23:30 -03:00
oauth             chore: Remove emoji icons from log messages and UI 2025-12-09 07:55:11 -03:00
package_manager   fix(llm): Compile llama.cpp from source for CPU compatibility 2025-12-10 08:43:27 -03:00
secrets           Fix config.csv loading on startup 2025-12-08 00:19:29 -03:00
session           - New templates. 2025-12-03 07:15:54 -03:00
shared            fix(bootstrap): Improve Vault startup diagnostics and error handling 2025-12-10 08:30:49 -03:00
mod.rs            feat(auth): Add OAuth login for Google, Discord, Reddit, Twitter, Microsoft, Facebook 2025-12-04 22:53:40 -03:00
rate_limit.rs     Implement real database functions, remove TODOs and placeholders 2025-12-03 22:23:30 -03:00
urls.rs           feat(console): Show UI immediately with live system logs 2025-12-08 23:35:33 -03:00