Building an AI Homelab
As part of my job over the past year, I've been deploying and evaluating LLMs locally on various hardware setups. Naturally, I became well versed in the available open-source tools and models, as well as the hardware options. So I decided to build my own homelab for local inference.
Llama-Server is All You Need (Plus a Management Layer)
If you’re running LLMs locally, you’ve probably used Ollama or LM Studio. They’re both excellent tools, but I hit some limitations. LM Studio is primarily a desktop app that can’t run truly headless, while Ollama requires SSH-ing into your server every time you want to switch models or adjust parameters.
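To make the idea concrete, here is a minimal sketch of what such a management layer might look like: a small stdlib-only Python service that restarts llama-server with a different GGUF file on request, so switching models doesn't require an SSH session. The model directory, ports, and the `/switch` endpoint are placeholders of mine for illustration, not part of llama.cpp or any existing project; the llama-server flags (`-m`, `--host`, `--port`, `-ngl`, `-c`) are standard llama.cpp options.

```python
"""Minimal sketch of a management layer over llama-server.

Assumptions (mine, not from the post): llama-server is on PATH,
models live in MODEL_DIR as .gguf files, and inference is served
on LLAMA_PORT. GET /switch?model=NAME relaunches llama-server
with the chosen model.
"""
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path
from urllib.parse import urlparse, parse_qs

MODEL_DIR = Path("/srv/models")   # hypothetical model directory
LLAMA_PORT = 8080                 # port llama-server listens on
proc = None                       # handle to the running llama-server


def start_llama(model_path: Path) -> None:
    """Stop any running llama-server and launch it with a new model."""
    global proc
    if proc is not None:
        proc.terminate()
        proc.wait()
    # -m selects the model file, --host/--port bind the server,
    # -ngl offloads layers to the GPU, -c sets the context window.
    proc = subprocess.Popen([
        "llama-server", "-m", str(model_path),
        "--host", "0.0.0.0", "--port", str(LLAMA_PORT),
        "-ngl", "99", "-c", "8192",
    ])


class Manager(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/switch":
            self.send_error(404)
            return
        name = parse_qs(url.query).get("model", [""])[0]
        target = MODEL_DIR / f"{name}.gguf"
        if not target.is_file():
            self.send_error(404, f"no such model: {name}")
            return
        start_llama(target)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"now serving {name}\n".encode())


if __name__ == "__main__":
    # Management API on 9000; inference stays on LLAMA_PORT.
    HTTPServer(("0.0.0.0", 9000), Manager).serve_forever()
```

With something like this running, switching models from any machine on the network is a single request, e.g. `curl 'http://homelab:9000/switch?model=llama-3-8b-instruct'`, and clients keep talking to the OpenAI-compatible endpoint llama-server exposes on port 8080.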