omlx: continuous-batching MLX server with a menu-bar UI
The piece that was missing between "I downloaded a GGUF" and "this is actually a server I can build against."
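Continuous batching is what turns a downloaded model into a server: new requests join the running batch between decode steps instead of waiting for the current batch to drain. A toy sketch of that scheduling idea (illustrative only; this is not omlx's actual scheduler, and `serve` is a hypothetical name):

```python
# Toy continuous-batching loop. Each request is (name, tokens_to_generate);
# requests are admitted into free batch slots between decode steps, so a
# short request can finish while a long one is still generating.
import collections

def serve(requests, max_batch=4):
    queue = collections.deque(requests)
    active, done = {}, []
    step = 0
    while queue or active:
        # admit waiting requests into free batch slots
        while queue and len(active) < max_batch:
            name, n_tokens = queue.popleft()
            active[name] = n_tokens
        # one decode step for every active sequence
        for name in list(active):
            active[name] -= 1
            if active[name] == 0:
                del active[name]
                done.append((name, step))  # (request, step it finished on)
        step += 1
    return done
```

In the run `serve([("a", 3), ("b", 1), ("c", 1)], max_batch=2)`, request `c` finishes at step 1 even though `a` is still generating — the thing static batching can't do.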
Edition 06
picks
Local-first AI stops being a hobby project. The #2 post on HN today (1,210 points) is "Local AI needs to be the norm" — and the slate of releases that actually shipped today reads like an answer to it. A continuous-batching MLX server that lives in your Mac's menu bar. A terminal tool that figures out which model runs on the silicon you already have. Warp open-sourcing its agentic development environment. An autonomous AI team you spin up with one `npx` command and zero API key forms. A meeting transcriber that never phones home. And a self-hosted gateway that fronts every major model behind one URL you control. We dropped today's heaviest agent-framework picks (kagent, ARIS, hermes-agent's messaging fanout) — those are agent-runtime stories, not the local-stack story — and the ProductHunt SaaS firehose, which was 49 vertical agents looking for problems to solve.
01
The piece that was missing between "I downloaded a GGUF" and "this is actually a server I can build against."
02
The Steam hardware survey, but for whether GLM-4 is going to OOM your laptop.
03
The terminal that started the agent-IDE category opens up its own source.
04
The "no API key web form" line in the README is the whole pitch.
05
Whisper + Ollama in one Tauri app; everything stays on the laptop.
06
Multi-key rotation, channel failover, and a web admin in a single Go binary you can run yourself.
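Multi-key rotation with failover is a small amount of logic once you see it. A minimal Python sketch of the idea (the gateway itself is a Go binary; `KeyRotator` is a hypothetical name, not its API):

```python
from collections import deque

class KeyRotator:
    """Round-robin over API keys; retire a key after too many failures."""

    def __init__(self, keys, max_failures=3):
        self.keys = deque(keys)
        self.failures = {k: 0 for k in keys}
        self.max_failures = max_failures

    def next_key(self):
        if not self.keys:
            raise RuntimeError("all keys retired")
        key = self.keys[0]
        self.keys.rotate(-1)  # round-robin: move the front key to the back
        return key

    def report_failure(self, key):
        self.failures[key] += 1
        if self.failures[key] >= self.max_failures and key in self.keys:
            self.keys.remove(key)  # failover: stop handing out this key
```

Channel failover in the real gateway is the same shape one level up: rotate across upstream providers, and drop a channel that keeps erroring.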
Free. Unsubscribe by replying with one word. No tracking pixels in the email.