Main
Use apfel in two production-friendly ways:

- CLI for scripts: pipe text in/out, attach files with `-f`, and request JSON output for automation.
- Local server for SDKs/agents: point any OpenAI-compatible client at `http://localhost:11434/v1` and keep prompts on-device.
If you maintain agent tooling, start by wiring one integration (CLI or server) and add a smoke test that calls a tiny prompt so failures are obvious after OS updates.
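A smoke test along those lines can be sketched as a small shell script. This is only a sketch: the `check_response` helper is a name invented here, and the actual invocation (commented out) assumes `apfel` is installed and on `PATH`.

```shell
#!/bin/sh
# check_response RESPONSE — fail loudly if the model returned nothing.
# Hypothetical helper for a post-OS-update smoke test; pair it with a
# tiny prompt so breakage is obvious.
check_response() {
  if [ -z "$1" ]; then
    echo "smoke test failed: empty response" >&2
    return 1
  fi
  echo "smoke test passed"
}

# Usage (assumes apfel is installed and Apple Intelligence is enabled):
#   check_response "$(apfel 'Reply with the single word OK')"
```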
README excerpt (verbatim)
apfel
The free AI already on your Mac.
Apple Silicon Macs ship a built-in LLM via Apple FoundationModels. apfel exposes it as a UNIX tool and a local OpenAI-compatible server. 100% on-device. No API keys, no cloud.
| Mode | Command | What you get |
|---|---|---|
| UNIX tool | `apfel "prompt"` / `echo "text" \| apfel` | Pipe-friendly answers, file attachments, JSON output, exit codes |
| OpenAI-compatible server | `apfel --serve` | Drop-in local `http://localhost:11434/v1` backend for OpenAI SDKs |
`apfel --chat` starts an interactive REPL.
Tool calling works in all contexts. 4096-token context.
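Since the server speaks the OpenAI chat API, a raw `curl` request is enough to exercise it. A minimal sketch, assuming `apfel --serve` is running on the README's default port; the `make_payload` helper and the `"default"` model name are placeholders invented here, not part of apfel:

```shell
#!/bin/sh
# make_payload MODEL PROMPT — build a minimal OpenAI-style
# chat-completions request body (hypothetical helper).
make_payload() {
  printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' "$1" "$2"
}

# Usage (assumes the server was started with `apfel --serve`):
#   curl -s http://localhost:11434/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d "$(make_payload default 'Say hello')"
```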

Requirements & Install
macOS 26 Tahoe+, Apple Silicon (M1+), Apple Intelligence enabled.
```sh
brew install apfel
```

Update:

```sh
brew upgrade apfel
```

Build from source (Command Line Tools with macOS 26.4 SDK / Swift 6.3, no Xcode):

```sh
git clone https://github.com/Arthur-Ficial/apfel.git && cd apfel && make install
```

Nix, same-day tap, Mint, mise, troubleshooting: docs/install.md.