Configs · April 1, 2026 · 1 min read

llamafile — Single-File LLM, No Install Needed

llamafile distributes LLMs as single-file executables that run on any OS. 23.9K+ GitHub stars. No installation, cross-platform, built on llama.cpp + Cosmopolitan. Apache 2.0.

TokRepo Picks · Community
Quick Start

Try it first, then decide whether to dig deeper.

This section tells both users and agents what to copy first, what to install, and where it ends up.

# Download and run (no install!)
curl -LO https://huggingface.co/mozilla-ai/llamafile_0.10.0/resolve/main/Qwen3.5-0.8B-Q8_0.llamafile
chmod +x Qwen3.5-0.8B-Q8_0.llamafile
./Qwen3.5-0.8B-Q8_0.llamafile

# Opens a web UI at http://localhost:8080
# Also includes whisperfile for speech-to-text

Works on macOS, Linux, Windows, FreeBSD — same file, no dependencies.
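Beyond the web UI, the embedded server also speaks an OpenAI-compatible HTTP API, so you can script against it instead of clicking around. A minimal sketch, assuming a llamafile is already running on port 8080 and exposes the usual `/v1/chat/completions` path; the `"model"` value is a placeholder, since llamafile serves whichever model is embedded in the executable:

```shell
# Build an OpenAI-style chat request and send it to the local server.
PAYLOAD='{"model": "local", "messages": [{"role": "user", "content": "Say hello"}]}'

curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
```

The reply comes back as standard OpenAI-style JSON, so existing client libraries pointed at `http://localhost:8080/v1` should work unchanged.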


Introduction

llamafile distributes and runs large language models as single-file executables that work across multiple operating systems and CPU architectures with zero installation. With 23,900+ GitHub stars and an Apache 2.0 license, it combines llama.cpp with Cosmopolitan Libc to create portable, installation-free executables. Download one file, make it executable, run it: instant AI on any platform. It also includes whisperfile for speech-to-text in the same single-file format.

Best for: Anyone who wants the absolute simplest way to run an LLM — one file, no setup
Works with: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf
Platforms: macOS, Linux, Windows, FreeBSD (same binary)


Key Features

  • Single file: Entire LLM + runtime in one executable
  • Zero install: No Python, no Docker, no dependencies
  • Cross-platform: Same file runs on macOS, Linux, Windows, FreeBSD
  • Built-in web UI: Opens localhost:8080 with chat interface
  • whisperfile: Speech-to-text in the same single-file format
  • Built on llama.cpp: Full model compatibility and performance
  • Cosmopolitan Libc: Universal binary technology for portability

FAQ

Q: What is llamafile? A: llamafile packages LLMs as single-file executables with 23.9K+ stars. No installation — download, chmod +x, run. Works on macOS/Linux/Windows/FreeBSD. Apache 2.0.

Q: How do I use llamafile? A: Download a .llamafile from HuggingFace, chmod +x it, and run it. A web UI opens at localhost:8080. No other setup needed.
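For scripted or headless use you generally don't want the browser popping up. A sketch of a headless launch, reusing the file name from the Quick Start above; the flag names are assumptions based on llamafile inheriting llama.cpp's server options, so verify them with `--help` on your binary:

```shell
# Start the server without opening a browser, on a custom port
# (flag names assumed; check ./<file>.llamafile --help)
./Qwen3.5-0.8B-Q8_0.llamafile --server --nobrowser --port 8081
```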


Source & Acknowledgments

Created by Mozilla. Licensed under Apache 2.0. Mozilla-Ocho/llamafile — 23,900+ GitHub stars

Related Assets