# llamafile — Single-File LLM, No Install Needed

> llamafile distributes LLMs as single-file executables that run on any OS. 23.9K+ GitHub stars. No installation, cross-platform, built on llama.cpp + Cosmopolitan. Apache 2.0.

## Quick Use

```bash
# Download and run (no install!)
curl -LO https://huggingface.co/mozilla-ai/llamafile_0.10.0/resolve/main/Qwen3.5-0.8B-Q8_0.llamafile
chmod +x Qwen3.5-0.8B-Q8_0.llamafile
./Qwen3.5-0.8B-Q8_0.llamafile
# Opens a web UI at http://localhost:8080
# Also includes whisperfile for speech-to-text
```

Works on macOS, Linux, Windows, FreeBSD — same file, no dependencies.

---

## Intro

llamafile distributes and runs large language models as single-file executables that work across multiple operating systems and CPU architectures with zero installation. Licensed under Apache 2.0 and with 23,900+ GitHub stars, it combines llama.cpp with Cosmopolitan Libc to create portable, installation-free applications. Download one file, make it executable, run it — instant AI on any platform. The project also ships whisperfile, which packages speech-to-text in the same single-file format.

**Best for**: Anyone who wants the absolute simplest way to run an LLM — one file, no setup
**Works with**: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf
**Platforms**: macOS, Linux, Windows, FreeBSD (same binary)

---

## Key Features

- **Single file**: Entire LLM + runtime in one executable
- **Zero install**: No Python, no Docker, no dependencies
- **Cross-platform**: Same file runs on macOS, Linux, Windows, FreeBSD
- **Built-in web UI**: Chat interface served at localhost:8080
- **whisperfile**: Speech-to-text in the same single-file format
- **Built on llama.cpp**: Full model compatibility and performance
- **Cosmopolitan Libc**: Universal-binary technology for portability

---

### FAQ

**Q: What is llamafile?**
A: llamafile packages LLMs as single-file executables (23.9K+ stars). No installation — download, `chmod +x`, run. Works on macOS/Linux/Windows/FreeBSD. Apache 2.0.

**Q: How do I use llamafile?**
A: Download a .llamafile from Hugging Face, `chmod +x` it, and run it. A web UI opens at localhost:8080. No other setup needed.

---

## Source & Thanks

> Created by [Mozilla](https://github.com/Mozilla-Ocho). Licensed under Apache 2.0.
> [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile) — 23,900+ GitHub stars

---

Source: https://tokrepo.com/en/workflows/f30a667e-1b6a-4140-a92f-b5027f16c8bc
Author: AI Open Source
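Beyond the web UI, a running llamafile also serves an OpenAI-compatible HTTP API. A minimal client sketch, assuming the default port 8080; the helper name `build_chat_request` is ours, and a local server accepts an arbitrary placeholder in the `model` field:

```python
import json
from urllib import request

def build_chat_request(prompt, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for a local llamafile server."""
    payload = {
        "model": "local",  # placeholder; the local server serves whatever model it was built with
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# To actually send it, a llamafile server must be running:
# with request.urlopen(build_chat_request("Hello!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries can also be pointed at `http://localhost:8080/v1` instead of hand-rolling requests.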