# LocalAI — Run Any AI Model Locally, No GPU

> LocalAI is an open-source AI engine running LLMs, vision, voice, and image models locally. 44.6K+ GitHub stars. OpenAI/Anthropic-compatible API, 35+ backends, MCP, agents. MIT licensed.

## Install

Save in your project root:

## Quick Use

```bash
# Run with Docker
docker run -p 8080:8080 localai/localai

# Or native install (macOS)
brew install localai

# Run a model from the gallery
local-ai run llama-3.2-1b-instruct:q4_k_m

# Query via OpenAI-compatible API
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.2-1b-instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

---

## Intro

LocalAI is an open-source AI engine that lets you run LLMs, vision, voice, image, and video models locally without requiring a GPU. With 44,600+ GitHub stars and an MIT license, it provides OpenAI- and Anthropic-compatible APIs and supports 35+ backends across NVIDIA, AMD, Intel, Apple Silicon, Vulkan, and CPU-only hardware. LocalAI includes enterprise features such as authentication, user quotas, RBAC, autonomous AI agents with tool use, RAG, and MCP integration. Models can be pulled from HuggingFace, Ollama registries, and OCI images.
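Because the API mirrors OpenAI's, existing clients and SDKs can be pointed at the local server simply by overriding the base URL. A minimal sketch, assuming a LocalAI instance on the default port — the environment variable names follow the standard OpenAI client conventions, and the API key is a placeholder that is only checked if authentication is enabled:

```bash
# Point any OpenAI-compatible client at the local LocalAI server
export OPENAI_BASE_URL="http://localhost:8080/v1"
export OPENAI_API_KEY="sk-local"   # placeholder; only required when auth is enabled

# List the models the server currently has installed
curl "$OPENAI_BASE_URL/models"
```

With these variables set, most OpenAI SDK-based tools will talk to LocalAI instead of the hosted API without code changes.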
**Best for**: Teams wanting a self-hosted AI platform with multi-model support and enterprise features

**Works with**: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf

**Hardware**: NVIDIA, AMD, Intel, Apple Silicon, Vulkan, CPU-only

---

## Key Features

- **Multi-model**: LLMs, vision, voice, image, and video in one platform
- **No GPU required**: Runs on CPU, with optional GPU acceleration
- **OpenAI + Anthropic API**: Drop-in replacement for both providers
- **35+ backends**: llama.cpp, Transformers, diffusers, whisper, and more
- **AI agents**: Autonomous agents with tool use, RAG, and MCP
- **Enterprise**: Auth, quotas, RBAC, audit logging
- **Model gallery**: One-command install from HuggingFace, Ollama, OCI

---

### FAQ

**Q: What is LocalAI?**
A: LocalAI is a local AI engine with 44.6K+ stars running LLMs, vision, voice, and image models. OpenAI/Anthropic-compatible API, 35+ backends, no GPU required. MIT licensed.

**Q: How do I install LocalAI?**
A: `docker run -p 8080:8080 localai/localai` or `brew install localai` on macOS. Then `local-ai run `.

---

## Source & Thanks

> Created by [Ettore Di Giacinto](https://github.com/mudler). Licensed under MIT.
> [mudler/LocalAI](https://github.com/mudler/LocalAI) — 44,600+ GitHub stars

---

Source: https://tokrepo.com/en/workflows/34c0d47e-fb4c-442b-819c-9b6a5f921e13
Author: AI Open Source