# Ollama — Run LLMs Locally with One Command

> Run Llama 3, Mistral, Gemma, Phi, and 100+ open-source LLMs locally with a single command. OpenAI-compatible API for seamless integration with AI tools. 120,000+ GitHub stars.

## Quick Use

```bash
# Install (macOS)
brew install ollama

# Install (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Run a model
ollama run llama3.1

# Chat starts immediately — no API keys, no cloud, no cost
```

Use as an OpenAI-compatible API:

```bash
curl http://localhost:11434/v1/chat/completions \
  -d '{"model":"llama3.1","messages":[{"role":"user","content":"Hello"}]}'
```

---

## Intro

Ollama is an open-source tool (120,000+ GitHub stars) that lets you run Llama 3, Mistral, Gemma, Phi, and 100+ large language models locally with a single command. No API keys, no cloud costs, no data leaving your machine. It provides an OpenAI-compatible API at `localhost:11434`, making it a drop-in local replacement for cloud LLMs in any tool that supports OpenAI.

Best for developers who want privacy, no network latency, and unlimited free inference. Works with: Claude Code (via LiteLLM), Cursor, Continue, LangChain, any OpenAI-compatible client. Setup time: under 2 minutes.
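The curl call in Quick Use above can also be made from plain Python with only the standard library. This is a minimal sketch, assuming an Ollama server is already listening on `localhost:11434`; `build_chat_request` and `chat` are illustrative helper names, not part of Ollama itself.

```python
import json
from urllib import request

# OpenAI-compatible endpoint exposed by a locally running Ollama server
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def chat(model: str, user_message: str) -> str:
    """POST the payload to the local server and return the assistant's reply."""
    body = json.dumps(build_chat_request(model, user_message)).encode()
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


# Live usage (requires `ollama serve` running and the model pulled):
#   print(chat("llama3.1", "Hello"))
```

Because the endpoint speaks the OpenAI wire format, the same payload works unchanged against any other OpenAI-compatible server.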
---

## Popular Models

| Model | Size | Best For |
|-------|------|----------|
| `llama3.1` | 8B / 70B | General purpose, coding |
| `mistral` | 7B | Fast, multilingual |
| `codestral` | 22B | Code generation |
| `gemma2` | 9B / 27B | Compact, efficient |
| `phi3` | 3.8B / 14B | Small-device deployment |
| `qwen2.5` | 7B / 72B | Multilingual, math |
| `deepseek-coder` | 6.7B / 33B | Code completion |
| `llava` | 7B / 13B | Vision + text |

```bash
ollama pull llama3.1:70b   # Download the 70B model
ollama pull codestral      # Code-specialized model
ollama list                # See installed models
```

### OpenAI-Compatible API

Point any OpenAI SDK client at `http://localhost:11434/v1`:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Write a Python function"}]
)
```

### Use with AI Tools

**Continue (VS Code):**

```json
{"models": [{"title": "Llama", "provider": "ollama", "model": "llama3.1"}]}
```

**LiteLLM proxy:**

```bash
litellm --model ollama/llama3.1
```

**LangChain:**

```python
from langchain_community.llms import Ollama

llm = Ollama(model="llama3.1")
```

### Custom Modelfiles

Create custom models with system prompts and parameters:

```dockerfile
FROM llama3.1
SYSTEM "You are a senior Python developer. Always write type-hinted, well-tested code."
PARAMETER temperature 0.3
PARAMETER num_ctx 8192
```

```bash
ollama create my-coder -f Modelfile
ollama run my-coder
```

### Key Stats

- 120,000+ GitHub stars
- 100+ available models
- OpenAI-compatible API
- Runs on macOS, Linux, Windows
- GPU acceleration (NVIDIA, Apple Silicon)

### FAQ

**Q: What is Ollama?**
A: Ollama is a tool that runs open-source LLMs locally with one command, providing an OpenAI-compatible API for seamless integration with AI development tools.

**Q: Is Ollama free?**
A: Yes, completely free and open-source under the MIT license. No API keys or usage fees.
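The Custom Modelfiles workflow above can also be scripted. This is a minimal sketch, assuming the `ollama` CLI is on the PATH and the base model has been pulled; `render_modelfile` and `create_model` are hypothetical helper names chosen for illustration.

```python
import subprocess
from pathlib import Path


def render_modelfile(base: str, system: str, **params) -> str:
    """Render Modelfile text: a FROM line, a SYSTEM prompt, then PARAMETER lines."""
    lines = [f"FROM {base}", f'SYSTEM "{system}"']
    lines += [f"PARAMETER {key} {value}" for key, value in params.items()]
    return "\n".join(lines) + "\n"


def create_model(name: str, modelfile_text: str, path: str = "Modelfile") -> None:
    """Write the Modelfile to disk and register it via `ollama create`."""
    Path(path).write_text(modelfile_text)
    subprocess.run(["ollama", "create", name, "-f", path], check=True)


# Live usage (requires the ollama CLI installed and the base model pulled):
#   text = render_modelfile("llama3.1",
#                           "You are a senior Python developer.",
#                           temperature=0.3, num_ctx=8192)
#   create_model("my-coder", text)
```

Keeping the Modelfile generation as a pure function makes it easy to version custom prompts and parameters alongside the rest of a project's configuration.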
**Q: What hardware do I need?**
A: Roughly 8GB of RAM for 7B models, 16GB for 13B, and 64GB for 70B. Apple Silicon and NVIDIA GPUs are used automatically for acceleration.

---

## Source & Thanks

> Created by [Ollama](https://github.com/ollama). Licensed under MIT.
>
> [ollama](https://github.com/ollama/ollama) — ⭐ 120,000+

Thanks to the Ollama team for making local LLM inference effortless.

---

Source: https://tokrepo.com/en/workflows/98b30827-ac0a-4067-8f64-7360f2a13995
Author: AI Open Source