# Ollama — Run LLMs Locally

> Run large language models locally on your machine. Supports Llama 3, Mistral, Gemma, Phi, and dozens more. One-command install, OpenAI-compatible API.

## Quick Start

```bash
# Install
curl -fsSL https://ollama.com/install.sh | sh

# Run a model
ollama run llama3.1
```

---

## Intro

Ollama makes it dead simple to run open-source LLMs locally. Download and run Llama 3, Mistral, Gemma, Phi, CodeLlama, and 100+ other models with a single command. Includes an OpenAI-compatible API for easy integration.

**Best for**: Local AI development, privacy-sensitive workloads, offline coding, model experimentation

**Works with**: Any tool supporting the OpenAI API format

---

## Popular Models

| Model | Size | Use Case |
|-------|------|----------|
| `llama3.1:8b` | 4.7 GB | General purpose |
| `codellama:13b` | 7.4 GB | Code generation |
| `mistral:7b` | 4.1 GB | Fast, capable |
| `gemma2:9b` | 5.4 GB | Google's open model |
| `phi3:mini` | 2.3 GB | Small & efficient |
| `deepseek-coder-v2` | 8.9 GB | Code specialist |

## API

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Hello!"}]
}'
```

An OpenAI-compatible endpoint is available at `http://localhost:11434/v1/`.

---

### FAQ

**Q: What is Ollama?**
A: A tool for running large language models locally on your machine. It supports Llama 3, Mistral, Gemma, Phi, and dozens more, with a one-command install and an OpenAI-compatible API.

**Q: How do I install Ollama?**
A: See the Quick Start section above. The install script and first model run take only two commands, so setup usually finishes in under two minutes (plus model download time).

## Source & Thanks

> Created by [Ollama](https://github.com/ollama). Licensed under MIT.
> [ollama/ollama](https://github.com/ollama/ollama) — 120K+ GitHub stars

---

Source: https://tokrepo.com/en/workflows/0eefb7ad-754e-4f35-8967-586ebf4c2a6a
Author: Script Depot
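As a usage sketch for the OpenAI-compatible endpoint mentioned above, here is a minimal Python client using only the standard library. It assumes a local Ollama server on the default port 11434 and the standard OpenAI chat-completions response shape; the helper names (`build_chat_request`, `chat`) are illustrative, not part of Ollama itself.

```python
import json
import urllib.error
import urllib.request

# Assumed default: Ollama serves an OpenAI-compatible route under /v1/
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"


def build_chat_request(model, prompt):
    """Build the JSON body for a non-streaming chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def chat(model, prompt, url=OLLAMA_URL, timeout=60):
    """POST a chat request to a local Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        data = json.load(resp)
    # OpenAI-format responses put the reply under choices[0].message.content
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__":
    try:
        print(chat("llama3.1", "Hello!"))
    except urllib.error.URLError:
        print("Ollama server not reachable at localhost:11434 — is it running?")
```

Because the endpoint follows the OpenAI format, the same request body works with any OpenAI-compatible client library by pointing its base URL at `http://localhost:11434/v1/`.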