Scripts · Mar 29, 2026 · 2 min read

Ollama — Run LLMs Locally

Run large language models locally on your machine. Supports Llama 3, Mistral, Gemma, Phi, and dozens more. One-command install, OpenAI-compatible API.

Introduction

Ollama makes it dead simple to run open-source LLMs locally. Download and run Llama 3, Mistral, Gemma, Phi, CodeLlama, and 100+ other models with a single command. Includes an OpenAI-compatible API for easy integration.

Best for: Local AI development, privacy-sensitive workloads, offline coding, model experimentation
Works with: Any tool supporting the OpenAI API format
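The one-command install and first run look like this; a minimal sketch using the official install script (Linux/macOS — Windows uses a downloadable installer from ollama.com):

```shell
# Install Ollama via the official script (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Download a model and start an interactive chat in one command
ollama run llama3.1:8b
```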


Popular Models

Model              Size    Use Case
llama3.1:8b        4.7 GB  General purpose
codellama:13b      7.4 GB  Code generation
mistral:7b         4.1 GB  Fast, capable
gemma2:9b          5.4 GB  Google's open model
phi3:mini          2.3 GB  Small & efficient
deepseek-coder-v2  8.9 GB  Code specialist
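Any model in the table can be fetched ahead of time and inspected locally. A short sketch, assuming Ollama is installed and the server is running:

```shell
# Pre-download a model from the table without starting a chat
ollama pull codellama:13b

# List locally available models with their sizes
ollama list
```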

API

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Hello!"}]
}'

OpenAI-compatible endpoint at http://localhost:11434/v1/.
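As a sketch of that compatibility, the same chat can be sent to the OpenAI-style path (assumes a local server on the default port; the request body mirrors OpenAI's chat completions format):

```shell
# Same request against the OpenAI-compatible endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```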


FAQ

Q: What is Ollama? A: Ollama is a tool for running open-source large language models locally on your machine. It supports Llama 3, Mistral, Gemma, Phi, and dozens more, with a one-command install and an OpenAI-compatible API.

Q: How do I install Ollama? A: On Linux and macOS, run the official install script (`curl -fsSL https://ollama.com/install.sh | sh`); on Windows, download the installer from ollama.com. Most setups take under 2 minutes.


Source and acknowledgments

Created by Ollama. Licensed under MIT. ollama/ollama — 120K+ GitHub stars
