Configs · Apr 2, 2026 · 3 min read
Ollama — Run LLMs Locally with One Command
Get up and running with Llama, Mistral, Gemma, DeepSeek and 100+ models locally. Simple CLI, OpenAI-compatible API. 95K+ stars.
TokRepo Picks · Community
Quick Use
Use it first, then decide how deep to go
Copy, install, and run: the commands below get a model answering locally in under a minute.
```bash
# Install (macOS/Linux)
curl -fsSL https://ollama.com/install.sh | sh
# Run a model
ollama run llama3.2
ollama run deepseek-r1
ollama run gemma2
```
```bash
# Use the OpenAI-compatible API
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Hello!"}]}'
```
Also available on Windows (installer) and Docker: `docker run -d -p 11434:11434 ollama/ollama`
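The same endpoint can be called from code. A minimal Python sketch, using only the standard library and assuming a local Ollama server on the default port 11434 (the `chat` helper name is our own, not part of any SDK):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_payload(model: str, prompt: str) -> bytes:
    """Build an OpenAI-style chat request body for a single user message."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")

def chat(model: str, prompt: str) -> str:
    """Send one chat turn to the local server and return the reply text."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # OpenAI-compatible responses carry the text under choices[0].message.content
    return body["choices"][0]["message"]["content"]

# Usage, with `ollama run llama3.2` (or `ollama serve`) already running:
#   print(chat("llama3.2", "Hello!"))
```

Because the API mirrors OpenAI's chat-completions shape, existing OpenAI client libraries also work by pointing their base URL at `http://localhost:11434/v1`.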
Source & Thanks
- GitHub: [ollama/ollama](https://github.com/ollama/ollama)
- License: MIT
- Stars: 95,000+
- Maintainer: Ollama team
Thanks to the Ollama team for democratizing local LLM access, making it as easy to run a language model as it is to run a Docker container.
Related Assets
- **LaVague — Natural Language Web Automation**: Give it a text objective and LaVague drives the browser to accomplish it. Large Action Model framework for web agents. 6.3K+ stars.
- **Trae Agent — AI Coding Agent by ByteDance**: Open-source autonomous coding agent for software engineering tasks. Multi-provider LLM support. 11K+ stars.
- **bolt.diy — AI Full-Stack App Builder, Any LLM**: Community fork of Bolt.new. Prompt, edit, and deploy full-stack web apps with any LLM provider. 19K+ GitHub stars.