Configs · April 2, 2026 · 1 min read
Ollama — Run LLMs Locally with One Command
Get up and running with Llama, Mistral, Gemma, DeepSeek and 100+ models locally. Simple CLI, OpenAI-compatible API. 95K+ stars.
TokRepo Picks · Community
Quick Start
Try it first, then decide whether to dig deeper.
This section should tell both users and agents what to copy first, what to install, and where it ends up.
```bash
# Install (macOS/Linux)
curl -fsSL https://ollama.com/install.sh | sh
# Run a model
ollama run llama3.2
ollama run deepseek-r1
ollama run gemma2
```
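Before calling the API, it can help to confirm the server is actually listening. A minimal sketch, assuming the default port 11434 (the exact reply text of the root endpoint may vary between versions):

```shell
# Check whether a local Ollama server is reachable on the default port.
# (`ollama run` starts the server automatically; `ollama serve` runs it standalone.)
if curl -sf http://localhost:11434/ >/dev/null 2>&1; then
  echo "ollama server is up"
else
  echo "ollama server is not reachable on :11434"
fi
```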
```bash
# Use the OpenAI-compatible API
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Hello!"}]}'
```
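The response comes back as standard OpenAI-style JSON, so the assistant text lives at `choices[0].message.content`. One way to pull it out with no extra dependencies (the response below is a canned sample for illustration; a real reply will differ):

```shell
# Canned sample of an OpenAI-compatible chat response, shaped like what
# the curl call above returns (e.g. saved with `> response.json`).
cat > response.json <<'EOF'
{"choices":[{"message":{"role":"assistant","content":"Hello! How can I help?"}}]}
EOF

# Extract the assistant message using only python3's standard library.
python3 -c 'import json; print(json.load(open("response.json"))["choices"][0]["message"]["content"])'
```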
Also available on Windows (installer) and Docker: `docker run -d -p 11434:11434 ollama/ollama`
🙏 Sources & Acknowledgements
- GitHub: [ollama/ollama](https://github.com/ollama/ollama)
- License: MIT
- Stars: 95,000+
- Maintainer: Ollama team
Thanks to the Ollama team for democratizing local LLM access, making it as easy to run a language model as it is to run a Docker container.
Related Assets
LaVague — Natural Language Web Automation
Give a text objective, LaVague drives the browser to accomplish it. Large Action Model framework for web agents. 6.3K+ stars.
TokRepo Picks
Trae Agent — AI Coding Agent by ByteDance
Open-source autonomous coding agent for software engineering tasks. Multi-provider LLM support. By ByteDance. 11K+ stars.
TokRepo Picks
bolt.diy — AI Full-Stack App Builder, Any LLM
Community fork of Bolt.new. Prompt, edit, and deploy full-stack web apps with any LLM provider. 19K+ GitHub stars.
TokRepo Picks