Scripts · March 29, 2026 · 1 min read

Ollama — Run LLMs Locally

Run large language models locally on your machine. Supports Llama 3, Mistral, Gemma, Phi, and dozens more. One-command install, OpenAI-compatible API.

TokRepo Picks · Community
Quick Start

Try it first, then decide whether to dig deeper.

This section should let both users and agents know what to copy first, what to install, and where it ends up.

# Install
curl -fsSL https://ollama.com/install.sh | sh

# Run a model
ollama run llama3.1

Introduction

Ollama makes it dead simple to run open-source LLMs locally. Download and run Llama 3, Mistral, Gemma, Phi, CodeLlama, and 100+ other models with a single command. Includes an OpenAI-compatible API for easy integration.

Best for: Local AI development, privacy-sensitive workloads, offline coding, model experimentation
Works with: Any tool supporting the OpenAI API format


Popular Models

Model              Size    Use Case
llama3.1:8b        4.7 GB  General purpose
codellama:13b      7.4 GB  Code generation
mistral:7b         4.1 GB  Fast, capable
gemma2:9b          5.4 GB  Google's open model
phi3:mini          2.3 GB  Small & efficient
deepseek-coder-v2  8.9 GB  Code specialist
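The sizes above are download sizes; at runtime the weights plus KV cache need somewhat more memory. A minimal sketch of a fit check using the table's figures (the 20% headroom factor is an illustrative assumption, not an Ollama figure):

```python
# Approximate download sizes in GB, copied from the table above.
MODEL_SIZES_GB = {
    "llama3.1:8b": 4.7,
    "codellama:13b": 7.4,
    "mistral:7b": 4.1,
    "gemma2:9b": 5.4,
    "phi3:mini": 2.3,
    "deepseek-coder-v2": 8.9,
}

def fits_in_memory(model: str, available_gb: float, headroom: float = 1.2) -> bool:
    """Rough check: weights plus ~20% headroom for KV cache and runtime.

    The 1.2 factor is an assumption for illustration only.
    """
    return MODEL_SIZES_GB[model] * headroom <= available_gb

print(fits_in_memory("phi3:mini", 8.0))          # 2.3 * 1.2 fits in 8 GB
print(fits_in_memory("deepseek-coder-v2", 8.0))  # 8.9 * 1.2 does not
```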

API

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Hello!"}]
}'
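The same call from Python using only the standard library — a minimal sketch, assuming `ollama serve` is running on the default port and `llama3.1` has been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port

def build_chat_body(model: str, prompt: str) -> dict:
    """JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one JSON object instead of a stream of chunks
    }

def chat(model: str, prompt: str) -> str:
    """Send a chat request to a locally running Ollama server."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=json.dumps(build_chat_body(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# chat("llama3.1", "Hello!")  # requires a running server, so not called here
```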

OpenAI-compatible endpoint at http://localhost:11434/v1/.
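Because the /v1/ endpoint speaks the OpenAI wire format, existing OpenAI clients work against Ollama by pointing their base URL at it; the server does not check the API key, but most clients require a non-empty one. A stdlib-only sketch of the same request, assuming a running server:

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # OpenAI-compatible endpoint

def build_completion_body(model: str, prompt: str) -> dict:
    """OpenAI-style chat.completions request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def complete(model: str, prompt: str) -> str:
    """Call Ollama through its OpenAI-compatible chat completions route."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_completion_body(model, prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # placeholder key; Ollama ignores it
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

With the official openai Python package, the commonly documented pattern is the same idea: construct the client with base_url="http://localhost:11434/v1" and any placeholder api_key, then call chat.completions.create as usual.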


Sources & Credits

Created by Ollama. Licensed under MIT. ollama/ollama — 120K+ GitHub stars
