Scripts · March 29, 2026 · 1 min read

Ollama — Run LLMs Locally

Run large language models locally on your machine. Supports Llama 3, Mistral, Gemma, Phi, and dozens more. One-command install, OpenAI-compatible API.

Introduction

Ollama makes it dead simple to run open-source LLMs locally. Download and run Llama 3, Mistral, Gemma, Phi, CodeLlama, and 100+ other models with a single command. Includes an OpenAI-compatible API for easy integration.

Best for: local AI development, privacy-sensitive workloads, offline coding, model experimentation
Works with: any tool that supports the OpenAI API format


Popular Models

Model              Size    Use Case
llama3.1:8b        4.7 GB  General purpose
codellama:13b      7.4 GB  Code generation
mistral:7b         4.1 GB  Fast, capable
gemma2:9b          5.4 GB  Google's open model
phi3:mini          2.3 GB  Small & efficient
deepseek-coder-v2  8.9 GB  Code specialist
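The sizes above are download sizes; as a rough rule of thumb, a model needs at least its weight size plus some headroom free in RAM to run comfortably. A minimal sketch (hypothetical helper, not part of Ollama, using the sizes from the table above and an assumed 2 GB headroom) for picking the largest listed model that fits a machine's memory:

```python
# Download sizes in GB for the models listed above.
MODELS = {
    "llama3.1:8b": 4.7,
    "codellama:13b": 7.4,
    "mistral:7b": 4.1,
    "gemma2:9b": 5.4,
    "phi3:mini": 2.3,
    "deepseek-coder-v2": 8.9,
}

def largest_fitting_model(ram_gb, headroom_gb=2.0):
    """Return the largest listed model whose weights fit in ram_gb minus headroom,
    or None if nothing fits. Headroom is an assumed rule of thumb, not an Ollama rule."""
    budget = ram_gb - headroom_gb
    fitting = {name: size for name, size in MODELS.items() if size <= budget}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(largest_fitting_model(8))   # pick for an 8 GB machine
print(largest_fitting_model(16))  # pick for a 16 GB machine
```

With 8 GB of RAM and 2 GB headroom, the 5.4 GB gemma2:9b is the largest fit; with 16 GB, all listed models fit.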

API

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Hello!"}]
}'

OpenAI-compatible endpoint at http://localhost:11434/v1/.
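By default, /api/chat streams its reply as newline-delimited JSON: one object per content chunk, each carrying a partial message, with a final object whose done field is true. A minimal sketch of reassembling the full reply from such a stream (the sample lines below are illustrative stand-ins, not captured server output):

```python
import json

def collect_reply(ndjson_lines):
    """Concatenate message.content across streamed /api/chat chunks,
    stopping at the chunk marked done."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Illustrative sample of a streamed response.
sample = [
    '{"model":"llama3.1","message":{"role":"assistant","content":"Hello"},"done":false}',
    '{"model":"llama3.1","message":{"role":"assistant","content":"!"},"done":false}',
    '{"model":"llama3.1","message":{"role":"assistant","content":""},"done":true}',
]
print(collect_reply(sample))  # Hello!
```

To get a single JSON object instead of a stream, pass "stream": false in the request body.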


FAQ

Q: What is Ollama? A: Run large language models locally on your machine. Supports Llama 3, Mistral, Gemma, Phi, and dozens more. One-command install, OpenAI-compatible API.

Q: How do I install Ollama? A: Check the Quick Use section above for step-by-step installation instructions. Most assets can be set up in under 2 minutes.


Source & Credits

Created by Ollama. Licensed under MIT. ollama/ollama — 120K+ GitHub stars
