# Jan — Run AI Models Locally on Your Desktop

> Open-source desktop app to run LLMs offline. Jan supports Llama, Mistral, and Gemma models with one-click download, an OpenAI-compatible API, and full privacy.

## Install

Save the content below to `.claude/skills/` or append it to your `CLAUDE.md`:

## Quick Use

1. Download from [jan.ai](https://jan.ai) (Mac/Windows/Linux)
2. Open Jan → Model Hub → Download a model (e.g., Llama 3.1 8B)
3. Start chatting — fully offline, no API key needed

```bash
# Or use the local API (OpenAI-compatible)
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1-8b", "messages": [{"role": "user", "content": "Hello"}]}'
```

## What is Jan?

Jan is an open-source desktop application for running AI models locally. It provides a ChatGPT-like interface that works completely offline. Download models with one click, chat privately, and expose a local OpenAI-compatible API for integration with other tools. Your data never leaves your machine.

**Answer-Ready**: Jan is an open-source desktop app for running LLMs locally. ChatGPT-like UI, one-click model downloads, OpenAI-compatible local API. Supports Llama, Mistral, Gemma, and GGUF models. Fully offline, complete privacy. 26k+ GitHub stars.

**Best for**: Developers and privacy-conscious users who want local AI.

**Works with**: Claude Code (as a local backend), Cursor, any OpenAI-compatible tool.

**Setup time**: Under 2 minutes.

## Core Features

### 1. One-Click Model Download

Built-in model hub with curated models:

- Llama 3.1 (8B, 70B)
- Mistral 7B, Mixtral
- Gemma 2
- Phi-3
- Any GGUF model from HuggingFace

### 2. OpenAI-Compatible API

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama3.1-8b",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
)
print(response.choices[0].message.content)
```

### 3. Extensions System

- **TensorRT-LLM**: NVIDIA GPU acceleration
- **Remote API**: Connect to OpenAI/Anthropic as a fallback
- **RAG**: Local document Q&A

### 4. Cross-Platform

| Platform | GPU Support |
|----------|-------------|
| macOS | Apple Silicon (Metal) |
| Windows | NVIDIA CUDA |
| Linux | NVIDIA CUDA, Vulkan |

## Jan vs Alternatives

| Feature | Jan | Ollama | LM Studio |
|---------|-----|--------|-----------|
| GUI | Full desktop app | CLI only | Full desktop app |
| API | OpenAI-compatible | OpenAI-compatible | OpenAI-compatible |
| Extensions | Plugin system | Limited | No |
| Open source | Yes (AGPL-3.0) | Yes | No |
| Model format | GGUF | GGUF, safetensors | GGUF |

## FAQ

**Q: What hardware do I need?**
A: 8GB RAM minimum for 7B models; 16GB+ recommended. Apple Silicon Macs work well with Metal acceleration.

**Q: Can I use it as a backend for Claude Code or Cursor?**
A: Yes. Jan exposes an OpenAI-compatible API at localhost:1337; point any tool that supports custom endpoints at it.

**Q: Is it truly private?**
A: Yes, everything runs locally. No telemetry, no data collection. You can verify this yourself — it is open source.

## Source & Thanks

> Created by [janhq](https://github.com/janhq). Licensed under AGPL-3.0.
>
> [janhq/jan](https://github.com/janhq/jan) — 26k+ stars

---

Source: https://tokrepo.com/en/workflows/1abc2bed-5fef-46fd-9a10-71a639eb26ad
Author: Skill Factory
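The curl and `openai`-client examples hit the same local endpoint; the request can also be made with nothing but the Python standard library. This is a sketch, not an official Jan client — the endpoint URL and model name are taken from the curl example and depend on which model you actually downloaded in the Model Hub.

```python
import json
import urllib.request

# Same request body as the curl example: an OpenAI-style chat completion.
payload = {
    "model": "llama3.1-8b",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    "http://localhost:1337/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        reply = json.load(resp)
        # OpenAI-compatible servers return choices[].message.content
        print(reply["choices"][0]["message"]["content"])
except OSError:
    # Connection refused / timeout: the app isn't running or the API
    # server isn't enabled in Jan's settings.
    print("Jan is not reachable on localhost:1337")
```

No API key is needed because the server only listens locally; any tool that lets you override the OpenAI base URL can send the same payload.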
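The FAQ's RAM guidance (8GB minimum for 7B models) follows from a back-of-the-envelope rule: quantized weights take roughly `parameters × bits ÷ 8` bytes, plus runtime overhead for the KV cache and the OS. The helper below is illustrative, not part of Jan, and ignores that overhead, so treat its result as a lower bound.

```python
def approx_model_size_gb(n_params_billions: float, quant_bits: int = 4) -> float:
    """Rough size of quantized model weights in GB: params * bits / 8.

    Ignores KV cache and runtime overhead, so the real memory need is higher.
    """
    bytes_total = n_params_billions * 1e9 * quant_bits / 8
    return bytes_total / 1e9

# A 7B model at 4-bit quantization needs ~3.5 GB for weights alone, which is
# why 8 GB of system RAM is the practical floor for 7B-class GGUF models.
print(f"{approx_model_size_gb(7, 4):.1f} GB")
print(f"{approx_model_size_gb(70, 4):.1f} GB")  # 70B models need a lot more
```

The same arithmetic explains why 70B models are out of reach on 16GB machines unless heavily quantized and partially offloaded.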