Jan — Open-source ChatGPT Alternative That Runs Fully Offline

Jan is an MIT-licensed desktop app that runs large language models locally, ChatGPT-style. It includes a built-in model hub, assistants, plugins, and an OpenAI-compatible local server — an open-source alternative to LM Studio.

Why Choose It

Jan’s thesis: "LM Studio’s experience, but open source and privacy-first". It’s a Tauri-based desktop app for Windows, macOS, and Linux with a familiar ChatGPT-style UI, a built-in model hub, and a local server tab. Users who value open source or who distrust closed-source inference apps have a clear alternative.

Under the hood Jan uses llama.cpp (via a bundled cortex engine) and can also connect to remote endpoints — OpenAI, Anthropic, Groq, or any compatible server. You can run a local model, a remote model, or both simultaneously and switch between them in the chat dropdown. That hybrid mode is subtly useful: fast cloud model for quick questions, local model for sensitive ones.
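That hybrid routing can be sketched in a few lines. This is a minimal sketch, not Jan's internal logic: it assumes Jan's local server on its default http://localhost:1337/v1, and the remote provider and both model names are placeholder assumptions.

```python
# Sketch of hybrid local/remote routing. Endpoint URLs and model names
# are assumptions for illustration, not values read from Jan's config.

LOCAL = {"base_url": "http://localhost:1337/v1", "model": "llama3.2-3b-instruct"}
REMOTE = {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"}

def pick_endpoint(prompt: str, sensitive: bool) -> dict:
    """Route sensitive prompts to the local model, everything else to the cloud."""
    return LOCAL if sensitive else REMOTE

print(pick_endpoint("summarize my medical notes", sensitive=True)["model"])
```

In the app this choice is a dropdown per chat; the sketch just shows why having both endpoint types behind one interface is convenient.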

Trade-offs vs LM Studio: Jan’s GUI is functional but feels younger — it lacks some of LM Studio’s niceties (MLX support, an explicit quantization picker, a slicker model browser). Where Jan wins: MIT license, explicit no-telemetry stance, extension SDK, and the cortex engine’s headless CLI for servers.

Quick Start — Install, Download, Chat

Jan ships both the desktop app and a headless engine (cortex) that you can run on Linux servers without the UI. The desktop app calls cortex locally; on a server you can skip the GUI and just run cortex as a system service exposing an OpenAI-compatible API.

# 1. Download the installer from https://jan.ai
#    macOS .dmg, Windows .exe, Linux .AppImage / .deb

# 2. Open Jan:
#    - "Hub" tab → search "Llama 3.2 3B Instruct Q4" → Download
#    - "Chat" tab → select the model → chat offline

# 3. Start the local API server (Jan settings → Local API Server → Start)
#    Listens on http://localhost:1337/v1 with OpenAI shape.

# 4. Point any OpenAI SDK at it
python - <<'PY'
from openai import OpenAI
c = OpenAI(base_url="http://localhost:1337/v1", api_key="jan")
r = c.chat.completions.create(
    model="llama3.2-3b-instruct",
    messages=[{"role":"user","content":"Give me a 2-sentence Jan summary."}],
)
print(r.choices[0].message.content)
PY

# 5. Headless: use Cortex (Jan's engine) on a server without the GUI
curl -s https://raw.githubusercontent.com/janhq/cortex/dev/engine/templates/linux/install.sh | sudo bash
cortex models pull llama3.2:3b-instruct-q4
cortex run llama3.2:3b-instruct-q4   # server on :3928
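If you would rather not install the openai package, plain urllib works against the same endpoint, since the API is OpenAI-shaped. A sketch assuming the default port 1337 and the model ID from step 2 (both assumptions, adjust to your setup):

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    # Same JSON shape the OpenAI SDK sends to /chat/completions.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(base_url: str, model: str, prompt: str) -> str:
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json", "Authorization": "Bearer jan"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# ask("http://localhost:1337/v1", "llama3.2-3b-instruct", "Hello")  # needs the server running
payload = build_payload("llama3.2-3b-instruct", "Hello")
print(payload["messages"][0]["role"])
```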

Core Capabilities

Open-source desktop app

Tauri + React build, MIT licensed. Review the code, fork, self-host. Contrast with LM Studio’s closed-source binary.

Built-in model hub

Curated models with recommended quantizations. One-click download. Covers Llama, Qwen, Mistral, Gemma, DeepSeek, Phi families.

Remote + local endpoints

Connect to OpenAI, Anthropic, Groq, OpenRouter, or any OpenAI-compatible server alongside local models. Switch per-chat.

Assistants + knowledge

Persona-style assistants with system prompts and attached knowledge (PDFs, URLs). Local RAG without extra infra.
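The local-RAG idea is simple enough to show in miniature. This is a toy illustration, not Jan's actual retrieval code: split attached knowledge into chunks, score each by word overlap with the question, and prepend the best chunks to the prompt.

```python
# Toy local-RAG illustration (not Jan's implementation): chunk the attached
# knowledge, rank chunks by word overlap with the question, keep the top k.

def chunk(text: str, size: int = 50) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    q = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))[:k]

doc = "Jan stores chat data on device. The local server speaks the OpenAI API."
ctx = top_chunks("where is chat data stored", chunk(doc, size=8))
print(ctx[0])
```

Real implementations use embeddings rather than keyword overlap, but the pipeline shape — chunk, retrieve, stuff into context — is the same.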

Extensions / plugins

Extension SDK for adding tools (web search, code execution, custom integrations). Ecosystem is growing but smaller than text-generation-webui’s.

Privacy-first

No telemetry by default, no account required, all data stays on-device unless you explicitly add a remote endpoint.

Comparison

            License             UX Polish        Backend                   Best Fit
Jan         MIT (open)          Good, improving  Cortex (llama.cpp-based)  OSS-purist desktop users
LM Studio   Closed-source free  Excellent        llama.cpp + MLX           Desktop users who prefer polish over license
Ollama      MIT                 CLI-first        llama.cpp                 Developers
GPT4All     MIT                 Very good        llama.cpp (modified)      Offline-first CPU users

Real-World Use Cases

01. Open-source ChatGPT replacement

Users who want a familiar ChatGPT UX but reject closed-source desktop inference apps. Jan checks both boxes.

02. Mixed local + cloud chat

Route sensitive questions to a local model, casual ones to Claude or GPT — all in one app. Jan’s endpoint switcher makes this painless.

03. Headless server with cortex

Run cortex on a Linux server without the GUI to expose a local OpenAI-compatible API — a server-side alternative to Ollama with similar simplicity, and fully open source.

Pricing & Licensing

Jan: MIT open source, free for commercial use. Source on GitHub.

Cortex engine: also MIT. Separate binary for headless / server use. No paid tier.

Hardware cost: same as any llama.cpp-based tool — scales with model size and quantization. 8GB RAM minimum for 3B-7B models.
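The "scales with model size and quantization" rule of thumb can be made concrete. A back-of-envelope sketch — weights only, with KV cache and runtime lumped into a rough fixed overhead; these are estimates for planning, not measurements:

```python
def gguf_ram_gb(params_billion: float, bits_per_weight: float,
                overhead_gb: float = 1.5) -> float:
    """Rough RAM estimate: weights (params * bits / 8) plus a fixed
    allowance for KV cache and runtime. Estimate only."""
    weights_gb = params_billion * bits_per_weight / 8
    return round(weights_gb + overhead_gb, 1)

print(gguf_ram_gb(3, 4.5))   # ~3B model at a Q4-ish quantization
print(gguf_ram_gb(7, 4.5))   # ~7B model
```

Both figures land comfortably under 8 GB, which is why that is the usual floor quoted for 3B-7B models.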

FAQ

Jan vs LM Studio?

Feature parity is close. Jan is MIT open source; LM Studio is closed-source free. LM Studio has slightly better UX and MLX support on Apple Silicon. Jan has a published extension SDK and explicit privacy stance. Pick based on which axis matters to you.

Jan vs Ollama?

Jan is GUI-first with a chat UI; Ollama is CLI/API-first. Both expose OpenAI-compatible endpoints. Many users install both: Jan for interactive chat, Ollama for tool integration.

Can Jan run without internet?

Yes — after the initial app install and one-time model download. No telemetry or required phone-home. Explicit airgap mode available in settings for sensitive environments.

Does Jan support MLX on Apple Silicon?

Cortex is adding MLX-like backends; in 2026 Jan primarily uses llama.cpp with Metal. If you want the absolute best speed on Apple Silicon, LM Studio’s MLX or raw MLX gives an edge.

How do I use remote models from Jan?

Settings → Model Providers → add OpenAI (API key), Anthropic (API key), Groq, OpenRouter, or a custom OpenAI-compatible endpoint. Remote models then appear in the model picker alongside local ones.
