Configs · Mar 31, 2026 · 2 min read

Jan — Offline AI Desktop App with Full Privacy

Jan is an open-source ChatGPT alternative that runs LLMs locally with full privacy. 41.4K+ GitHub stars. Desktop app for Windows/macOS/Linux, OpenAI-compatible API, MCP support. AGPLv3.

TL;DR
Jan runs open-source LLMs offline on your desktop with full privacy -- no cloud, no API keys, no data leaving your machine.
§01

What it is

Jan is a desktop application that runs large language models locally on your computer. All processing happens on your machine -- no data is sent to cloud servers. It provides a ChatGPT-style interface for chatting with open-source models like Llama, Mistral, and Gemma.

Jan targets users who want AI assistance without privacy concerns. Developers, researchers, and security-conscious teams use it for coding help, document analysis, and brainstorming while keeping all data local.

§02

How it saves time or tokens

Jan eliminates API costs entirely. Once a model is downloaded, every interaction is free. There are no token limits, no rate limits, and no monthly bills. For teams that run hundreds of queries daily, local inference with Jan can save thousands of dollars per year compared to API-based services.
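As a rough illustration of the savings, here is a back-of-envelope estimate. The per-token prices and query sizes below are hypothetical placeholders for a hosted frontier model, not quotes from any provider:

```python
# Back-of-envelope estimate of annual API spend avoided by local inference.
# All prices and token counts are illustrative assumptions.

def annual_api_cost(queries_per_day,
                    input_tokens=1_000,      # assumed tokens in per query
                    output_tokens=500,       # assumed tokens out per query
                    price_in=3.0,            # assumed USD per 1M input tokens
                    price_out=15.0):         # assumed USD per 1M output tokens
    """Return the estimated yearly API bill in USD."""
    per_query = (input_tokens * price_in + output_tokens * price_out) / 1_000_000
    return per_query * queries_per_day * 365

# A team running 500 queries/day at these assumed prices:
print(round(annual_api_cost(500), 2))
```

At these assumptions the bill lands in the low thousands of dollars per year, which is the order of magnitude the paragraph above refers to; actual savings depend entirely on your provider's pricing and usage pattern.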

The desktop app handles model management, GPU detection, and inference configuration automatically.

§03

How to use

  1. Download Jan from the official website:
# macOS
brew install --cask jan

# Or download from https://jan.ai/download
  2. Launch Jan and browse the model hub. Download a model that fits your hardware.
  3. Start chatting. The model runs locally on your CPU or GPU.
# In Jan's chat interface:
You: Explain the difference between REST and GraphQL APIs

Jan (Llama 3): REST uses resource-based URLs with HTTP methods.
GraphQL uses a single endpoint with a query language...
§04

Example

# Jan also provides a local API server compatible with OpenAI format
# Start the server in Jan settings, then:

curl http://localhost:1337/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "llama3-8b",
    "messages": [{"role": "user", "content": "Write a Python function to parse CSV files"}],
    "max_tokens": 500
  }'

# Use with any OpenAI-compatible client (Python shown here)
import openai

client = openai.OpenAI(base_url='http://localhost:1337/v1', api_key='not-needed')
response = client.chat.completions.create(
    model='llama3-8b',
    messages=[{'role': 'user', 'content': 'Hello'}]
)
print(response.choices[0].message.content)
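For scripts that should not depend on the openai package, the same endpoint can be called with only the Python standard library. A minimal sketch, assuming Jan's server is running on its default port 1337 with a model named llama3-8b loaded (both are configurable in Jan's settings):

```python
import json
import urllib.request

JAN_URL = "http://localhost:1337/v1/chat/completions"  # Jan's default port

def build_payload(prompt, model="llama3-8b", max_tokens=500):
    """Assemble an OpenAI-format chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt, **kwargs):
    """Send one chat turn to Jan's local server and return the reply text."""
    data = json.dumps(build_payload(prompt, **kwargs)).encode()
    req = urllib.request.Request(
        JAN_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# chat("Write a Python function to parse CSV files")  # needs Jan running
```

Because the request body follows the OpenAI schema, the same helper works against any OpenAI-compatible server by changing JAN_URL.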
§05

Common pitfalls

  • Model quality depends on your hardware. Larger models (13B+ parameters) need 16GB+ RAM or a dedicated GPU. Smaller models (7B) run on most modern laptops.
  • Local models are not as capable as cloud-hosted frontier models (GPT-4, Claude). Expect lower quality for complex reasoning tasks.
  • Initial model downloads are large (4-30GB per model). Ensure sufficient disk space before downloading multiple models.
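The RAM figures above can be sanity-checked with a rough rule of thumb: a Q4-quantized GGUF model needs about half a byte per parameter, plus runtime overhead for the KV cache and buffers. The 0.5 bytes/param and 20% overhead figures below are assumptions for illustration, not exact requirements:

```python
def approx_ram_gb(params_billion, bytes_per_param=0.5, overhead=1.2):
    """Estimate resident memory (GB) for a quantized local model.

    bytes_per_param=0.5 approximates 4-bit quantization; overhead=1.2
    adds ~20% for KV cache and runtime buffers (both assumptions).
    """
    return params_billion * bytes_per_param * overhead

print(round(approx_ram_gb(7), 1))   # 7B model at 4-bit
print(round(approx_ram_gb(70), 1))  # 70B model at 4-bit
```

This yields roughly 4 GB for a 7B model and 40+ GB for a 70B model, consistent with the hardware guidance above; higher-precision quantizations (Q8, FP16) scale the bytes-per-parameter figure up accordingly.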

Frequently Asked Questions

What hardware do I need to run Jan?

Minimum: 8GB RAM for small models (7B parameters). Recommended: 16GB RAM or a GPU with 8GB+ VRAM for medium models (13B). Large models (70B) need 32GB+ RAM or multiple GPUs.

Is Jan truly private?

Yes. All inference runs locally on your hardware. No data is sent to any server. Jan works completely offline after the initial model download.

What models can Jan run?

Jan supports GGUF-format models from the Hugging Face ecosystem. Popular options include Llama 3, Mistral, Gemma, Phi, and CodeLlama. The built-in model hub makes downloading easy.

Does Jan have an API?

Yes. Jan includes a local API server that is compatible with the OpenAI API format. Any tool or library that supports OpenAI's API can connect to Jan's local server.

How does Jan compare to LM Studio?

Both are desktop apps for running local LLMs. Jan is open-source (AGPLv3) while LM Studio is proprietary. Feature sets are similar: model management, chat UI, and local API server. Both support GGUF models.


Source & Thanks

Created by Jan HQ. Licensed under AGPLv3. janhq/jan (41,400+ GitHub stars).
