# Cherry Studio Custom Models — BYOK Any LLM Provider

> Cherry Studio Custom Models adds any OpenAI-compatible endpoint — proxy, local, or third-party. Mix Claude, GPT, Gemini, DeepSeek, and Ollama side-by-side.

## Install

Save in your project root:

## Quick Use

1. Open Cherry Studio → Settings → Models → ➕ Add Provider
2. Pick OpenAI Compatible, then paste the base URL and API key
3. Cherry Studio auto-fetches models via `/v1/models` — select the ones you want

---

## Intro

Cherry Studio Custom Models support lets you add any OpenAI-compatible endpoint as a model — your own LiteLLM proxy, an Ollama instance, OpenRouter, DeepSeek, anything. Run multiple models at once, swap mid-conversation, or compare them side-by-side.

- Best for: power users who want one interface across all their LLM providers, including local models.
- Works with: Cherry Studio 1.x, any OpenAI-compatible endpoint.
- Setup time: 2 minutes per model.

---

### Add a custom provider

Settings → Models → ➕ Add Provider:

```
Provider Type: OpenAI Compatible
Name:          My LiteLLM Proxy
Base URL:      https://litellm.acme.internal/v1
API Key:       sk-team-acme-xyz
```

Cherry Studio fetches the model list via `/v1/models`. Select the ones you want exposed.

### Common providers

| Provider | Base URL | Models |
|---|---|---|
| OpenRouter | `https://openrouter.ai/api/v1` | 300+ |
| Together AI | `https://api.together.xyz/v1` | 200+ |
| Groq | `https://api.groq.com/openai/v1` | Fast Llama, Mixtral |
| DeepSeek | `https://api.deepseek.com/v1` | DeepSeek-V3, R1, Coder |
| Ollama (local) | `http://localhost:11434/v1` | Anything you've pulled |
| LM Studio (local) | `http://localhost:1234/v1` | GGUF models |
| Your LiteLLM Proxy | `https://your-proxy.com/v1` | Whatever it routes |

### Side-by-side comparison

In a chat, click the model selector → ⊕ Add Model. Two (or more) models respond to the same prompt simultaneously. Useful for picking the right model for a workload.
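Under the hood, "OpenAI-compatible" means two routes: `GET /v1/models` for discovery and `POST /v1/chat/completions` for chat. A minimal Python sketch of that protocol (the helper names and the local Ollama base URL are illustrative assumptions, not Cherry Studio internals):

```python
import json
import urllib.request


def models_url(base_url: str) -> str:
    """Join a provider base URL with the model-listing path.

    Discovery is a GET on <base_url>/models, e.g.
    https://api.deepseek.com/v1/models or http://localhost:11434/v1/models.
    """
    return base_url.rstrip("/") + "/models"


def chat_payload(model: str, prompt: str) -> dict:
    """Minimal OpenAI-compatible /chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def list_models(base_url: str, api_key: str) -> list[str]:
    """Fetch model IDs the same way any OpenAI-compatible client does."""
    req = urllib.request.Request(
        models_url(base_url),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The listing response wraps model objects in a "data" array.
    return [m["id"] for m in data.get("data", [])]
```

Pointing `list_models("http://localhost:11434/v1", "ollama")` at a running Ollama instance should return whatever you've pulled locally; any provider in the table above works the same way with its own base URL and key.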
### Switch mid-conversation

Click the model name on any message → switch the model used for the next message. Great for "use Sonnet for the hard reasoning, then switch to Haiku for the boilerplate replies."

### Config persistence

Provider configs are stored in `~/Library/Application Support/CherryStudio/config.json` (encrypted with the OS keychain). Sync them across devices via the optional Cherry Studio Sync (encrypted with your passphrase).

---

### FAQ

**Q: Does it work with Anthropic Claude directly?**
A: Yes — Cherry Studio supports Anthropic's native API as a built-in provider type (not OpenAI-compatible). Add it via Settings → Models → ➕ Add Provider → Anthropic. Same flow as the others.

**Q: Can I use a self-hosted LLM?**
A: Yes — point to your local Ollama (port 11434) or LM Studio (port 1234) for fully local inference. The OpenAI-compatible adapter handles tool use and streaming for both.

**Q: How are API keys stored?**
A: Locally only. Cherry Studio uses the OS keychain (macOS Keychain / Windows Credential Manager / Linux Secret Service). Keys never leave your device unless you opt into Cherry Studio Sync (encrypted).

---

## Source & Thanks

> Built by [kangfenmao](https://github.com/kangfenmao). Licensed under Apache-2.0.
>
> [CherryHQ/cherry-studio](https://github.com/CherryHQ/cherry-studio) — ⭐ 18,000+

---

Source: https://tokrepo.com/en/workflows/cherry-studio-custom-models-byok-any-llm-provider
Author: Cherry Studio