# Pal MCP Server — Multi-Model AI Gateway for Claude Code

> MCP server that lets Claude Code use Gemini, OpenAI, Grok, and Ollama as a unified AI dev team. Features model routing, a CLI-to-CLI bridge, and conversation continuity across 7+ providers.

## Install

Merge the JSON below into your `.mcp.json`:

```json
{
  "mcpServers": {
    "pal": {
      "command": "bash",
      "args": ["-c", "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-key",
        "DEFAULT_MODEL": "auto"
      }
    }
  }
}
```

Prerequisites: Python 3.10+, Git, uv (`pip install uv`).

---

## Intro

Pal MCP Server is a multi-model AI gateway (11,300+ GitHub stars) that lets Claude Code, Gemini CLI, and Codex CLI use multiple LLM providers as one unified system. It supports 7+ providers — Gemini, OpenAI, Azure OpenAI, X.AI/Grok, OpenRouter, DIAL, and Ollama (local) — with features like model routing, CLI-to-CLI bridging (clink), and conversation continuity across models. When Claude's context resets, other models can "remind" Claude of the full discussion.

Best for developers who want to leverage multiple AI models from a single Claude Code session.

Works with: Claude Code, Gemini CLI, Codex CLI. Setup time: under 5 minutes.
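The `bash -c` one-liner in the `args` above probes several common uvx install locations and execs the first executable it finds. A minimal sketch of that lookup logic, with the candidate list passed in explicitly (the `find_uvx` function name is illustrative, not part of the server):

```shell
# Sketch of the uvx-discovery logic used by the one-liner in "args":
# scan the candidate paths in order and print the first executable hit.
find_uvx() {
  for p in "$@"; do
    if [ -n "$p" ] && [ -x "$p" ]; then
      printf '%s\n' "$p"
      return 0
    fi
  done
  echo 'uvx not found' >&2
  return 1
}

# The real config then execs the hit with the git source, e.g.:
#   exec "$(find_uvx "$HOME/.local/bin/uvx" /usr/local/bin/uvx)" \
#     --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server
```

Because the config uses `exec`, the found uvx replaces the wrapper shell, so the MCP client talks to the server process directly.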
---

## Core Tools (Enabled by Default)

| Tool | Description |
|------|-------------|
| **chat** | Send messages to any supported model |
| **thinkdeep** | Extended reasoning with model selection |
| **planner** | Multi-step project planning across models |
| **consensus** | Get multiple model opinions and find consensus |
| **codereview** | Cross-model code review |
| **precommit** | Pre-commit checks using multiple models |
| **debug** | Cross-model debugging assistance |
| **apilookup** | API documentation lookup via models |
| **challenge** | Devil's advocate analysis from another model |
| **clink** | CLI-to-CLI bridge — spawn Codex/Gemini CLI subagents |

### Additional Tools (Disabled by Default)

Enable a tool by removing it from the `DISABLED_TOOLS` env var:

| Tool | Description |
|------|-------------|
| **analyze** | Deep code analysis |
| **refactor** | Code refactoring suggestions |
| **testgen** | Test generation |
| **secaudit** | Security auditing |
| **docgen** | Documentation generation |
| **tracer** | Code flow tracing |

### Supported Providers

| Provider | Models | API Key Env Var |
|----------|--------|-----------------|
| Gemini | gemini-2.5-pro, gemini-2.5-flash | `GEMINI_API_KEY` |
| OpenAI | gpt-4o, o3, o4-mini | `OPENAI_API_KEY` |
| Azure OpenAI | Any deployed model | `AZURE_OPENAI_*` |
| X.AI/Grok | grok-3 | `XAI_API_KEY` |
| OpenRouter | 200+ models | `OPENROUTER_API_KEY` |
| Ollama | Any local model | (local, no key) |

### Key Feature: clink (CLI-to-CLI Bridge)

The `clink` tool lets Claude Code spawn Codex or Gemini CLI as isolated subagents for specific tasks — code reviews, bug hunting, research — without polluting Claude's main context window. Results flow back automatically.

### Key Feature: Conversation Continuity

Full context flows across tools and models. When Claude's context resets mid-session, other models retain the conversation history and can bring Claude back up to speed.
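Putting the two tables together: one `env` block can hold several provider keys at once, and a tool from the Additional Tools list becomes available once it is dropped from `DISABLED_TOOLS`. A sketch (the comma-separated `DISABLED_TOOLS` format is an assumption, and the `args` placeholder stands in for the full uvx one-liner from the Install section):

```json
{
  "mcpServers": {
    "pal": {
      "command": "bash",
      "args": ["-c", "<same uvx one-liner as in the Install section>"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-key",
        "OPENAI_API_KEY": "your-openai-key",
        "XAI_API_KEY": "your-xai-key",
        "DEFAULT_MODEL": "auto",
        "DISABLED_TOOLS": "refactor,testgen,secaudit,docgen,tracer"
      }
    }
  }
}
```

In this sketch `analyze` has been removed from the disabled list, so it would be exposed alongside the ten core tools, while the other five stay off.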
### FAQ

**Q: What is Pal MCP Server?**
A: An MCP server that connects Claude Code to 7+ AI providers (Gemini, OpenAI, Grok, Ollama, etc.), enabling multi-model workflows, cross-model code review, and CLI-to-CLI bridging.

**Q: Is Pal MCP Server free?**
A: The server itself is free and open source. You need API keys for the model providers you want to use (Gemini has a free tier).

**Q: How do I install Pal MCP Server?**
A: Add the JSON config to your `.mcp.json` file and set your API keys. Requires Python 3.10+, Git, and uv.

---

## Source & Thanks

> Created by [BeehiveInnovations](https://github.com/BeehiveInnovations). Licensed under a custom license.
>
> [pal-mcp-server](https://github.com/BeehiveInnovations/pal-mcp-server) — ⭐ 11,300+

Thank you for building a powerful multi-model gateway for the AI developer community.

---

Source: https://tokrepo.com/en/workflows/09c904b2-4bf7-4f1e-acf5-55cd465b6227
Author: TokRepo精选