# Confluent MCP — Kafka + Flink Tools via MCP

> Confluent MCP Server exposes 50+ Confluent tools (Kafka, Flink SQL, Schema Registry, and more) to MCP clients, with a config-driven setup via npx.

## Install

Merge the server's JSON entry into your `.mcp.json`.

## Quick Use

1. Generate a starter config:

   ```bash
   npx @confluentinc/mcp-confluent --init-config
   ```

2. Edit `config.yaml`, then run:

   ```bash
   npx @confluentinc/mcp-confluent --config ./config.yaml
   ```

3. Optional: list the available tools:

   ```bash
   npx -y @confluentinc/mcp-confluent --list-tools
   ```

## Intro

Confluent MCP Server exposes 50+ Confluent tools (Kafka, Flink SQL, Schema Registry, and more) to MCP clients, with a config-driven setup via npx.

- **Best for:** teams that want agents to inspect and operate Kafka + Confluent Cloud behind a controlled allow/block list
- **Works with:** Node.js (per the README); Confluent Cloud or local; MCP clients (Claude Desktop/Code, Cursor, VS Code, Gemini CLI)
- **Setup time:** 15–30 minutes

## Practical Notes

- Quant: the README describes **50+ tools** across Kafka, Flink SQL, Schema Registry, and more.
- Quant: you can validate the tool surface quickly with `--list-tools` before wiring the server into an MCP client.

## A safe integration pattern

The fastest way to get burned is to give an agent a giant, write-capable surface area on day one. Instead:

1. **Start with discovery-only tools**: list clusters and topics, read configs, fetch schemas.
2. **Add a narrow allow-list** for write actions (produce, create topics) only after you have a human review loop.
3. **Version and review `config.yaml` like code**: PRs, diffs, and per-environment scoping.

## What to measure

- Time to answer "where is this message coming from?" before and after MCP adoption.
- Number of manual CLI steps replaced by agent tool calls (start with a target of 5 per incident).

### FAQ

**Q: Do I need to install globally?**
A: No. The README shows running via `npx` with a config file.
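For clients that read `.mcp.json`, the entry referenced in the Install section can be sketched as follows, assuming the standard `mcpServers` layout and reusing the npx flags from Quick Use; the server name `confluent` and the `-y` flag are illustrative, not taken from the README:

```json
{
  "mcpServers": {
    "confluent": {
      "command": "npx",
      "args": ["-y", "@confluentinc/mcp-confluent", "--config", "./config.yaml"]
    }
  }
}
```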
**Q: How do I restrict actions?**
A: Use allow/block lists (see the CLI options) and start with read-only exploration.

**Q: What is a good first task?**
A: List environments, clusters, and topics, then inspect one schema and one consumer-group flow end to end.

## Source & Thanks

> Source: https://github.com/confluentinc/mcp-confluent
> License: MIT
> GitHub stars: 152 · forks: 50

---

Source: https://tokrepo.com/en/workflows/confluent-mcp-kafka-flink-tools-via-mcp
Author: MCP Hub
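The "What to measure" metrics above are easy to track with a small script. A minimal sketch, in which the `Incident` record and every number are hypothetical (nothing here comes from the README):

```python
# Illustrative tracker for the "What to measure" metrics:
# time-to-trace and CLI steps replaced by agent tool calls.
from dataclasses import dataclass


@dataclass
class Incident:
    minutes_to_trace: float   # time to answer "where is this message coming from?"
    manual_cli_steps: int     # steps still done by hand during the incident
    agent_tool_calls: int     # steps replaced by MCP tool calls


def steps_replaced_ratio(incidents: list[Incident]) -> float:
    """Fraction of all incident steps handled via agent tool calls."""
    total = sum(i.manual_cli_steps + i.agent_tool_calls for i in incidents)
    replaced = sum(i.agent_tool_calls for i in incidents)
    return replaced / total if total else 0.0


incidents = [Incident(12.0, 3, 5), Incident(30.0, 7, 5)]
print(f"{steps_replaced_ratio(incidents):.2f}")  # 10 of 20 steps -> 0.50
```

Tracking a ratio rather than a raw count makes the "5 tool calls per incident" target comparable across incidents of different sizes.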