# GoModel — OpenAI-Compatible AI Gateway

> GoModel is a lightweight AI gateway written in Go that exposes a unified OpenAI-compatible API across many providers, plus logging/analytics for cost and usage.

## Install

Copy the content below into your project:

## Quick Use

```bash
docker run --rm -p 8080:8080 \
  -e OPENAI_API_KEY="your-openai-key" \
  enterpilot/gomodel

# in a second terminal:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5-chat-latest","messages":[{"role":"user","content":"Hello!"}]}'
```

## Intro

GoModel is a lightweight AI gateway written in Go: one OpenAI-compatible API in front of many model providers, with logging/analytics for routing, rate limiting, and cost accounting, which also makes it easier to trace failed requests and retry chains.

- **Best for:** Teams standardizing LLM access, routing, and cost tracking behind one gateway
- **Works with:** Docker; OpenAI-compatible clients/SDKs; provider API keys via env vars (per README)
- **Setup time:** 5–20 minutes

## Practical Notes

- GitHub: 865 stars · 69 forks; pushed 2026-05-12 (verified via GitHub API).
- README shows `docker run -p 8080:8080` and a first-call example to `/v1/chat/completions`.
- README lists multiple providers (OpenAI, Anthropic, Gemini, DeepSeek, Groq, OpenRouter, Ollama, vLLM, Bedrock, etc.).

## Main

Operational checklist for gateways:

1. **Prefer env files in production.** README warns against passing secrets via `-e` on the command line, which can leak them into shell history and process listings.
2. **Log request IDs.** Keep a request-id header in your clients so you can trace failures end-to-end.
3. **Start with one provider + one model.** Add routing only after you have stable monitoring and quotas.
4. **Pin model allowlists.** Expose only the models you approve (per README provider-model configuration patterns).

Treat the gateway as infrastructure: version it, monitor it, and keep credentials scoped and rotated.

### FAQ

**Q: Is it only for OpenAI?**
A: No. The README says it provides an OpenAI-compatible API across many providers, depending on the credentials you set.
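The env-file practice from the checklist above can be sketched as follows. This is a minimal sketch, not from the README verbatim: the `.env` filename and its contents are illustrative, while the image name `enterpilot/gomodel` and the `OPENAI_API_KEY` variable follow the Quick Use example.

```bash
# Put secrets in a local .env file instead of passing them with -e,
# so they stay out of shell history and process listings.
cat > .env <<'EOF'
OPENAI_API_KEY=your-openai-key
EOF

# Make sure the file never lands in git history.
grep -qx '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# Same container as Quick Use, but with --env-file instead of -e.
docker run --rm -p 8080:8080 \
  --env-file .env \
  enterpilot/gomodel
```

In CI or orchestration, the equivalent move is injecting the variable from a secret store rather than baking it into the command line.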
**Q: Do I need to pass all keys?**
A: No. The README says it detects available providers from the credentials you supply (at least one).

**Q: How do I avoid leaking secrets?**
A: Use `--env-file` instead of `-e`, and keep `.env` out of git history.

## Source & Thanks

> Source: https://github.com/ENTERPILOT/GoModel
> License: MIT
> GitHub stars: 865 · forks: 69

---

Source: https://tokrepo.com/en/workflows/gomodel-openai-compatible-ai-gateway
Author: Script Depot