# Phoenix Tracing Quickstart — OpenInference Tracer Setup

> Phoenix instruments OpenAI, Anthropic, LangChain, LlamaIndex, and CrewAI via OpenInference. Local UI or Arize cloud. No per-call code changes.

## Install

Copy the content below into your project:

## Quick Use

1. `pip install arize-phoenix openinference-instrumentation-openai`
2. `phoenix serve` (or use an Arize cloud endpoint)
3. `OpenAIInstrumentor().instrument()` — every call now traces

---

## Intro

Phoenix is the open-source observability companion to Arize AX — drop the OpenInference tracer in once and every OpenAI / Anthropic / LangChain / LlamaIndex / CrewAI / DSPy call gets a trace span automatically, with prompts, completions, latency, and token cost. View traces in the local Phoenix UI (port 6006) or send them to Arize cloud.

Best for: debugging multi-step agents, finding which retrieval step poisoned the answer, comparing prompt versions side by side.

Works with: any Python LLM stack via OpenInference instrumentation.

Setup time: 2 minutes.

---

### Install + start local Phoenix

```bash
pip install arize-phoenix openinference-instrumentation-openai openinference-instrumentation-langchain
phoenix serve  # starts UI on http://localhost:6006
```

### Auto-instrument OpenAI

```python
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(project_name="my-rag-app", endpoint="http://localhost:6006/v1/traces")
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# That's it. Now every OpenAI call traces automatically:
from openai import OpenAI

client = OpenAI()
client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum entanglement"}],
)
# Open localhost:6006 — your trace appears with prompt, completion, latency, cost.
```

### LangChain + LlamaIndex

```python
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```

### Send to Arize cloud instead

```python
tracer_provider = register(
    project_name="my-rag-app",
    endpoint="https://otlp.arize.com/v1/traces",
    headers={"api_key": ARIZE_API_KEY, "space_id": ARIZE_SPACE_ID},
)
```

### Trace span attributes (OpenInference standard)

| Attribute | Example |
|---|---|
| `llm.model_name` | `gpt-4o` |
| `llm.token_count.prompt` | `847` |
| `llm.token_count.completion` | `213` |
| `llm.input_messages.0.message.content` | full prompt text |
| `output.value` | model output |
| `retrieval.documents.*.document.content` | chunks fetched in RAG |

---

### FAQ

**Q: Phoenix vs Langfuse vs LangSmith?**
A: Phoenix is OpenInference-native — vendor-neutral OTel attributes that any backend can read. Langfuse has a stronger prompt-management and self-hosting story. LangSmith is best if you live in LangChain. Phoenix is the choice when you want OTel and may switch backends.

**Q: Does Phoenix need a database?**
A: Local mode uses SQLite under `~/.phoenix`. A production self-host swaps to Postgres via `PHOENIX_SQL_DATABASE_URL`. Arize cloud handles persistence for you. SQLite is fine for solo dev with <10K traces.

**Q: Can I see traces from a notebook?**
A: Yes — `phoenix.launch_app()` opens the UI inline as a Jupyter widget or in a new tab. Combine with `phoenix.evals` to run LLM-as-judge evals and view them next to traces.

---

## Source & Thanks

> Built by [Arize AI](https://github.com/Arize-ai). Licensed under Apache-2.0.
>
> [Arize-ai/phoenix](https://github.com/Arize-ai/phoenix) — ⭐ 4,500+
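### Appendix: reading flattened span attributes

OpenInference flattens nested data (message lists, retrieved documents) into dot-separated attribute keys like those in the table above, where dots nest and integer segments index into lists. A minimal sketch of how such keys map back to a nested structure — the `unflatten` helper is illustrative, not part of the Phoenix API:

```python
# Illustrative helper: rebuild a nested dict from flattened
# OpenInference-style span attribute keys. Not a Phoenix API --
# just a sketch of the naming convention.
def unflatten(attrs: dict) -> dict:
    root: dict = {}
    for key, value in attrs.items():
        parts = key.split(".")
        node = root
        for part in parts[:-1]:          # descend, creating dicts as needed
            node = node.setdefault(part, {})
        node[parts[-1]] = value          # last segment holds the value
    return root

# Sample attributes matching the table above.
span_attrs = {
    "llm.model_name": "gpt-4o",
    "llm.token_count.prompt": 847,
    "llm.token_count.completion": 213,
    "llm.input_messages.0.message.content": "Explain quantum entanglement",
}

nested = unflatten(span_attrs)
print(nested["llm"]["token_count"]["prompt"])  # 847
print(nested["llm"]["input_messages"]["0"]["message"]["content"])
```

Note that index segments stay as string keys (`"0"`) in this sketch; real consumers typically collect them into ordered lists.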
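### Appendix: from token counts to a cost figure

The `llm.token_count.*` attributes are what turn into the cost shown next to a trace. A sketch of that arithmetic — the per-million-token rates below are hypothetical placeholders, so check your provider's current pricing rather than relying on these numbers:

```python
# Hypothetical USD rates per 1M tokens: (prompt, completion).
# Placeholder values for illustration only.
HYPOTHETICAL_RATES = {
    "gpt-4o": (2.50, 10.00),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Weight each token count by its rate and scale to per-token cost."""
    prompt_rate, completion_rate = HYPOTHETICAL_RATES[model]
    return (prompt_tokens * prompt_rate + completion_tokens * completion_rate) / 1_000_000

# Using the example counts from the attribute table above.
cost = estimate_cost("gpt-4o", prompt_tokens=847, completion_tokens=213)
print(f"${cost:.6f}")
```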
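### Appendix: self-hosting on Postgres

For the production self-host path mentioned in the FAQ, switching the backing store from SQLite to Postgres is an environment-variable change. The connection string below is a placeholder — substitute your own host and credentials:

```shell
# Point Phoenix at Postgres instead of the default SQLite file under ~/.phoenix.
# Placeholder URL -- replace user, password, host, and database name.
export PHOENIX_SQL_DATABASE_URL="postgresql://user:password@localhost:5432/phoenix"
phoenix serve
```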
---

Source: https://tokrepo.com/en/workflows/phoenix-tracing-quickstart-openinference-tracer-setup
Author: Arize AI