Knowledge · May 8, 2026 · 4 min read

Phoenix Tracing Quickstart — OpenInference Tracer Setup

Phoenix instruments OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI via OpenInference. Local UI or Arize cloud. No per-call code changes.

Arize AI · Community

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, an adapter-specific plan, and the raw content to help agents judge fit, risk, and next actions.

Agent surface: Any MCP/CLI agent
Type: Knowledge
Installation: Stage only (15/100)
Trust: New
Entry point: Asset
Universal CLI command: npx tokrepo install 0ba1d7ad-c101-4f54-be76-388d98ddaf40
Introduction

Phoenix is the open-source observability companion to Arize AX — drop the OpenInference tracer in once and every OpenAI / Anthropic / LangChain / LlamaIndex / CrewAI / DSPy call gets a trace span automatically with prompts, completions, latency, and token cost. View traces in the local Phoenix UI (port 6006) or send them to Arize cloud. Best for: debugging multi-step agents, finding which retrieval step poisoned the answer, comparing prompt versions side-by-side. Works with: any Python LLM stack via OpenInference instrumentation. Setup time: 2 minutes.


Install + start local Phoenix

pip install arize-phoenix openinference-instrumentation-openai openinference-instrumentation-langchain
phoenix serve  # starts UI on http://localhost:6006

Auto-instrument OpenAI

from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(project_name="my-rag-app", endpoint="http://localhost:6006/v1/traces")
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# That's it. Now every OpenAI call traces automatically:
from openai import OpenAI
client = OpenAI()
client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum entanglement"}],
)
# Open localhost:6006 — your trace appears with prompt, completion, latency, cost.

LangChain + LlamaIndex

from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)

Send to Arize cloud instead

tracer_provider = register(
    project_name="my-rag-app",
    endpoint="https://otlp.arize.com/v1/traces",
    headers={"api_key": ARIZE_API_KEY, "space_id": ARIZE_SPACE_ID},
)

Trace span attributes (OpenInference standard)

Attribute Example
llm.model_name gpt-4o
llm.token_count.prompt 847
llm.token_count.completion 213
llm.input_messages.0.message.content full prompt text
output.value model output
retrieval.documents.*.document.content chunks fetched in RAG
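
The dotted, index-numbered keys above are plain OpenTelemetry attribute names, which is why any OTel-compatible backend can read them. As a minimal sketch of the convention (a hypothetical helper, not part of Phoenix — the instrumentors set these attributes for you):

```python
def flatten_messages(messages: list[dict]) -> dict[str, str]:
    # Flatten a chat message list into OpenInference-style span attribute
    # keys: llm.input_messages.<index>.message.<field>
    attrs = {}
    for i, msg in enumerate(messages):
        attrs[f"llm.input_messages.{i}.message.role"] = msg["role"]
        attrs[f"llm.input_messages.{i}.message.content"] = msg["content"]
    return attrs

attrs = flatten_messages([
    {"role": "system", "content": "You are a physicist."},
    {"role": "user", "content": "Explain quantum entanglement"},
])
print(attrs["llm.input_messages.1.message.content"])  # → Explain quantum entanglement
```

Flat string keys instead of nested objects is what lets these attributes ride on a standard OTel span without a custom payload format.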

FAQ

Q: Phoenix vs Langfuse vs LangSmith? A: Phoenix is OpenInference-native — it emits vendor-neutral OTel attributes that any backend can read. Langfuse has a stronger prompt-management and self-hosting story. LangSmith is the best fit if you live in LangChain. Phoenix is the choice when you want OTel semantics and may switch backends.

Q: Does Phoenix need a database? A: Local mode uses SQLite under ~/.phoenix. Production self-host swaps to Postgres via PHOENIX_SQL_DATABASE_URL. Arize cloud handles persistence for you. SQLite is fine for solo dev with <10K traces.
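
For the Postgres swap mentioned above, the environment variable is set before starting the server; a sketch with placeholder connection details:

```shell
# Point Phoenix at Postgres instead of the default SQLite file under ~/.phoenix.
# Host, user, password, and database name below are placeholders — use your own.
export PHOENIX_SQL_DATABASE_URL="postgresql://phoenix:secret@db.internal:5432/phoenix"
phoenix serve
```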

Q: Can I see traces from a notebook? A: Yes — phoenix.launch_app() opens the UI inline as a Jupyter widget or new tab. Combine with phoenix.evals to run LLM-as-judge evals and view them next to traces.


Quick Use

  1. pip install arize-phoenix openinference-instrumentation-openai
  2. phoenix serve (or use Arize cloud endpoint)
  3. OpenAIInstrumentor().instrument() — every call now traces

Source & Thanks

Built by Arize AI. Licensed under Apache-2.0.

Arize-ai/phoenix — ⭐ 4,500+

