Knowledge · May 8, 2026 · 4 min read

Phoenix Tracing Quickstart — OpenInference Tracer Setup

Phoenix instruments OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI via OpenInference. Local UI or Arize cloud. No per-call code changes.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and raw content so agents can evaluate compatibility, risk, and next steps.

Agent surface: any MCP/CLI agent
Type: Knowledge
Installation: Stage only (15/100)
Trust: New
Input: Asset

Universal CLI command
npx tokrepo install 0ba1d7ad-c101-4f54-be76-388d98ddaf40
Introduction

Phoenix is the open-source observability companion to Arize AX — drop the OpenInference tracer in once and every OpenAI / Anthropic / LangChain / LlamaIndex / CrewAI / DSPy call gets a trace span automatically with prompts, completions, latency, and token cost. View traces in the local Phoenix UI (port 6006) or send them to Arize cloud. Best for: debugging multi-step agents, finding which retrieval step poisoned the answer, comparing prompt versions side-by-side. Works with: any Python LLM stack via OpenInference instrumentation. Setup time: 2 minutes.


Install + start local Phoenix

pip install arize-phoenix openinference-instrumentation-openai openinference-instrumentation-langchain
phoenix serve  # starts UI on http://localhost:6006
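If port 6006 is already in use, Phoenix can be pointed at another port via an environment variable. A minimal sketch, assuming the `PHOENIX_PORT` setting is honored by your Phoenix version:

```shell
# Run the Phoenix UI/collector on an alternate port
PHOENIX_PORT=6007 phoenix serve
# Then point register(endpoint=...) at http://localhost:6007/v1/traces
```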

Auto-instrument OpenAI

from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(project_name="my-rag-app", endpoint="http://localhost:6006/v1/traces")
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# That's it. Now every OpenAI call traces automatically:
from openai import OpenAI
client = OpenAI()
client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum entanglement"}],
)
# Open localhost:6006 — your trace appears with prompt, completion, latency, cost.

LangChain + LlamaIndex

from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)

Send to Arize cloud instead

tracer_provider = register(
    project_name="my-rag-app",
    endpoint="https://otlp.arize.com/v1/traces",
    headers={"api_key": ARIZE_API_KEY, "space_id": ARIZE_SPACE_ID},
)
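The snippet above inlines `ARIZE_API_KEY` and `ARIZE_SPACE_ID`; in practice you would load them from the environment and fail fast if they are missing. A small sketch (the helper name is our own, not a Phoenix or Arize API):

```python
import os

def arize_headers() -> dict:
    """Build the OTLP headers dict for Arize cloud from environment variables.

    Raises RuntimeError if either credential is unset, so misconfiguration
    surfaces at startup rather than as silently dropped traces.
    """
    api_key = os.environ.get("ARIZE_API_KEY")
    space_id = os.environ.get("ARIZE_SPACE_ID")
    missing = [name for name, value in
               [("ARIZE_API_KEY", api_key), ("ARIZE_SPACE_ID", space_id)]
               if not value]
    if missing:
        raise RuntimeError(f"Missing env vars: {', '.join(missing)}")
    return {"api_key": api_key, "space_id": space_id}
```

You would then pass `headers=arize_headers()` into `register(...)` instead of the hard-coded dict.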

Trace span attributes (OpenInference standard)

Attribute                                | Example
llm.model_name                           | gpt-4o
llm.token_count.prompt                   | 847
llm.token_count.completion               | 213
llm.input_messages.0.message.content     | full prompt text
output.value                             | model output
retrieval.documents.*.document.content   | chunks fetched in RAG
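The dotted keys in the table are flat projections of nested message/document structures. To make the naming scheme concrete, here is an illustrative flattener (our own sketch, not a Phoenix API) that turns a nested payload into OpenInference-style dotted attribute keys:

```python
from typing import Any

def flatten(obj: Any, prefix: str = "") -> dict:
    """Flatten nested dicts/lists into dotted attribute keys.

    Dict keys become path segments; list elements get their index as a
    segment, which is how llm.input_messages.0.message.content arises.
    """
    items: dict = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            items.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            items.update(flatten(value, f"{prefix}{i}."))
    else:
        items[prefix.rstrip(".")] = obj
    return items

span_input = {
    "llm": {
        "model_name": "gpt-4o",
        "input_messages": [{"message": {"role": "user", "content": "hi"}}],
    }
}
print(flatten(span_input))
# {'llm.model_name': 'gpt-4o',
#  'llm.input_messages.0.message.role': 'user',
#  'llm.input_messages.0.message.content': 'hi'}
```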

FAQ

Q: Phoenix vs Langfuse vs LangSmith? A: Phoenix is OpenInference-native — vendor-neutral OTel attributes that any backend can read. Langfuse has a stronger prompt-management and self-hosting story. LangSmith is best if you live in LangChain. Phoenix is the choice when you want OTel and may switch backends.

Q: Does Phoenix need a database? A: Local mode uses SQLite under ~/.phoenix. Production self-host swaps to Postgres via PHOENIX_SQL_DATABASE_URL. Arize cloud handles persistence for you. SQLite is fine for solo dev with <10K traces.

Q: Can I see traces from a notebook? A: Yes — phoenix.launch_app() opens the UI inline as a Jupyter widget or new tab. Combine with phoenix.evals to run LLM-as-judge evals and view them next to traces.
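Following the database answer above, a self-hosted production setup swaps SQLite for Postgres with a single environment variable; the connection string below is a placeholder:

```shell
# Self-hosted Phoenix backed by Postgres instead of the default SQLite
export PHOENIX_SQL_DATABASE_URL="postgresql://phoenix:secret@db.internal:5432/phoenix"
phoenix serve
```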


Quick Use

  1. pip install arize-phoenix openinference-instrumentation-openai
  2. phoenix serve (or use Arize cloud endpoint)
  3. OpenAIInstrumentor().instrument() — every call now traces



Source & Thanks

Built by Arize AI. Licensed under Apache-2.0.

Arize-ai/phoenix — ⭐ 4,500+


