Skills · May 11, 2026 · 4 min read

ElevenLabs ConvAI — Full-Duplex Voice Agent Platform

ElevenLabs ConvAI bundles STT, LLM, TTS, VAD, and barge-in into one managed voice agent. Define a prompt, attach tools, and point it at a Twilio number.

Agent-ready

This asset can be read and installed directly by agents

TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and the raw content to help agents judge fit, risk, and next actions.

Needs Confirmation · 52/100 · Policy: confirm
Agent surface: Any MCP/CLI agent
Type: Skill
Installation: Single
Trust: New
Entry point: Asset
Universal CLI command:
npx tokrepo install 91880bcc-996e-46d0-baf8-ad67d7b48aed
Introduction

ElevenLabs Conversational AI is a managed voice agent platform — bring an agent prompt and an optional function-call schema, ElevenLabs handles STT, LLM routing, ElevenLabs TTS, VAD, and barge-in. Drop it into a web widget, a Twilio phone number, or any custom WebRTC client. Best for: customer support bots, lead qualification calls, voice characters in mobile apps, anyone who wants sub-1-second voice without wiring four providers. Works with: Web JS SDK, Python SDK, Twilio Voice integration. Setup time: 15 minutes.


Create an agent (dashboard or API)

import os

import requests

# Read the key from the environment rather than hard-coding it
API_KEY = os.environ["ELEVENLABS_API_KEY"]

agent = requests.post(
    "https://api.elevenlabs.io/v1/convai/agents/create",
    headers={"xi-api-key": API_KEY},
    json={
        "name": "Refund Triage",
        "conversation_config": {
            "agent": {
                "prompt": {
                    "prompt": "You triage refund requests. Confirm order ID, reason, then offer to escalate to a human if amount > $500.",
                    "llm": "gpt-4o-mini",
                },
                "first_message": "Hi, I can help with refunds. What's your order ID?",
                "language": "en",
            },
            "tts": {"voice_id": "21m00Tcm4TlvDq8ikWAM", "model_id": "eleven_turbo_v2_5"},
            "asr": {"provider": "elevenlabs", "language": "en"},
        },
    },
).json()
print(agent["agent_id"])
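The nested conversation_config is easy to get wrong by hand. A small local helper (not part of the ElevenLabs API, just a convenience sketch) keeps the shape in one place:

```python
def agent_payload(name, prompt, first_message, llm="gpt-4o-mini",
                  voice_id="21m00Tcm4TlvDq8ikWAM", language="en"):
    """Build the create-agent request body used above."""
    return {
        "name": name,
        "conversation_config": {
            "agent": {
                "prompt": {"prompt": prompt, "llm": llm},
                "first_message": first_message,
                "language": language,
            },
            "tts": {"voice_id": voice_id, "model_id": "eleven_turbo_v2_5"},
            "asr": {"provider": "elevenlabs", "language": language},
        },
    }
```

Then the create call shrinks to `requests.post(url, headers=..., json=agent_payload("Refund Triage", prompt, first_message))`.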

Attach a tool (function call)

tools = [{
    "type": "client",
    "name": "lookup_order",
    "description": "Look up order status by ID",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}]

agent_id = agent["agent_id"]  # returned by the create call above

requests.patch(
    f"https://api.elevenlabs.io/v1/convai/agents/{agent_id}",
    headers={"xi-api-key": API_KEY},
    json={"conversation_config": {"agent": {"tools": tools}}},
)
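A "client" tool is executed by your code, not by ElevenLabs: when the agent decides to call lookup_order, your client receives the tool call and must return a result. The event shape below is an illustrative assumption, not the SDK's exact schema; the pattern is a simple name-to-handler dispatch:

```python
# Registry mapping tool names to Python handlers
HANDLERS = {}

def tool(name):
    """Decorator that registers a handler for a named client tool."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@tool("lookup_order")
def lookup_order(order_id: str) -> dict:
    # Replace with a real database or order-service lookup
    return {"order_id": order_id, "status": "shipped"}

def dispatch(call: dict) -> dict:
    """Handle one tool call: {'name': ..., 'parameters': {...}} (assumed shape)."""
    fn = HANDLERS[call["name"]]
    return fn(**call["parameters"])
```

Whatever the handler returns is what you send back as the tool result; the agent then speaks from it.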

Embed in a web app

<!-- One-line embed via official widget -->
<elevenlabs-convai agent-id="YOUR_AGENT_ID"></elevenlabs-convai>
<script src="https://elevenlabs.io/convai-widget/index.js" async type="text/javascript"></script>

Wire to Twilio (phone agent)

# Twilio Voice webhook handler (Flask shown; any framework works)
from flask import Flask
from twilio.twiml.voice_response import VoiceResponse, Connect

app = Flask(__name__)

@app.route("/voice", methods=["POST"])
def voice():
    response = VoiceResponse()
    connect = Connect()
    connect.stream(url="wss://api.elevenlabs.io/v1/convai/conversation?agent_id=YOUR_AGENT_ID")
    response.append(connect)
    return str(response)
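If you'd rather not pull in the twilio helper library just to emit TwiML, the response above is a small XML document (<Response><Connect><Stream url="..."/></Connect></Response>). A stdlib-only sketch:

```python
import xml.etree.ElementTree as ET

def convai_twiml(agent_id: str) -> str:
    """Build the TwiML that bridges a Twilio call to a ConvAI agent."""
    root = ET.Element("Response")
    connect = ET.SubElement(root, "Connect")
    ET.SubElement(connect, "Stream", url=(
        "wss://api.elevenlabs.io/v1/convai/conversation"
        f"?agent_id={agent_id}"
    ))
    return ET.tostring(root, encoding="unicode")
```

Return this string from your webhook with Content-Type text/xml and Twilio will open the media stream to ElevenLabs.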

Pricing tiers (May 2026)

Plan               Minutes/month   Per extra min
Starter ($5)       60              $0.30
Creator ($22)      250             $0.20
Pro ($99)          1,100           $0.18
Scale ($330)       3,800           $0.15
Business ($1,320)  13,500          $0.12

FAQ

Q: ConvAI vs LiveKit Agents? A: ConvAI is managed and opinionated — one bill, ElevenLabs TTS, fast to ship. LiveKit Agents is self-assembled — you pick STT/LLM/TTS, run on your infra, cheaper at scale, more control. Use ConvAI for fast launch, LiveKit when scale or vendor diversity matters.

Q: Can I use a non-ElevenLabs LLM? A: Yes — set llm to a custom endpoint with OpenAI-compatible schema. Or use Anthropic / Gemini through the supported list. ElevenLabs handles the orchestration regardless of which LLM serves tokens.

Q: Latency in practice? A: Typical round-trip 600-900ms on Pro tier with Turbo v2.5 TTS. Long agents with verbose tool calls slip into 1.2-1.8s. Use first_message and avoid huge system prompts to keep TTFR (time-to-first-response) tight.


Quick Use

  1. Create agent at elevenlabs.io/app/conversational-ai (or via API)
  2. Set prompt, voice, optional tools schema
  3. Embed via <elevenlabs-convai> widget or connect to Twilio Stream



Source & Thanks

Built by ElevenLabs. ConvAI docs at elevenlabs.io/docs/conversational-ai.

elevenlabs/elevenlabs-python — official SDK
