Quick Use
- Create agent at elevenlabs.io/app/conversational-ai (or via API)
- Set prompt, voice, optional tools schema
- Embed via the `<elevenlabs-convai>` widget, or connect to a Twilio stream
Intro
ElevenLabs Conversational AI is a managed voice-agent platform: bring an agent prompt and an optional function-call schema, and ElevenLabs handles STT, LLM routing, TTS, VAD, and barge-in. Drop it into a web widget, a Twilio phone number, or any custom WebRTC client.
Best for: customer support bots, lead-qualification calls, voice characters in mobile apps, anyone who wants sub-1-second voice without wiring four providers.
Works with: Web JS SDK, Python SDK, Twilio Voice integration.
Setup time: ~15 minutes.
Create an agent (dashboard or API)
```python
import requests

API_KEY = "YOUR_XI_API_KEY"  # from the ElevenLabs dashboard

agent = requests.post(
    "https://api.elevenlabs.io/v1/convai/agents/create",
    headers={"xi-api-key": API_KEY},
    json={
        "name": "Refund Triage",
        "conversation_config": {
            "agent": {
                "prompt": {
                    "prompt": "You triage refund requests. Confirm order ID, reason, then offer to escalate to a human if amount > $500.",
                    "llm": "gpt-4o-mini",
                },
                "first_message": "Hi, I can help with refunds. What's your order ID?",
                "language": "en",
            },
            "tts": {"voice_id": "21m00Tcm4TlvDq8ikWAM", "model_id": "eleven_turbo_v2_5"},
            "asr": {"provider": "elevenlabs", "language": "en"},
        },
    },
).json()
print(agent["agent_id"])
```

Attach a tool (function call)
```python
tools = [{
    "type": "client",
    "name": "lookup_order",
    "description": "Look up order status by ID",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}]

requests.patch(
    f"https://api.elevenlabs.io/v1/convai/agents/{agent['agent_id']}",
    headers={"xi-api-key": API_KEY},
    json={"conversation_config": {"agent": {"tools": tools}}},
)
```

Embed in a web app
```html
<!-- One-line embed via the official widget -->
<elevenlabs-convai agent-id="YOUR_AGENT_ID"></elevenlabs-convai>
<script src="https://elevenlabs.io/convai-widget/index.js" async type="text/javascript"></script>
```

Wire to Twilio (phone agent)
```python
# In your Twilio Voice webhook handler
from twilio.twiml.voice_response import VoiceResponse, Connect

response = VoiceResponse()
connect = Connect()
connect.stream(url="wss://api.elevenlabs.io/v1/convai/conversation?agent_id=YOUR_AGENT_ID")
response.append(connect)
return str(response)  # TwiML sent back to Twilio
```

Pricing tiers (May 2026)
| Plan | Minutes/month | Per extra min |
|---|---|---|
| Starter ($5) | 60 | $0.30 |
| Creator ($22) | 250 | $0.20 |
| Pro ($99) | 1,100 | $0.18 |
| Scale ($330) | 3,800 | $0.15 |
| Business ($1,320) | 13,500 | $0.12 |
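The overage math from the table above can be sketched as a quick estimator. Plan names and rates are copied from the table; the function itself is illustrative, not an official API:

```python
# Illustrative cost estimator built from the pricing table above.
# Each entry: (monthly base $, included minutes, $ per extra minute).
PLANS = {
    "starter": (5.00, 60, 0.30),
    "creator": (22.00, 250, 0.20),
    "pro": (99.00, 1_100, 0.18),
    "scale": (330.00, 3_800, 0.15),
    "business": (1_320.00, 13_500, 0.12),
}

def monthly_cost(plan: str, minutes_used: int) -> float:
    """Base subscription plus per-minute overage beyond the included quota."""
    base, included, per_extra = PLANS[plan]
    overage = max(0, minutes_used - included) * per_extra
    return round(base + overage, 2)

print(monthly_cost("pro", 1_500))  # 1,100 included + 400 extra at $0.18 -> 171.0
```

One crossover this catches: on Pro, overage past roughly 2,400 minutes/month already exceeds Scale's $330 base, so heavy callers should re-run the numbers before each renewal.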
FAQ
Q: ConvAI vs LiveKit Agents? A: ConvAI is managed and opinionated — one bill, ElevenLabs TTS, fast to ship. LiveKit Agents is self-assembled — you pick STT/LLM/TTS, run on your infra, cheaper at scale, more control. Use ConvAI for fast launch, LiveKit when scale or vendor diversity matters.
Q: Can I use a non-ElevenLabs LLM?
A: Yes — set llm to a custom endpoint with OpenAI-compatible schema. Or use Anthropic / Gemini through the supported list. ElevenLabs handles the orchestration regardless of which LLM serves tokens.
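In the create-agent payload from earlier, a custom endpoint plugs into the same `prompt` object. The exact field names below (`custom-llm`, `custom_llm`, `url`, `model_id`) are an assumption based on the documented pattern; check the current API reference before shipping:

```json
{
  "conversation_config": {
    "agent": {
      "prompt": {
        "llm": "custom-llm",
        "custom_llm": {
          "url": "https://llm.example.com/v1",
          "model_id": "my-finetune"
        }
      }
    }
  }
}
```

The endpoint only needs to speak the OpenAI-compatible chat-completions schema; ElevenLabs still owns turn-taking, STT, and TTS around it.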
Q: Latency in practice?
A: Typical round-trip 600-900ms on Pro tier with Turbo v2.5 TTS. Long agents with verbose tool calls slip into 1.2-1.8s. Use first_message and avoid huge system prompts to keep TTFR (time-to-first-response) tight.
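The 600-900 ms figure decomposes roughly into endpointing, STT finalization, LLM first token, and TTS first byte. A toy budget you can adapt when profiling your own stack; the per-component numbers here are illustrative assumptions, not measured ElevenLabs figures:

```python
# Illustrative TTFR budget; adjust the components to your own measurements.
budget_ms = {
    "vad_endpoint": 200,     # silence detection before the turn is closed
    "stt_final": 150,        # final transcript after the endpoint fires
    "llm_first_token": 250,  # grows with system-prompt size and tool chatter
    "tts_first_byte": 150,   # Turbo-class TTS time to first audio byte
}

ttfr = sum(budget_ms.values())
print(f"estimated TTFR: {ttfr} ms")  # 750 ms, inside the 600-900 ms band
```

The budget makes the FAQ's advice concrete: the LLM line item is the one that balloons with huge system prompts and verbose tool calls, which is why trimming them is the highest-leverage fix.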
Source & Thanks
Built by ElevenLabs. ConvAI docs at elevenlabs.io/docs/conversational-ai.
elevenlabs/elevenlabs-python — official SDK