Quick Use
- Add `extra_body={'search_parameters': {'mode': 'on', 'sources': [{'type': 'web'}]}}` to `chat.completions.create`
- Read `message.citations` to render footnotes
- Cache by a (query, sources, date_range) hash
Intro
Grok Live Search is a server-side tool baked into xAI's API that grounds Grok's responses in fresh web, X (Twitter), and news results without an external retrieval pipeline. You set mode=on or auto, choose source types, and Grok handles search + read + cite, returning inline citations plus a num_sources_used field. Best for: news Q&A, finance/sports/election apps, anywhere the answer must reflect today's reality. Works with: any OpenAI-compatible client (Python, JS, curl) hitting api.x.ai. Setup time: 2 minutes.
Curl example
```shell
curl https://api.x.ai/v1/chat/completions \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok-3",
    "messages": [{"role": "user", "content": "Top 3 AI funding rounds this week, with amounts and lead investors"}],
    "search_parameters": {
      "mode": "on",
      "sources": [{"type": "web"}, {"type": "news"}, {"type": "x"}],
      "max_search_results": 10,
      "from_date": "2026-05-01",
      "to_date": "2026-05-08"
    }
  }'
```

Python with date range + X handle filter
```python
import os
from openai import OpenAI

# Any OpenAI-compatible client works; point it at api.x.ai.
client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

resp = client.chat.completions.create(
    model="grok-3",
    messages=[{"role": "user", "content": "What is @sama tweeting about OpenAI's new release?"}],
    extra_body={
        "search_parameters": {
            "mode": "on",
            "sources": [
                {"type": "x", "x_handles": ["sama"]},
            ],
            "from_date": "2026-05-05",
            "max_search_results": 5,
        }
    },
)
```

Source types
| Type | Filters | What it queries |
|---|---|---|
| `web` | none | Public web search index |
| `news` | none | News article corpus |
| `x` | `x_handles[]` | X (Twitter) posts |
| `rss` | `links[]` | Specific RSS feeds you supply |
Modes
- `off`: pure model knowledge cutoff (default)
- `auto`: Grok decides whether to search based on the question
- `on`: always search (use for news/finance/sports/election queries)
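A small helper can pick the mode per request. This is a sketch under assumptions: the keyword heuristic and the `FRESH_TOPICS` list are illustrative, not part of the xAI API.

```python
# Hypothetical helper: choose a Live Search mode per query.
# The keyword heuristic below is an illustration, not an xAI feature.
FRESH_TOPICS = ("news", "price", "score", "election", "today", "latest")

def pick_mode(query: str) -> str:
    q = query.lower()
    if any(word in q for word in FRESH_TOPICS):
        return "on"    # always search for time-sensitive queries
    return "auto"      # otherwise, let Grok decide
```

The returned string drops straight into `search_parameters["mode"]`.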
Response surface
```python
resp.choices[0].message.content    # grounded answer
resp.choices[0].message.citations  # list of {url, title, source_type}
resp.usage.num_sources_used        # how many results actually informed the answer
```

FAQ
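The Quick Use section suggests rendering `message.citations` as footnotes. A minimal sketch, assuming each citation arrives as a dict with `url` and `title` keys as described above (adjust if your SDK version returns plain URL strings):

```python
def render_footnotes(content: str, citations: list[dict]) -> str:
    """Append numbered markdown-style footnotes for each citation."""
    lines = [content, ""]
    for i, c in enumerate(citations, start=1):
        lines.append(f"[{i}]: {c['url']} ({c.get('title', '')})")
    return "\n".join(lines)

answer = render_footnotes(
    "Acme led the round.",
    [{"url": "https://example.com/a", "title": "Funding news", "source_type": "news"}],
)
```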
Q: Cost of Live Search?
A: Per-search billing on top of token cost — typically a few cents per call depending on max_search_results. Check console.x.ai for current rates. Cheaper than running your own search + scraper + chunker.
Q: Does it replace Tavily / Exa / Perplexity API?
A: For Grok users, mostly yes — search + grounding in one call, fewer moving parts. Tavily/Exa are model-agnostic, so they remain useful when your stack is multi-model. Perplexity API competes head-on; Grok wins on long context, Perplexity wins on academic/citation depth.
Q: How do I cache results to save cost?
A: Hash the (query, source_filter, date_range) triple as a cache key, store the response with a TTL matching your freshness needs (5 min for finance, 1 hour for general news). xAI doesn't cache server-side.
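The caching scheme above can be sketched in a few lines. Assumptions: the in-memory dict and the `get_or_call` wrapper are illustrative; in production you would swap in Redis or similar, and `fetch` would be the actual Live Search call.

```python
import hashlib
import json
import time

# Illustrative in-memory cache: key -> (stored_at, response).
_CACHE: dict[str, tuple[float, object]] = {}

def cache_key(query: str, sources: list[dict], date_range: tuple) -> str:
    """Hash the (query, sources, date_range) triple into a stable key."""
    payload = json.dumps([query, sources, list(date_range)], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def get_or_call(query, sources, date_range, fetch, ttl=300.0):
    """Return a cached response if fresh, otherwise call fetch() and store it."""
    key = cache_key(query, sources, date_range)
    hit = _CACHE.get(key)
    if hit and time.time() - hit[0] < ttl:
        return hit[1]                    # cache hit within TTL
    resp = fetch()                       # e.g. the Live Search API call
    _CACHE[key] = (time.time(), resp)
    return resp
```

Use a 300-second TTL for finance queries and 3600 for general news, per the answer above.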
Source & Thanks
Built by xAI. Live Search docs at docs.x.ai/docs/guides/live-search.
Public SDK: xai-org