Skills · May 8, 2026 · 4 min read

Grok Live Search Tool — Real-Time Web Grounding via API

Grok Live Search grounds output in fresh web/X/news inside one API call. Whitelist sources, set max results, get inline citations.

xAI
xAI · Community
Agent-ready

This asset can be read and installed directly by agents

TokRepo exposes a universal CLI command, an install contract, JSON metadata, an adapter-specific plan, and the raw content to help agents judge fit, risk, and next actions.

Stage only · 17/100
Agent surface
Any MCP/CLI agent
Type
Skill
Installation
Stage only
Trust
New
Entry point
Asset
Universal CLI command
npx tokrepo install 7f7bff2c-8bfd-490d-8802-5b2f14f49ac2
Introduction

Grok Live Search is a server-side tool built into xAI's API that grounds Grok's responses in fresh web, X (Twitter), and news results, with no external retrieval pipeline. You set mode=on/auto, choose source types, and Grok handles search + read + cite, returning inline citations plus a num_sources_used field. Best for: news Q&A, finance/sports/election apps, anywhere the answer must reflect today's reality. Works with: any OpenAI-compatible client (Python, JS, curl) hitting api.x.ai. Setup time: 2 minutes.


Curl example

curl https://api.x.ai/v1/chat/completions \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok-3",
    "messages": [{"role":"user","content":"Top 3 AI funding rounds this week, with amounts and lead investors"}],
    "search_parameters": {
      "mode": "on",
      "sources": [{"type":"web"},{"type":"news"},{"type":"x"}],
      "max_search_results": 10,
      "from_date": "2026-05-01",
      "to_date":   "2026-05-08"
    }
  }'

Python with date range + X handle filter

import os
from openai import OpenAI

# Any OpenAI-compatible client works; point it at api.x.ai
client = OpenAI(base_url="https://api.x.ai/v1", api_key=os.environ["XAI_API_KEY"])

resp = client.chat.completions.create(
    model="grok-3",
    messages=[{"role": "user", "content": "What is @sama tweeting about OpenAI's new release?"}],
    extra_body={
        "search_parameters": {
            "mode": "on",
            "sources": [
                {"type": "x", "x_handles": ["sama"]},
            ],
            "from_date": "2026-05-05",
            "max_search_results": 5,
        }
    },
)

Source types

Type   Filters       What it queries
web    none          Public web search index
news   none          News article corpus
x      x_handles[]   X (Twitter) posts
rss    links[]       Specific RSS feeds you supply
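Of the four source types, rss is the only one that requires a filter. A hedged sketch of what that search_parameters payload looks like (the feed URL is a placeholder, not a real endpoint):

```python
# Hypothetical search_parameters payload querying a specific RSS feed.
search_parameters = {
    "mode": "on",
    "sources": [
        # rss needs a filter: links[] lists the feeds to query
        {"type": "rss", "links": ["https://example.com/feed.xml"]},
    ],
    "max_search_results": 5,
}
```

Each source entry carries its own filter key: x takes x_handles, rss takes links; web and news take none.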

Modes

  • off — pure model knowledge cutoff (default)
  • auto — Grok decides whether to search based on the question
  • on — always search (use for news/finance/sports/election queries)

Response surface

resp.choices[0].message.content       # grounded answer
resp.choices[0].message.citations     # list of {url, title, source_type}
resp.usage.num_sources_used           # how many results actually informed the answer
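A minimal helper for turning the citations list into numbered footnotes, assuming each citation is a dict with url and title keys as the shape above suggests:

```python
def render_footnotes(citations):
    """Turn a citations list into numbered plain-text footnotes."""
    return "\n".join(
        f"[{i}] {c['title']}: {c['url']}"
        for i, c in enumerate(citations, start=1)
    )

# Hand-written citation in the {url, title, source_type} shape shown above
notes = render_footnotes([
    {"url": "https://x.ai", "title": "xAI", "source_type": "web"},
])
# notes == "[1] xAI: https://x.ai"
```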

FAQ

Q: Cost of Live Search? A: Per-search billing on top of token cost — typically a few cents per call depending on max_search_results. Check console.x.ai for current rates. Cheaper than running your own search + scraper + chunker.

Q: Does it replace Tavily / Exa / Perplexity API? A: For Grok users, mostly yes — search + grounding in one call, fewer moving parts. Tavily/Exa are model-agnostic so still useful when your stack is multi-model. Perplexity API competes head-on; Grok wins on long-context, Perplexity wins on academic/citation depth.

Q: How do I cache results to save cost? A: Hash the (query, source_filter, date_range) triple as a cache key, store the response with a TTL matching your freshness needs (5 min for finance, 1 hour for general news). xAI doesn't cache server-side.
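The caching recipe above can be sketched with an in-memory dict; the key function and TTL values are illustrative, not part of the xAI API:

```python
import hashlib
import json
import time

_cache = {}  # key -> (expires_at, response)

def cache_key(query, sources, date_range):
    """Stable hash over the (query, source filter, date range) triple."""
    blob = json.dumps([query, sources, list(date_range)], sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def put_cached(query, sources, date_range, response, ttl=300):
    # ttl=300 (5 min) suits finance; use ~3600 for general news
    _cache[cache_key(query, sources, date_range)] = (time.time() + ttl, response)

def get_cached(query, sources, date_range):
    entry = _cache.get(cache_key(query, sources, date_range))
    if entry and entry[0] > time.time():
        return entry[1]
    return None  # miss or expired
```

Swap the dict for Redis with EXPIRE if the cache must survive restarts or be shared across workers.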


Quick Use

  1. Add extra_body={'search_parameters': {'mode':'on','sources':[{'type':'web'}]}} to chat.completions.create
  2. Read message.citations to render footnotes
  3. Cache by (query, sources, date_range) hash


Source & Thanks

Built by xAI. Live Search docs at docs.x.ai/docs/guides/live-search.

Public SDK: xai-org

🙏
