Skills · May 8, 2026 · 4 min read

Grok Live Search Tool — Real-Time Web Grounding via API

Grok Live Search grounds output in fresh web, X, and news results inside one API call. Whitelist sources, set max results, get inline citations.

xAI · Community
Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and raw content so agents can evaluate compatibility, risk, and next steps.

Stage only · 17/100
Agent surface: Any MCP/CLI agent
Type: Skill
Install: Stage only
Trust: New
Input: Asset
Universal CLI command
npx tokrepo install 7f7bff2c-8bfd-490d-8802-5b2f14f49ac2
Introduction

Grok Live Search is a server-side tool built into xAI's API that grounds Grok's responses in fresh web, X (Twitter), and news results without an external retrieval pipeline. You set mode=on/auto, choose source types, and Grok handles search + read + cite, returning inline citations plus a num_sources_used field. Best for: news Q&A, finance/sports/election apps, anywhere the answer must reflect today's reality. Works with: any OpenAI-compatible client (Python, JS, curl) hitting api.x.ai. Setup time: 2 minutes.


Curl example

curl https://api.x.ai/v1/chat/completions \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok-3",
    "messages": [{"role":"user","content":"Top 3 AI funding rounds this week, with amounts and lead investors"}],
    "search_parameters": {
      "mode": "on",
      "sources": [{"type":"web"},{"type":"news"},{"type":"x"}],
      "max_search_results": 10,
      "from_date": "2026-05-01",
      "to_date":   "2026-05-08"
    }
  }'

Python with date range + X handle filter

import os

from openai import OpenAI

# xAI's API is OpenAI-compatible, so the standard OpenAI client works
# once pointed at api.x.ai.
client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

resp = client.chat.completions.create(
    model="grok-3",
    messages=[{"role": "user", "content": "What is @sama tweeting about OpenAI's new release?"}],
    extra_body={
        "search_parameters": {
            "mode": "on",
            "sources": [
                {"type": "x", "x_handles": ["sama"]},
            ],
            "from_date": "2026-05-05",
            "max_search_results": 5,
        }
    },
)

Source types

Type   Filters       What it queries
web    none          Public web search index
news   none          News article corpus
x      x_handles[]   X (Twitter) posts
rss    links[]       Specific RSS feeds you supply
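As a sketch of the rss source type, the request payload can be assembled like this (the helper name and the feed URL are illustrative, not part of xAI's API):

```python
# Build a search_parameters payload restricted to specific RSS feeds.
# The "rss" source type takes a links[] filter per the table above.
def rss_search_params(feed_urls, max_results=5):
    return {
        "mode": "on",
        "sources": [{"type": "rss", "links": list(feed_urls)}],
        "max_search_results": max_results,
    }

params = rss_search_params(["https://example.com/blog/feed.xml"])
```

Pass the resulting dict as search_parameters (or inside extra_body for the Python client) exactly as in the examples above.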

Modes

  • off — pure model knowledge cutoff (default)
  • auto — Grok decides whether to search based on the question
  • on — always search (use for news/finance/sports/election queries)
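One way to apply these modes is a small dispatcher that forces mode=on only for freshness-sensitive queries; the topic buckets below are my own assumption, not anything in the API:

```python
# Freshness-sensitive topics where a stale answer is unacceptable get
# mode="on"; everything else lets Grok decide via "auto".
FRESHNESS_TOPICS = {"news", "finance", "sports", "elections"}

def pick_mode(topic: str) -> str:
    return "on" if topic.lower() in FRESHNESS_TOPICS else "auto"
```

This keeps per-search billing down on queries the model can answer from its own knowledge.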

Response surface

resp.choices[0].message.content       # grounded answer
resp.choices[0].message.citations     # list of {url, title, source_type}
resp.usage.num_sources_used           # how many results actually informed the answer
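A minimal footnote renderer over that surface might look like this, assuming citations arrive as the {url, title, source_type} dicts described above:

```python
# Append numbered footnotes for each citation to the grounded answer.
def render_with_footnotes(answer, citations):
    if not citations:
        return answer
    notes = "\n".join(
        f"[{i}] {c['title']} ({c['url']})" for i, c in enumerate(citations, 1)
    )
    return f"{answer}\n\nSources:\n{notes}"
```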

FAQ

Q: Cost of Live Search? A: Per-search billing on top of token cost — typically a few cents per call depending on max_search_results. Check console.x.ai for current rates. Cheaper than running your own search + scraper + chunker.

Q: Does it replace Tavily / Exa / Perplexity API? A: For Grok users, mostly yes — search + grounding in one call, fewer moving parts. Tavily/Exa are model-agnostic so still useful when your stack is multi-model. Perplexity API competes head-on; Grok wins on long-context, Perplexity wins on academic/citation depth.

Q: How do I cache results to save cost? A: Hash the (query, source_filter, date_range) triple as a cache key, store the response with a TTL matching your freshness needs (5 min for finance, 1 hour for general news). xAI doesn't cache server-side.
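That caching advice can be sketched as a small in-process TTL cache; the class and method names here are my own, not an xAI API:

```python
import hashlib
import json
import time

class SearchCache:
    """In-process TTL cache keyed by a hash of (query, sources, date_range)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def _key(self, query, sources, date_range):
        # sort_keys keeps logically identical payloads on the same cache key
        blob = json.dumps([query, sources, date_range], sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get(self, query, sources, date_range):
        hit = self._store.get(self._key(query, sources, date_range))
        if hit and time.time() - hit[0] < self.ttl:
            return hit[1]
        return None

    def put(self, query, sources, date_range, response):
        self._store[self._key(query, sources, date_range)] = (time.time(), response)
```

Check the cache before calling the API and put the response afterwards; pick the TTL per domain (e.g. 300 s for finance, 3600 s for general news).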


Quick Use

  1. Add extra_body={'search_parameters': {'mode':'on','sources':[{'type':'web'}]}} to chat.completions.create
  2. Read message.citations to render footnotes
  3. Cache by (query, sources, date_range) hash



Source & Thanks

Built by xAI. Live Search docs at docs.x.ai/docs/guides/live-search.

Public SDK: xai-org

🙏
