Knowledge · May 8, 2026 · 4 min read

Cohere Command R — Long-Context Tool-Use Model for Agents

Command R+ is Cohere's flagship LLM. 128K context, native tool use, RAG-tuned, multilingual. Cheaper than Claude Sonnet, comparable on tool-use benchmarks.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and raw content so agents can evaluate compatibility, risk, and next steps.

Stage only · 15/100
Agent surface
Any MCP/CLI agent
Type
Knowledge
Installation
Stage only
Trust
New
Entry
Asset
Universal CLI command
npx tokrepo install cbb6a0ef-3d99-4941-be1c-3baba50c3ebb
Introduction

Command R+ is Cohere's flagship enterprise LLM — 128K context, native tool use, multilingual (10+ languages with strong fluency), and specifically tuned for RAG and agent workloads. Pricing slots between GPT-4o-mini and Claude Sonnet, with comparable benchmark performance on tool use and multi-step reasoning. Best for: enterprise agents, multilingual customer-facing AI, RAG pipelines where citations matter. Works with: Cohere API, AWS Bedrock, Azure, Oracle Cloud. Setup time: 2 minutes.


Hello, Command R+

import os

import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

response = co.chat(
    model="command-r-plus-08-2024",
    message="Compare LFP and NMC battery chemistries.",
    temperature=0.3,
)

print(response.text)

Native tool use

tools = [{
    "name": "get_weather",
    "description": "Get current weather",
    "parameter_definitions": {
        "city": {"description": "City name", "type": "str", "required": True},
    },
}]

response = co.chat(
    model="command-r-plus-08-2024",
    message="What's the weather in Tokyo and Berlin? Compare.",
    tools=tools,
)

# Loop until the model stops requesting tool calls
while response.tool_calls:
    tool_results = []
    for tc in response.tool_calls:
        result = call_my_tool(tc.name, tc.parameters)  # your own tool dispatcher
        tool_results.append({"call": tc, "outputs": [{"result": result}]})

    response = co.chat(
        model="command-r-plus-08-2024",
        message="",
        chat_history=response.chat_history,  # carry the conversation forward
        tools=tools,
        tool_results=tool_results,
    )

print(response.text)
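The loop above assumes a call_my_tool helper that routes the model's requested call to a real function. A minimal sketch of such a dispatcher (the registry and the weather stub are hypothetical, not part of Cohere's SDK):

```python
# Hypothetical dispatcher for the tool-use loop above.
def get_weather(city: str) -> dict:
    # Stub result; a real tool would call a weather API here.
    return {"city": city, "temp_c": 18, "conditions": "clear"}

# Map tool names (as declared in `tools`) to Python functions.
TOOL_REGISTRY = {"get_weather": get_weather}

def call_my_tool(name: str, parameters: dict) -> dict:
    # Look up the function registered under the tool name and
    # invoke it with the model-supplied parameters.
    return TOOL_REGISTRY[name](**parameters)

print(call_my_tool("get_weather", {"city": "Tokyo"}))
```

Keeping the registry explicit means an unknown tool name fails loudly with a KeyError instead of silently doing nothing.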

Built-in RAG mode

documents = [
    {"title": "Doc 1", "snippet": "PyTorch is..."},
    {"title": "Doc 2", "snippet": "TensorFlow is..."},
]

response = co.chat(
    model="command-r-plus-08-2024",
    message="Compare PyTorch and TensorFlow",
    documents=documents,
)

print(response.text)
# The output cites documents by ID; inspect response.citations
for c in response.citations:
    print(f"{c.text} -> {c.document_ids}")
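Each citation pairs a span of the answer text with the document IDs that support it. A small sketch of grouping spans per document, useful for rendering per-source highlights (plain dicts stand in for the SDK's citation objects):

```python
# Group citation spans by the document that supports them.
# Plain dicts stand in for the SDK's citation objects.
def citations_by_doc(citations):
    grouped = {}
    for c in citations:
        for doc_id in c["document_ids"]:
            grouped.setdefault(doc_id, []).append(c["text"])
    return grouped

# Hypothetical citations for the PyTorch/TensorFlow question above.
cites = [
    {"text": "PyTorch uses dynamic graphs", "document_ids": ["doc_0"]},
    {"text": "TensorFlow offers XLA", "document_ids": ["doc_1"]},
    {"text": "both support GPUs", "document_ids": ["doc_0", "doc_1"]},
]
print(citations_by_doc(cites))
```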

Pricing snapshot (vs alternatives)

Model               Input $/1M tok   Output $/1M tok
Claude 3.5 Sonnet   $3.00            $15.00
Command R+          $2.50            $10.00
GPT-4o              $2.50            $10.00
Command R           $0.50            $1.50
GPT-4o-mini         $0.15            $0.60
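The table translates directly into a back-of-envelope cost estimate. A quick sketch (prices copied from the table above; the monthly token counts are made up for illustration):

```python
# $ per 1M tokens as (input, output), copied from the pricing table.
PRICES = {
    "claude-3.5-sonnet": (3.00, 15.00),
    "command-r-plus": (2.50, 10.00),
    "command-r": (0.50, 1.50),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    # Linear pay-per-token pricing: tokens * rate / 1M.
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical month: 10M input tokens, 2M output tokens.
print(cost_usd("command-r-plus", 10_000_000, 2_000_000))     # 45.0
print(cost_usd("claude-3.5-sonnet", 10_000_000, 2_000_000))  # 60.0
```

At that volume the gap is $15/month; output-heavy workloads widen it, since the output rates differ more than the input rates.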

FAQ

Q: Is Command R free? A: Cohere offers free trial credits on signup. After that it's pay-per-token via cohere.com, or through AWS Bedrock / Azure with their billing. The free tier is suitable for prototyping; production use needs a paid plan.

Q: How does Command R+ compare to Claude Sonnet? A: On English benchmarks Sonnet leads slightly. Command R+ is competitive on tool use and multilingual tasks at a lower price. For enterprise, multilingual, or RAG-heavy use cases, Command R+ often offers better quality per dollar.

Q: Does Command R support function calling like OpenAI? A: Yes, native tool use is first-class. The schema is similar but uses parameter_definitions instead of parameters. Cohere's SDK handles the format; if you call the raw API directly, account for the difference.
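To make the format difference concrete, here is a hedged sketch that converts an OpenAI-style function schema into Cohere's parameter_definitions shape. The JSON-Schema-to-Cohere type mapping is an assumption based on the examples above, not an official converter:

```python
# Hypothetical converter: OpenAI function schema -> Cohere tool schema.
# The type mapping below is an assumption, not an official table.
TYPE_MAP = {"string": "str", "integer": "int", "number": "float", "boolean": "bool"}

def openai_to_cohere(fn: dict) -> dict:
    props = fn["parameters"]["properties"]
    required = set(fn["parameters"].get("required", []))
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "parameter_definitions": {
            name: {
                "description": spec.get("description", ""),
                "type": TYPE_MAP.get(spec.get("type", "string"), "str"),
                "required": name in required,
            }
            for name, spec in props.items()
        },
    }

# The get_weather tool from earlier, written OpenAI-style.
openai_tool = {
    "name": "get_weather",
    "description": "Get current weather",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
}
print(openai_to_cohere(openai_tool))
```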


Quick Use

  1. Sign up at dashboard.cohere.com → copy API key
  2. pip install cohere (or npm install cohere-ai)
  3. co.chat(model='command-r-plus-08-2024', message='...') — add tools= and documents= for tool use / RAG

Source & Thanks

Built by Cohere. Commercial product with free trial.

docs.cohere.com — Command R+ documentation

🙏

