Knowledge · May 7, 2026 · 4 min read

Mistral Codestral — 22B Open-Weight Coding Model

Codestral is Mistral's coding-specialized model. 22B params, 32K context, 80+ languages including Python, JS, Go, Rust, Bash. Free for non-commercial use.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, an adapter-specific plan, and the raw content to help agents judge fit, risk, and next actions.

Installation: Stage only · 15/100
Agent surface: Any MCP/CLI agent
Type: Knowledge
Trust: New
Entry point: Asset

Universal CLI command:
npx tokrepo install 055b165e-d732-4e1d-a1b3-70be25393826
Quick Use

  1. Local: ollama pull codestral && ollama run codestral
  2. API: get a key at console.mistral.ai, then export MISTRAL_API_KEY=...
  3. Plug into Continue / Aider / LiteLLM with the snippets below

Introduction

Codestral is Mistral AI's coding-specialized model: 22B parameters, 32K context, trained on 80+ languages from Python and JavaScript to Bash and Fortran. The open-weight Codestral 22B is free for research and non-commercial use; the latest Codestral models are available via Mistral's API and Le Chat for production.

Best for: local code completion via Continue / Aider, latency-sensitive code generation.
Works with: Ollama, vLLM, llama.cpp, Continue, Aider, LiteLLM.
Setup time: 5 minutes (Ollama pull or API key).
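For self-hosting with vLLM (listed above alongside Ollama and llama.cpp), here is a minimal Python sketch. It assumes the Hugging Face repo id mistralai/Codestral-22B-v0.1 (gated behind license acceptance) and a GPU with enough memory for the 22B weights:

# vLLM serving sketch (assumes the mistralai/Codestral-22B-v0.1 repo id
# and a GPU large enough to hold the 22B weights)
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Codestral-22B-v0.1")
params = SamplingParams(temperature=0.2, max_tokens=256)

outputs = llm.generate(
    ["Write a Python function that parses an ISO 8601 date string."],
    params,
)
print(outputs[0].outputs[0].text)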


Run Codestral locally with Ollama

# Pull
ollama pull codestral

# Chat
ollama run codestral

# Use as completion server (port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "codestral",
  "prompt": "Write a Rust function that returns the Nth Fibonacci number using memoization."
}'
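The same local endpoint can be queried from Python. A small sketch using the requests library against Ollama's /api/generate route, with streaming disabled so the reply arrives as a single JSON object:

# Query the local Ollama server from Python (assumes Ollama is running on
# the default port 11434 and the codestral model has already been pulled)
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codestral",
        "prompt": "Write a Rust function that returns the Nth Fibonacci number using memoization.",
        "stream": False,  # return one JSON object instead of a stream of chunks
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])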

Use Codestral with Continue

// config.json for the Continue extension (typically ~/.continue/config.json)
{
  "models": [
    {
      "title": "Codestral (local)",
      "provider": "ollama",
      "model": "codestral",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral Autocomplete",
    "provider": "ollama",
    "model": "codestral"
  }
}

Use Codestral with Aider

# Via Mistral API (production)
export MISTRAL_API_KEY=your-key
aider --model mistral/codestral-latest

# Or via local Ollama
aider --model ollama/codestral
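The same MISTRAL_API_KEY also works for plain HTTP calls to the hosted API. A sketch using requests, assuming the api.mistral.ai/v1/chat/completions endpoint and the codestral-latest model alias (adjust if your account uses the dedicated Codestral endpoint):

# Direct call to Mistral's hosted API (endpoint and model alias assumed;
# requires MISTRAL_API_KEY in the environment)
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "codestral-latest",
        "messages": [
            {"role": "user", "content": "Explain what this regex matches: ^\\d{4}-\\d{2}-\\d{2}$"}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])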

Through LiteLLM

import os

from litellm import completion

response = completion(
    model="codestral/codestral-latest",
    messages=[{"role": "user", "content": "Refactor this Python function to use list comprehensions"}],
    api_key=os.environ["CODESTRAL_API_KEY"],
)
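LiteLLM can route the same request to the local Ollama instance instead of the hosted API. A sketch assuming Ollama is serving codestral on the default port:

# Same LiteLLM call routed to a local Ollama server instead of the hosted
# Codestral API (assumes Ollama is running on port 11434)
from litellm import completion

response = completion(
    model="ollama/codestral",
    messages=[{"role": "user", "content": "Refactor this Python function to use list comprehensions"}],
    api_base="http://localhost:11434",
)
print(response.choices[0].message.content)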

Codestral vs general Mistral models

Model         | Specialty          | Best for
Mistral Small | General-purpose    | Chat, summarization, RAG
Codestral     | Code-only          | Tab completion, refactor, code Q&A
Mixtral 8x7B  | Mixture-of-experts | High-throughput inference

FAQ

Q: Is Codestral free? A: The Codestral 22B open weights are free under the Mistral AI Non-Production License (research and non-commercial use). For production or commercial use, call Codestral through Mistral's paid API on La Plateforme (console.mistral.ai).

Q: How does Codestral compare to Claude / GPT for coding? A: On code-specific benchmarks (HumanEval, MBPP), Codestral is competitive with Claude 3.5 Sonnet. For broad reasoning combined with coding, Claude / GPT-4 still lead. Codestral's advantages are local deployment and lower latency.

Q: Can I fine-tune Codestral? A: Yes — Mistral provides LoRA fine-tuning recipes for Codestral. The open-weight version can be fine-tuned with any standard tool (axolotl, unsloth, trl). Mistral's hosted fine-tuning API is also available.
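As a companion to the fine-tuning answer above, here is a minimal LoRA adapter setup sketch using peft and transformers (not a full training loop; it assumes the mistralai/Codestral-22B-v0.1 repo id, access to the gated weights, and enough GPU memory to load them; the target module names follow Mistral-style attention projections):

# LoRA adapter setup sketch with peft + transformers (repo id and module
# names assumed; training data and Trainer wiring omitted)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "mistralai/Codestral-22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                # adapter rank
    lora_alpha=32,       # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable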


Source & Thanks

Built by Mistral AI. Open weights released under the Mistral AI Non-Production License.

Codestral on Hugging Face — ⭐ 1,500+
