Quick Use
- Local: ollama pull codestral && ollama run codestral
- API: get a key at console.mistral.ai, then export MISTRAL_API_KEY=...
- Plug into Continue / Aider / LiteLLM with the snippets below
Intro
Codestral is Mistral AI's coding-specialized model: 22B parameters, a 32K-token context window, trained on 80+ programming languages from Python and JavaScript to Bash and Fortran. The open-weight Codestral 22B is free for research and non-commercial use; the latest Codestral models are available via Mistral's API and Le Chat for production.
Best for: local code completion via Continue / Aider, latency-sensitive code generation.
Works with: Ollama, vLLM, llama.cpp, Continue, Aider, LiteLLM.
Setup time: ~5 minutes (Ollama pull or API key).
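On the API route, Codestral is served through Mistral's OpenAI-style chat completions endpoint, so a plain HTTP call is enough to try it. A minimal sketch using requests, assuming MISTRAL_API_KEY is set as above; the prompt is purely illustrative.

# Minimal chat completion against Mistral's API (model: codestral-latest).
import os

import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "codestral-latest",
        "messages": [{"role": "user", "content": "Write a binary search in Python."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])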
Run Codestral locally with Ollama
# Pull
ollama pull codestral
# Chat
ollama run codestral
# Use as completion server (port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "codestral",
  "prompt": "Write a Rust function that returns the Nth Fibonacci number using memoization."
}'
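The same endpoint is straightforward to call from Python. A minimal sketch, assuming Ollama is running on its default port; setting "stream" to false asks the server to return one JSON object instead of streamed chunks.

# Query the local Ollama server and print the generated completion.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codestral",
        "prompt": "Write a Rust function that returns the Nth Fibonacci number using memoization.",
        "stream": False,  # return a single JSON object instead of streamed chunks
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])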
Use Codestral with Continue

// config.json in your Continue extension
{
  "models": [
    {
      "title": "Codestral (local)",
      "provider": "ollama",
      "model": "codestral",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral Autocomplete",
    "provider": "ollama",
    "model": "codestral"
  }
}

Use Codestral with Aider
# Via Mistral API (production)
export MISTRAL_API_KEY=your-key
aider --model mistral/codestral-latest
# Or via local Ollama
aider --model ollama/codestral
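Aider can also be driven from Python through its scripting interface. A minimal sketch, assuming the local Ollama setup above; the target file hello.py is hypothetical, and the full API is covered in aider's scripting docs.

import os

from aider.coders import Coder
from aider.models import Model

# Point aider's backend at the local Ollama server (its default port).
os.environ.setdefault("OLLAMA_API_BASE", "http://127.0.0.1:11434")

model = Model("ollama/codestral")
coder = Coder.create(main_model=model, fnames=["hello.py"])  # hello.py is hypothetical
coder.run("Add type hints and a docstring to every function.")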
Through LiteLLM

import os

from litellm import completion

response = completion(
    model="codestral/codestral-latest",
    messages=[{"role": "user", "content": "Refactor this Python function to use list comprehensions"}],
    api_key=os.environ["CODESTRAL_API_KEY"],
)
print(response.choices[0].message.content)  # LiteLLM returns an OpenAI-style response
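Codestral also supports fill-in-the-middle (FIM) completion, which LiteLLM exposes as text_completion with a prompt and a suffix. A sketch under the same CODESTRAL_API_KEY assumption; the model string follows LiteLLM's Codestral provider naming.

import os

from litellm import text_completion

# The model completes the code between prompt and suffix (here: the function body).
response = text_completion(
    model="text-completion-codestral/codestral-latest",
    prompt="def fib(n):\n",
    suffix="\nprint(fib(10))\n",
    api_key=os.environ["CODESTRAL_API_KEY"],
)
print(response.choices[0].text)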
Codestral vs general Mistral models

| Model | Specialty | Best for |
|---|---|---|
| Mistral Small | General-purpose | Chat, summarization, RAG |
| Codestral | Code-only | Tab completion, refactor, code Q&A |
| Mixtral 8x7B | Mixture-of-experts | High-throughput inference |
FAQ
Q: Is Codestral free? A: The Codestral 22B open weights are free under the Mistral AI Non-Production License (research and non-commercial use). For production or commercial use, call Codestral via Mistral's paid API on La Plateforme (console.mistral.ai).
Q: How does Codestral compare to Claude / GPT for coding? A: On code-specific benchmarks (HumanEval, MBPP), Codestral is competitive with Claude 3.5 Sonnet. For broad reasoning combined with coding, Claude / GPT-4 still lead. Codestral's advantages are local deployment and lower latency.
Q: Can I fine-tune Codestral? A: Yes. Mistral provides LoRA fine-tuning recipes for Codestral, the open-weight version can be fine-tuned with any standard tool (axolotl, unsloth, TRL), and Mistral's hosted fine-tuning API is also available; a minimal peft sketch follows.
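A minimal LoRA setup for the open weights with Hugging Face transformers + peft might look like the sketch below; the rank, alpha, and target modules are illustrative assumptions rather than Mistral's official recipe, and real training would add a dataset and a Trainer loop.

# Attach LoRA adapters to the open-weight Codestral checkpoint.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Codestral-22B-v0.1", device_map="auto"
)
lora = LoraConfig(
    r=16,                       # illustrative rank
    lora_alpha=32,              # illustrative scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable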
Source & Thanks
Built by Mistral AI. Open weights released under the Mistral AI Non-Production License.
Codestral on Hugging Face — ⭐ 1,500+