Apr 14, 2026 · 3 min read

LibreTranslate — Self-Hosted Translation API with No Rate Limits

LibreTranslate is a self-hostable translation API powered by open-source Argos Translate models. No API keys, no rate limits, no data sent to third parties — a drop-in replacement for Google Translate when privacy matters.

Introduction

LibreTranslate is a self-hostable translation API built on top of Argos Translate, which uses OpenNMT-style transformer models trained on open datasets. It gives you a Google-Translate-compatible HTTP API that runs on your own server — no external calls, no usage caps, no data sent to anyone.

With over 14,000 GitHub stars, LibreTranslate is used by privacy-focused projects, research teams, and developers building translations into self-hosted apps (FreshRSS, Discourse, NextcloudTranslate, Joplin plugins, etc.).

What LibreTranslate Does

LibreTranslate loads Argos Translate models (one per language pair) and exposes REST endpoints: /translate, /languages, /detect. The web UI gives a Google-Translate-like interface. Quality varies by language pair (EN-ES, EN-FR, EN-DE are strong; low-resource languages are weaker than DeepL/Google).
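As a sketch of the request bodies those endpoints expect (the helper names here are my own, not part of LibreTranslate):

```python
# Hypothetical helpers that build JSON bodies for LibreTranslate's
# POST /translate and POST /detect endpoints.

def translate_payload(q, source="auto", target="en", api_key=None):
    """Body for POST /translate; q may be a string or a list of strings."""
    payload = {"q": q, "source": source, "target": target}
    if api_key is not None:
        payload["api_key"] = api_key
    return payload

def detect_payload(q, api_key=None):
    """Body for POST /detect; the server replies with candidate languages."""
    payload = {"q": q}
    if api_key is not None:
        payload["api_key"] = api_key
    return payload
```

GET /languages takes no body and returns the installed language pairs, which is a convenient health check for a fresh deployment.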

Architecture Overview

[LibreTranslate (Flask)]
      |
[REST API]
   POST /translate, GET /languages, POST /detect
      |
[Argos Translate engine]
   one model per language pair (e.g. en-fr)
   ~100-400 MB per model
      |
[Optional]
   API keys (rate limiting per key)
   Suggestions: crowd-source corrections
   CUDA acceleration (bigger models, faster)

Self-Hosting & Configuration

# Environment flags
docker run -d --name libretranslate \
  -p 5000:5000 \
  -e LT_LOAD_ONLY=en,fr,es,de,ja,zh \
  -e LT_CHAR_LIMIT=5000 \
  -e LT_REQ_LIMIT=100 \
  -e LT_API_KEYS=true \
  -v libretranslate_db:/app/db \
  -v libretranslate_models:/home/libretranslate/.local \
  libretranslate/libretranslate:latest --api-keys

# Pre-generate API keys
docker exec libretranslate ltmanage keys add 100000        # create a key with a per-key rate limit (see ltmanage --help for units)

# Use the key in requests
curl -X POST http://localhost:5000/translate \
  -H "Content-Type: application/json" \
  -d '{"q":"Hello","source":"en","target":"zh","api_key":"YOUR_KEY"}'
# Python client
import requests

res = requests.post("http://localhost:5000/translate", json={
    "q": "The quick brown fox jumps over the lazy dog.",
    "source": "en",
    "target": "ja",
})
res.raise_for_status()
print(res.json()["translatedText"])

# Batch translation — array input returns a list, one translation per input
res = requests.post("http://localhost:5000/translate", json={
    "q": ["Hello", "world", "How are you?"],
    "source": "en",
    "target": "fr",
})
print(res.json()["translatedText"])

Key Features

  • Self-hosted — no external API, no data sent to third parties
  • Google-Translate-like API — compatible with most apps expecting a translation endpoint
  • 40+ languages — major world languages plus low-resource additions
  • API keys + rate limits — per-key limits for multi-tenant deployments
  • Web UI — drop-in replacement for Google Translate's homepage
  • Batch translation — array input for bulk jobs
  • CUDA support — GPU acceleration for faster translation
  • File translation — upload .txt, .docx, .pdf, .html and get translated output

Comparison with Similar Tools

Feature       | LibreTranslate     | Google Translate API | DeepL API              | NLLB-200 direct        | Mozilla Firefox Translator
Self-host     | Yes                | No                   | No                     | Yes (custom)           | On-device
Data privacy  | Best               | Limited              | Good (EU)              | Best                   | Best
Quality       | Good (major pairs) | Best                 | Best (supported pairs) | Very good              | Good
Cost          | Free               | $20/M chars          | $25/M chars            | Free (runs on your GPU)| Free
Languages     | 40+                | 130+                 | 30+                    | 200                    | 30+
Best for      | Self-hosted apps   | Maximum quality      | European languages     | Research / custom      | Browser-only

FAQ

Q: LibreTranslate vs DeepL quality? A: DeepL is notably better for European language pairs (DE, FR, ES, IT, etc.). LibreTranslate is close enough for non-critical translations, and in privacy-sensitive scenarios where data cannot leave your infrastructure it is far better than no translation at all.

Q: What hardware do I need? A: CPU-only is fine for personal use. For heavier workloads: a GPU with 4–8GB VRAM (or CPU with 8+ cores) handles thousands of translations per minute.

Q: Can I train a custom model? A: Yes via the Argos Translate training pipeline. It's non-trivial (dataset prep, SentencePiece tokenizer training, OpenNMT training) but doable on a modest GPU.

Q: Does it support file uploads? A: Yes — /translate_file endpoint. Supported formats: .txt, .docx, .pdf, .odt, .html, .xml. Output matches input format.
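A sketch of a file-translation client built on the answer above. The multipart field names (file, source, target) and the translatedFileUrl response key are assumptions based on LibreTranslate's documented /translate_file flow; verify them against your instance's interactive /docs page:

```python
import os

# Formats the FAQ above lists as supported.
SUPPORTED_EXTS = {".txt", ".docx", ".pdf", ".odt", ".html", ".xml"}

def check_supported(path):
    """Return the file's lowercased extension if /translate_file can handle it."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTS:
        raise ValueError(f"unsupported format: {ext or '(none)'}")
    return ext

def translate_file(path, source, target, base_url="http://localhost:5000"):
    """Upload a document to /translate_file and return the download URL.

    Field names and the response key are assumptions -- check /docs on
    your instance. requests is imported lazily so the pure helper above
    works without it.
    """
    import requests  # third-party: pip install requests

    check_supported(path)
    with open(path, "rb") as fh:
        res = requests.post(
            f"{base_url}/translate_file",
            data={"source": source, "target": target},
            files={"file": fh},
        )
    res.raise_for_status()
    return res.json()["translatedFileUrl"]
```

Validating the extension client-side avoids uploading a large file only to get a format error back from the server.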
