# Faster Whisper — 4x Faster Speech-to-Text

> Faster Whisper is a reimplementation of OpenAI Whisper using CTranslate2, up to 4x faster with less memory. 21.8K+ GitHub stars. GPU/CPU, 8-bit quantization, word timestamps, VAD. MIT licensed.

## Install

```bash
pip install faster-whisper
```

## Quick Use

```bash
# Transcribe audio
python -c "
from faster_whisper import WhisperModel

model = WhisperModel('large-v3', device='cuda', compute_type='float16')
segments, info = model.transcribe('audio.mp3')
print(f'Language: {info.language} (prob {info.language_probability:.2f})')
for segment in segments:
    print(f'[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}')
"
```

---

## Intro

Faster Whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, achieving up to 4x faster transcription with comparable accuracy and lower memory usage. With 21,800+ GitHub stars and an MIT license, it supports GPU and CPU execution, 8-bit quantization for efficiency, batched transcription, word-level timestamps, and Voice Activity Detection (VAD) filtering. On a 13-minute audio sample with the Large-v2 model, Faster Whisper completes in 1m03s versus OpenAI Whisper's 2m23s.
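For machines without a GPU, the same API runs on CPU with 8-bit quantization. A minimal sketch, assuming a local `audio.mp3`; the `pick_compute_type` helper is illustrative, not part of the library, and the transcription step is skipped if `faster-whisper` isn't installed:

```python
import importlib.util


def pick_compute_type(device: str) -> str:
    """Illustrative helper: a reasonable CTranslate2 compute type per device."""
    return "float16" if device == "cuda" else "int8"


# Only attempt transcription when faster-whisper is actually installed.
if importlib.util.find_spec("faster_whisper") is not None:
    from faster_whisper import WhisperModel

    # int8 quantization keeps memory low on CPU with minimal accuracy loss.
    model = WhisperModel("large-v3", device="cpu", compute_type=pick_compute_type("cpu"))
    segments, info = model.transcribe("audio.mp3", vad_filter=True, word_timestamps=True)
    for segment in segments:
        for word in segment.words:
            print(f"{word.start:.2f}-{word.end:.2f}: {word.word}")
```

`vad_filter=True` drops silent stretches before decoding, and `word_timestamps=True` attaches per-word timing to each segment.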
**Best for**: Developers needing fast, accurate speech-to-text transcription for audio/video processing

**Works with**: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf

**Models**: All Whisper models + Distil-Whisper variants

---

## Key Features

- **4x faster**: CTranslate2 engine dramatically reduces transcription time
- **Less memory**: Lower VRAM usage than the original Whisper implementation
- **8-bit quantization**: Further reduce memory with minimal accuracy loss
- **Word-level timestamps**: Precise timing for each word in the transcript
- **VAD filtering**: Skip silent sections using Silero VAD
- **Batched transcription**: Process multiple audio segments in parallel
- **Distil-Whisper support**: Compatible with smaller, faster distilled models

---

## FAQ

**Q: What is Faster Whisper?**
A: Faster Whisper is a CTranslate2 reimplementation of OpenAI Whisper with 21.8K+ stars. It runs up to 4x faster with less memory, and supports 8-bit quantization, word timestamps, and VAD filtering. MIT licensed.

**Q: How do I install Faster Whisper?**
A: Run `pip install faster-whisper`. Then use `WhisperModel('large-v3')` in Python to transcribe audio files on GPU or CPU.

---

## Source & Thanks

> Created by [SYSTRAN](https://github.com/SYSTRAN). Licensed under MIT.
> [SYSTRAN/faster-whisper](https://github.com/SYSTRAN/faster-whisper) — 21,800+ GitHub stars

---

Source: https://tokrepo.com/en/workflows/24576b2c-a9d1-4f7a-9696-b1e5c50a17f3
Author: Script Depot
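The word-level timestamps mentioned in Key Features map naturally onto subtitle formats. A minimal sketch of converting segment times into SRT cues; the `(start, end, text)` tuples below are hand-written stand-ins for real `model.transcribe` output, and `srt_timestamp`/`to_srt` are illustrative helpers, not library functions:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def to_srt(segments) -> str:
    """Render an iterable of (start, end, text) tuples as an SRT document."""
    cues = []
    for i, (start, end, text) in enumerate(segments, start=1):
        cues.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text.strip()}\n")
    return "\n".join(cues)


# Stand-in segments shaped like Faster Whisper's (segment.start, segment.end, segment.text):
print(to_srt([(0.0, 2.5, " Hello world."), (2.5, 5.0, " This is a test.")]))
```

Feeding real transcription output through `to_srt` is a one-liner: `to_srt((s.start, s.end, s.text) for s in segments)`.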