Key Features
- Up to 4x faster: the CTranslate2 engine dramatically reduces transcription time
- Less memory: lower VRAM usage than the original OpenAI Whisper implementation
- 8-bit quantization: Further reduce memory with minimal accuracy loss
- Word-level timestamps: Precise timing for each word in the transcript
- VAD filtering: Skip silent sections using Silero VAD
- Batched transcription: Process multiple audio segments in parallel
- Distil-Whisper support: Compatible with smaller, faster distilled models
FAQ
Q: What is Faster Whisper?
A: Faster Whisper is a CTranslate2 reimplementation of OpenAI Whisper with 21.8K+ stars. It runs up to 4x faster with less memory, and supports 8-bit quantization, word-level timestamps, and VAD filtering. MIT licensed.
Q: How do I install Faster Whisper?
A: Run `pip install faster-whisper`. Then load a model in Python with `WhisperModel("large-v3")` and call its `transcribe` method on an audio file, on either GPU or CPU.