Moshi — Real-Time AI Voice Conversation Engine
Open-source real-time voice AI by Kyutai. Full-duplex speech conversation with 200ms latency, emotion recognition, and on-device processing. Apache 2.0 licensed.
What it is
Moshi is an open-source real-time voice AI system built by Kyutai. It supports full-duplex speech conversation, meaning both parties can speak and listen simultaneously, with approximately 200ms latency. It includes emotion recognition and runs on-device without requiring cloud APIs.
Moshi targets developers building voice interfaces, conversational assistants, and accessibility tools who need low-latency, privacy-preserving voice interaction.
How it saves time or tokens
Moshi processes speech on-device, eliminating the round-trip latency of cloud speech APIs. The full-duplex architecture also removes the turn-taking overhead typical of voice assistants. Estimated token usage for this workflow is around 3,900 tokens.
How to use
- Install and start the server:
pip install moshi
python -m moshi.server
- Open http://localhost:8998 in your browser.
- Start speaking. Moshi responds in real time with full-duplex audio.
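Before opening the browser, you can confirm the server is actually listening. The snippet below is a minimal sketch using only the Python standard library; port 8998 is the default used in this guide, so adjust it if you changed the server's port.

```python
import socket

def server_up(host: str = "localhost", port: int = 8998, timeout: float = 1.0) -> bool:
    """Return True if a TCP server accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    status = "up" if server_up() else "not reachable"
    print(f"Moshi server is {status} on localhost:8998")
```

If the server is not reachable, check the terminal where you ran `python -m moshi.server` for startup errors before troubleshooting the browser side.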
Example
# Install Moshi
pip install moshi
# Start the voice server
python -m moshi.server
# Open browser to http://localhost:8998
# Speak naturally — Moshi responds with ~200ms latency
The browser interface handles audio capture and playback. No additional microphone setup is needed beyond browser permissions.
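If you want to sanity-check the ~200ms figure on your own hardware, you can start by timing a round trip to the local server. This is an illustrative sketch, not part of Moshi's API; a TCP connect time is only a lower bound, since real end-to-end latency also includes audio capture, inference, and playback.

```python
import socket
import time

def connect_latency_ms(host: str = "localhost", port: int = 8998, timeout: float = 2.0):
    """Time a single TCP connect; returns milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return None
    return (time.perf_counter() - start) * 1000.0
```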
Related on TokRepo
- AI Tools for Voice — More voice AI tools and engines
- Local LLM Providers — Run AI models locally alongside Moshi
Key considerations
When evaluating Moshi for your workflow, consider the following factors:
- Confirm your team has the technical prerequisites to adopt the tool effectively.
- Weigh the maintenance burden against the productivity gains.
- Check community activity and documentation quality to ensure long-term viability.
- Remember that integration with your existing toolchain matters more than feature count alone.
- Start with a small pilot project before rolling out across the organization.
- Monitor resource usage during the initial adoption phase to identify bottlenecks early.
- Document your configuration decisions so team members can onboard independently.
Common pitfalls
- On-device processing requires adequate GPU or CPU resources; low-end machines may experience higher latency.
- Browser audio permissions must be granted for the microphone to work.
- Emotion recognition accuracy varies by language and accent; primary testing has been on English speech.
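The GPU pitfall above can be checked up front. A minimal sketch, assuming PyTorch is present (as it is in typical Moshi installs); the check degrades gracefully if it is not:

```python
def cuda_available() -> bool:
    """Report whether a CUDA-capable GPU is visible; False when torch is absent."""
    try:
        import torch  # typically installed as a Moshi dependency
    except ImportError:
        return False
    return torch.cuda.is_available()

if __name__ == "__main__":
    if cuda_available():
        print("CUDA GPU detected: latency should stay near the 200ms target")
    else:
        print("No CUDA GPU: expect higher latency with CPU-only inference")
```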
Frequently Asked Questions
What does full-duplex mean?
Full-duplex means both the user and Moshi can speak at the same time without interrupting each other. Traditional voice assistants are half-duplex: you must wait for the assistant to finish before speaking.
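The difference can be pictured as two concurrent audio streams. A toy sketch using plain Python threads (nothing Moshi-specific): in full-duplex, the "speak" and "listen" directions run at the same time instead of alternating.

```python
import threading
import time

events = []

def stream(name: str, n: int = 3, delay: float = 0.01) -> None:
    # Each direction produces audio "frames" independently of the other.
    for i in range(n):
        events.append((name, i))
        time.sleep(delay)

# Full-duplex: both directions run concurrently.
speak = threading.Thread(target=stream, args=("speak",))
listen = threading.Thread(target=stream, args=("listen",))
speak.start(); listen.start()
speak.join(); listen.join()

# Half-duplex would instead run stream("speak") to completion,
# then stream("listen"): strict turn-taking.
print(events)
```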
Do I need a GPU?
A GPU accelerates inference and helps maintain the 200ms latency target. CPU-only inference works, but with higher latency. For production use, a CUDA-capable GPU is recommended.
Does Moshi work offline?
Yes. Moshi processes speech on-device and does not require internet connectivity after installation, which makes it suitable for privacy-sensitive deployments.
Which languages does Moshi support?
Moshi's primary language is English. Additional language support depends on the available training data and model weights; check the repository for the latest supported languages.
Can I use Moshi commercially, and is it production-ready?
Moshi is released under Apache 2.0, so commercial use is permitted. Production readiness depends on your latency and accuracy requirements; test with your specific use case before deploying.
Citations (3)
- Moshi GitHub — Full-duplex speech with 200ms latency
- Moshi README — Apache 2.0 licensed open-source voice AI
- Kyutai Official Site — On-device processing without cloud APIs
Source & Thanks
Created by Kyutai. Licensed under Apache 2.0.
kyutai-labs/moshi — 8k+ stars