Scripts · Mar 29, 2026 · 2 min read

Diffusers — Universal Video & Image Generation Hub

Hugging Face's diffusion model library. Run CogVideoX, AnimateDiff, Stable Video Diffusion, and 50+ video/image models with a unified API. 33,200+ stars.

Introduction

Hugging Face Diffusers is the universal hub for running diffusion models — including 50+ video generation models (CogVideoX, AnimateDiff, Stable Video Diffusion, Wan Video, LTX-Video, and more) through a single unified API. Pre-trained models download automatically from Hugging Face Hub. 33,200+ GitHub stars.

Best for: AI researchers, developers building video generation features, anyone wanting to run state-of-the-art video models locally.
Works with: Python, PyTorch, NVIDIA GPUs (CUDA).

Part of the Video AI Toolkit collection.


Supported Video Models

| Model | Type | Quality |
| --- | --- | --- |
| CogVideoX | Text-to-video | High |
| AnimateDiff | Image-to-video | High |
| Stable Video Diffusion | Image-to-video | High |
| Wan Video | Text-to-video | High |
| LTX-Video | Text+audio-to-video | High |
| ModelScope | Text-to-video | Good |

Image Generation Models

Also supports 100+ image models: Stable Diffusion, SDXL, Flux, Kandinsky, PixArt, and more.

FAQ

Q: What is Diffusers?
A: Hugging Face's Python library for running diffusion models with a unified API. Supports 50+ video generation models and 100+ image models. 33,200+ stars.

Q: Is Diffusers free?
A: Yes. Apache 2.0 licensed. Models are freely downloadable from Hugging Face Hub. Requires a GPU for practical use.



Source and acknowledgments

Created by Hugging Face. Licensed under Apache 2.0.
diffusers — ⭐ 33,200+
Docs: huggingface.co/docs/diffusers
