Scripts · March 29, 2026 · 1 min read

Diffusers — Universal Video & Image Generation Hub

Hugging Face's diffusion model library. Run CogVideoX, AnimateDiff, Stable Video Diffusion, and 50+ video/image models with a unified API. 33,200+ stars.

Introduction

Hugging Face Diffusers is the universal hub for running diffusion models — including 50+ video generation models (CogVideoX, AnimateDiff, Stable Video Diffusion, Wan Video, LTX-Video, and more) through a single unified API. Pre-trained models download automatically from Hugging Face Hub. 33,200+ GitHub stars.

Best for: AI researchers, developers building video generation features, and anyone who wants to run state-of-the-art video models locally. Works with: Python, PyTorch, NVIDIA GPUs (CUDA).

Part of the Video AI Toolkit collection.


Supported Video Models

| Model | Type | Quality |
| --- | --- | --- |
| CogVideoX | Text-to-video | High |
| AnimateDiff | Text-to-video (motion module for T2I models) | High |
| Stable Video Diffusion | Image-to-video | High |
| Wan Video | Text-to-video | High |
| LTX-Video | Text-to-video / Image-to-video | High |
| ModelScope | Text-to-video | Good |

Image Generation Models

Also supports 100+ image models: Stable Diffusion, SDXL, Flux, Kandinsky, PixArt, and more.

FAQ

Q: What is Diffusers? A: Hugging Face's Python library for running diffusion models with a unified API. Supports 50+ video generation models and 100+ image models. 33,200+ stars.

Q: Is Diffusers free? A: Yes. It is Apache 2.0 licensed, and models are freely downloadable from the Hugging Face Hub. A GPU is required for practical use.



Sources & Acknowledgments

Created by Hugging Face. Licensed under Apache 2.0. Repository: diffusers (⭐ 33,200+). Docs: huggingface.co/docs/diffusers
