Scripts · March 29, 2026 · 1 min read

Open-Sora — Open-Source Text-to-Video Generation

Open-source alternative to Sora by HPC-AI Tech. Generate videos from text prompts with an 11B parameter model. Apache 2.0 licensed. 28,800+ stars.

Introduction

Open-Sora is an open-source video generation framework that aims to democratize efficient video production. Built by HPC-AI Tech, it features an 11B-parameter model capable of text-to-video and image-to-video generation. Unlike closed-source alternatives such as Sora and Runway, Open-Sora is fully trainable and customizable. The project has 28,800+ GitHub stars.

Best for: AI researchers, video generation startups, and developers building custom video generation pipelines
Works with: Python, PyTorch, NVIDIA GPUs
Setup time: ~15 minutes (requires a GPU)

Part of the Video AI Toolkit collection.


Key Capabilities

  • Text-to-video: Generate videos from text descriptions
  • Image-to-video: Animate static images
  • Variable resolution: 240p to 720p+
  • Variable duration: 2s to 16s clips
  • Fully trainable: Fine-tune on your own data
  • Apache 2.0: Commercially usable
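
The variable resolution and duration settings directly determine how many latent tokens the diffusion transformer must process, which is what drives GPU cost. A back-of-the-envelope sketch, using illustrative compression factors (8× spatial and 4× temporal VAE downsampling, 2×2 patches — assumed figures, not Open-Sora's published configuration):

```python
# Rough latent token count for a video diffusion transformer.
# The downsampling factors below are illustrative assumptions,
# not Open-Sora's actual configuration.

def latent_tokens(width, height, seconds, fps=24,
                  spatial_down=8, temporal_down=4, patch=2):
    """Tokens seen by the transformer after VAE encoding + patch embedding."""
    lat_w = width // spatial_down // patch
    lat_h = height // spatial_down // patch
    lat_t = (seconds * fps) // temporal_down
    return lat_w * lat_h * lat_t

# Token count grows linearly with duration and roughly
# quadratically with resolution:
print(latent_tokens(426, 240, 2))    # short 240p clip  -> 4680
print(latent_tokens(1280, 720, 16))  # long 720p clip   -> 345600
```

Under these assumptions, a 16-second 720p clip costs roughly 70× more tokens than a 2-second 240p clip, which is why short low-resolution generation is feasible on a single GPU while long high-resolution clips are not.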

Architecture

Based on DiT (Diffusion Transformer) with spatial-temporal attention, trained on large-scale video datasets. Uses a VAE for video encoding and a text encoder for prompt understanding.
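
The key idea behind spatial-temporal attention is factorization: rather than full attention over all T×S video tokens (cost scaling with (T·S)²), spatial attention mixes tokens within each frame and temporal attention mixes each spatial position across frames. A minimal pure-Python sketch of that factorization (toy dot-product attention with no learned projections, not Open-Sora's actual implementation):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(tokens):
    """Plain dot-product self-attention over a list of d-dim vectors.
    Q = K = V = tokens; learned projections omitted for clarity."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, tokens))
                    for j in range(d)])
    return out

def spatial_temporal_block(x):
    """x: [T frames][S spatial positions][d channels].
    Spatial attention within each frame, then temporal attention
    across frames at each spatial position."""
    x = [attention(frame) for frame in x]          # spatial pass
    T, S = len(x), len(x[0])
    for s in range(S):                             # temporal pass
        col = attention([x[t][s] for t in range(T)])
        for t in range(T):
            x[t][s] = col[t]
    return x

# 3 frames, 4 tokens per frame, 2 channels
video = [[[float(t + s), float(t * s)] for s in range(4)] for t in range(3)]
out = spatial_temporal_block(video)
print(len(out), len(out[0]), len(out[0][0]))  # shape preserved: 3 4 2
```

Each pass attends over T or S tokens instead of T·S, so the two passes together cost on the order of T·S·(T+S) rather than (T·S)², which is what makes longer clips tractable.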

FAQ

Q: What is Open-Sora? A: An open-source text-to-video generation framework with an 11B-parameter model, designed as an accessible alternative to OpenAI's Sora. It has 28,800+ GitHub stars and is Apache 2.0 licensed.

Q: Is Open-Sora free? A: Yes. Open-Sora is Apache 2.0 licensed. You need your own GPU hardware for inference (recommended: NVIDIA A100 or equivalent).



Source & Acknowledgements

Created by HPC-AI Tech. Licensed under Apache 2.0. Open-Sora — ⭐ 28,800+ stars on GitHub.
