Stable Video Diffusion: How to Use It and What It Is?

Stability AI Releases Groundbreaking New AI Tool: Stable Video Diffusion

Stability AI is thrilled to announce the launch of Stable Video Diffusion, marking a major step forward in generative AI. Building on the company's expertise in image diffusion models, Stable Video Diffusion is its first foundation model for generative video, capable of producing high-quality video sequences from an input image or text prompt.

Built on latent diffusion techniques, Stable Video Diffusion uses large neural networks to rapidly synthesize realistic short videos from a prompt or reference image. This opens up new possibilities for visual storytelling and content creation across industries.

Available now as a research preview, Stable Video Diffusion comes with all source code openly accessible on GitHub. Researchers and developers can also find pretrained weights hosted on Hugging Face to begin experimentation.

Stability AI is dedicated to advancing AI safety and transparency. Its research team has published a technical report detailing how Stable Video Diffusion was trained and evaluated.

What Is Stable Video Diffusion?

Stable Video Diffusion is a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Some key points:

It is based on diffusion models and can generate a coherent sequence of video frames from a single input image or text description.

The model employs a latent diffusion process where it begins with noise and iteratively refines it to produce the output video.

It was trained using a three-stage approach:

1) Image pretraining on image diffusion models to initialize spatial layers.

2) Video pretraining on large-scale video data to learn motion representations.

3) High-quality video finetuning on smaller high-res video datasets.

It can produce state-of-the-art results for tasks like text-to-video, image-to-video generation, frame interpolation and multi-view video synthesis based on the powerful video representations learned during training.
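
To make the "start from noise and iteratively refine" idea concrete, here is a minimal conceptual sketch in Python. The shapes, the step count, and the denoise_step placeholder are illustrative assumptions, not the actual Stable Video Diffusion API:

import torch

# Conceptual sketch of latent video diffusion: begin with pure noise in a compact latent
# space and iteratively denoise it, guided by a conditioning (text or image) embedding.
# denoise_step stands in for the learned spatio-temporal U-Net; it is NOT a real API.
def denoise_step(latents, t, conditioning):
    # Placeholder: a real model predicts and removes part of the noise at step t.
    return latents * 0.98

num_frames, channels, height, width = 14, 4, 72, 128   # latent-space shape (illustrative)
latents = torch.randn(num_frames, channels, height, width)
conditioning = torch.zeros(1, 1024)                     # e.g. a CLIP text/image embedding

for t in reversed(range(50)):                           # 50 denoising steps (illustrative)
    latents = denoise_step(latents, t, conditioning)

# A VAE decoder then maps the denoised latents back to RGB video frames.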

"A robot dj is playing the turntables, in heavy raining futuristic tokyo, rooftop, sci-fi, fantasy'
"A robot dj is playing the turntables, in heavy raining futuristic tokyo, rooftop, sci-fi, fantasy'
"An exploding cheese house"
"An exploding cheese house"
"A tiny finch on a branch with spring flowers on background"
"A tiny finch on a branch with spring flowers on background"
"A steam train moving on a mountainside by Vincent van Gogh"
"A steam train moving on a mountainside by Vincent van Gogh"

 

How to Use Stable Video Diffusion

  1. Data processing and curation:
  • Apply cut detection to split videos into coherent clips.
  • Annotate clips with synthetic captions from image captioning models.
  • Filter clips based on optical flow (for motion), OCR detection (for overlaid text) and CLIP/aesthetic scores (see the sketch after this list).
  2. Training stages:
  • Stage I: Initialize spatial layers from the pretrained Stable Diffusion 2.1 image model.
  • Stage II: Pretrain on a large curated video dataset (152M clips) at 256x384 resolution.
  • Stage III: Finetune the pretrained model on a high-quality video dataset at a higher 576x1024 resolution.
  3. Downstream tasks:
  • Text-to-video generation by conditioning on text prompts.
  • Image-to-video generation by conditioning on an input image.
  • Frame interpolation to increase frame rate.
  • Multi-view video generation by finetuning on a multi-view dataset.
  4. Controls:
  • Train LoRAs to control camera motion for image-to-video generation.
  • Increase frame rate by predicting interpolated frames.
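
To illustrate the filtering step mentioned above, here is a hypothetical sketch of a clip filter. The ClipStats record, the keep_clip helper and all thresholds are assumptions for illustration, not Stability AI's actual pipeline or values:

from dataclasses import dataclass

# Hypothetical per-clip metadata; in practice these scores would come from an optical-flow
# estimator, an OCR model and a CLIP-based aesthetic predictor run over each clip.
@dataclass
class ClipStats:
    mean_optical_flow: float   # average motion magnitude
    text_box_count: int        # number of detected text regions (OCR)
    aesthetic_score: float     # CLIP/aesthetic predictor score

def keep_clip(c: ClipStats) -> bool:
    """Illustrative filter; the thresholds are assumptions, not the paper's values."""
    return (
        c.mean_optical_flow >= 2.0     # drop near-static clips
        and c.text_box_count <= 3      # drop clips dominated by overlaid text
        and c.aesthetic_score >= 4.5   # drop low-quality footage
    )

clips = [ClipStats(5.1, 0, 5.6), ClipStats(0.3, 0, 6.0), ClipStats(4.0, 8, 5.0)]
curated = [c for c in clips if keep_clip(c)]   # keeps only the first clip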

Steps to Use Stable Video Diffusion

Here are the basic steps to use the Stable Video Diffusion model (a runnable example using the Hugging Face diffusers pipeline follows this list):

  1. Download the pretrained model weights linked from the release on GitHub:

https://github.com/Stability-AI/generative-models

  2. Install the inference dependencies listed in that repository, or install the Hugging Face diffusers package, which ships a packaged pipeline for the released image-to-video checkpoints.
  3. Load the pretrained weights into the provided sampling scripts or the pipeline class.
  4. Generate videos from images by conditioning the model on a single input frame.
  5. Generate videos from text prompts with the text-to-video variant described in the research paper.
  6. Control camera motion with the camera-motion LoRA modules described in the paper.
  7. Interpolate frames to increase the frame rate with the interpolation finetune described in the paper.
  8. Finetune the model on your own datasets for custom tasks.
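
As a concrete starting point for step 4 (image-to-video), here is a minimal sketch using the Hugging Face diffusers library, which hosts the released checkpoints. It assumes diffusers 0.24 or newer, PyTorch, and a GPU with enough memory; the input file name is a placeholder:

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the released image-to-video checkpoint from Hugging Face.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # offload submodules to CPU to fit on smaller GPUs

# Conditioning image (placeholder path); the model expects roughly 1024x576 input.
image = load_image("input.png").resize((1024, 576))

# Generate a short clip conditioned on the image and write it to disk.
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)

Text-to-video, camera-motion LoRAs and frame interpolation (steps 5-7) are described in the research paper and repository rather than in this packaged pipeline.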

Stable Video Diffusion Examples for Users

Text-to-video generation:

  • Provide a text prompt describing a scene or action to generate a video clip. For example, "A dog is running on a beach".

Image-to-video generation:

  • Upload an image and it will predict a sequence of future frames to create a short animated video with motion.

Frame rate upsampling:

  • Take a video recorded at 30 FPS and upsample it to 60 FPS by predicting intermediate frames to make the motion smoother.

Add camera motion control:

  • Use LoRA modules to influence the camera motion for image-to-video generation, for example zooming in on an object or panning across a scene (see the sketch below).
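
The camera-motion LoRAs are described in the research paper and are not part of the packaged Hugging Face pipeline; as a related, rougher control, the diffusers pipeline exposes motion-strength parameters. A sketch, assuming pipe and image are set up as in the earlier example:

# Assumes `pipe` and `image` from the earlier diffusers sketch. These parameters control how
# much motion the model adds overall, not the camera path.
frames = pipe(
    image,
    motion_bucket_id=180,      # higher values -> more motion (default is around 127)
    noise_aug_strength=0.05,   # more noise on the conditioning image -> looser adherence, more motion
    decode_chunk_size=8,
).frames[0]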

Multi-view generation:

  • Provide an image of an object to generate consistent novel views, as if a camera were rotating around it.

Finetune for a domain:

  • Collect images/videos from a specific category like science experiments. Finetune the model to generate videos in that domain.

Green screen compositing:

  • Replace the background of an input image with a new scene to composite subjects into imaginary locations.

Style transfer:

  • Apply the style of cartoon/anime to human subjects in photos to create a stylized animation.

The Application of Stable Video Diffusion

  • Video Generation Assistance: It can help creators quickly generate draft video sequences to develop ideas or visualize concepts in early stages of video production.
  • Education & Training: Automatically generate instructional video content from detailed text lessons or annotated slide decks. Help teach complex processes or concepts visually.
  • Media & Entertainment: Prototype scenes, locations, character animations for movies, games, VR/AR before full production. Generate storyboards, trailers or promotional videos.
  • Websites & Advertising: Dynamically create product demonstration or explainer videos from product images and descriptions on e-commerce sites. Generate ads by combining text and images.
  • News & Documentaries: Auto-generate supplemental video content such as news packages and recaps by combining archived text and photos from events.
  • Social Media: Power interactive tools for users to generate short entertaining videos from prompts or style their selfies in virtual worlds.
  • Archival & Cultural Preservation: Bring historical text/images to life by generating animations and video depictions for education.
  • Scientific Visualization: Automatically illustrate complex papers, medical procedures or engineering workflows through data-driven video generation.
  • UI & UX Prototyping: Prototype app tutorials, walkthroughs or design ideas quickly through interactive video generation before development.

Competitive Performance

Stable Video Diffusion provides two image-to-video generative models capable of producing 14 and 25 frames respectively at customizable frame rates ranging from 3 to 30 frames per second. At the time of release, in their foundational form, external evaluations found that these models surpassed leading closed models in user preference studies.
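
With the diffusers pipeline from the earlier sketch, the frame count and the frame-rate conditioning are exposed as call parameters (a sketch; the exact values you can use depend on the checkpoint):

# Assumes `pipe` and `image` from the earlier diffusers sketch; export_to_video comes from
# diffusers.utils (imported there). The -xt checkpoint targets 25 frames; `fps` conditions
# the model on the intended playback rate.
frames = pipe(image, num_frames=25, fps=10, decode_chunk_size=8).frames[0]
export_to_video(frames, "clip_25f.mp4", fps=10)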


The Technical Principle of Stable Video Diffusion

  • Diffusion Models: It uses diffusion probabilistic models, which generate samples by iteratively reversing a noising process over many learned denoising steps. This allows controllable, high-fidelity generation.
  • Latent Space Mapping: The model operates in a compressed latent space rather than on raw pixels, and conditioning inputs like text and images guide generation in that space. This reduces computation versus direct pixel-level generation.
  • Pretraining: Spatial layers are initialized from a pretrained image diffusion model, Stable Diffusion 2.1, which provides strong visual representations to build upon.
  • Video Architecture: Temporal convolution and attention layers are inserted after every spatial layer to model temporal dependencies across frames (see the sketch after this list).
  • Training Strategy: It leverages separate stages - image pretraining, video pretraining on large curated datasets, and high-resolution video finetuning - for best performance.
  • Data Curation: Techniques like captioning and filtering based on motion, text detection, etc. are used to build a "clean" large-scale video pretraining dataset.
  • Conditioning: Text, images or frames are encoded into embeddings and fed to the model, for example via cross-attention or channel-wise concatenation, to guide generation.
  • Latent Space Interpolation: Frames can be synthesized by interpolating in latent space between conditioning embeddings for smooth motion.
  • Control Modules: Modules like LoRAs can be plugged in to model domain-specific factors like camera motion.
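
As a conceptual illustration of the "temporal layers after every spatial layer" design, here is a small PyTorch sketch of a factorized spatio-temporal block. It is a simplification with made-up layer sizes, not the actual Stability AI implementation:

import torch
import torch.nn as nn

# Conceptual factorized spatio-temporal block: a spatial layer processes each frame
# independently, then a temporal layer mixes information across frames.
class SpatioTemporalBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # 1D convolution over the time axis models dependencies between frames.
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        x = self.spatial(x.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        # Move time next to channels so the temporal conv sees each pixel's frame sequence.
        x = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        x = self.temporal(x)
        x = x.reshape(b, h, w, c, t).permute(0, 4, 3, 1, 2)
        return x

video = torch.randn(1, 14, 64, 32, 32)    # (batch, frames, channels, height, width)
out = SpatioTemporalBlock(64)(video)      # same shape as the input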

Who Invented Stable Video Diffusion

  • Andreas Blattmann - AI researcher who led the development of Stable Video Diffusion. He specializes in generative modeling and model robustness.
  • Tim Dockhorn - Researcher who worked on model architectures, training procedures and evaluation methods.
  • Sumith Kulal - AI safety engineer who helped build the curated video datasets and training infrastructure.
  • Daniel Mendelevitch - Researcher focused on applications and controllability of generative models.
  • Dominik Lorenz - Research engineer who developed model deployment and fine-tuning systems.
  • Yam Levi - Researcher who investigated techniques for improving temporal consistency.
  • Adam Letts - Director of Research at Stability AI who oversaw the project.
  • Varun Jampani - Research manager who helped guide modeling approaches.
  • Zion English - AI safety research manager at Stability AI.
  • Robin Rombach - Research lead at Stability AI and co-creator of the original Stable Diffusion model, who guided the project.
  • Vikram Voleti - Researcher who explored multi-view generative capabilities.

How Much Data Is Used to Train

Stage I (Image Pretraining): This uses the publicly available pretrained Stable Diffusion 2.1 image model, which was trained on a huge corpus of images from the internet.

Stage II (Video Pretraining): They collected an initial video dataset (called Large Video Dataset or LVD) comprising over 580 million video clips totaling 212 years of video content.

This raw LVD was then processed and filtered using various techniques like cut detection, optical flow filtering, captioning etc. This resulted in a final "curated" dataset of 152 million training examples used for video pretraining.
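
As a back-of-the-envelope check on the quoted figures (my own arithmetic from the numbers above, not from the paper), the raw dataset averages roughly 11-12 seconds per clip, and curation keeps about a quarter of the clips:

# Back-of-the-envelope check on the dataset figures quoted above.
raw_clips = 580_000_000           # raw LVD clips
total_years = 212                 # total duration of the raw LVD
curated_clips = 152_000_000       # clips kept after curation

seconds_per_year = 365.25 * 24 * 3600
avg_clip_seconds = total_years * seconds_per_year / raw_clips
retention = curated_clips / raw_clips

print(f"average raw clip length: {avg_clip_seconds:.1f} s")   # ~11.5 s
print(f"fraction kept after curation: {retention:.0%}")       # ~26%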

Stage III (Video Finetuning): They finetuned the pretrained video model on a much smaller dataset of 250,000 high-quality, pre-captioned video clips for high-resolution generation.

Why Is Stable Video Diffusion so Good?

Large Pretrained Models
It leverages huge pretrained image and video models trained on massive public data sources. This gives it strong generalized representations to build on.

Systematic Data Curation
The paper introduces curation techniques like captioning, filtering to build high quality datasets. This is crucial for large-scale video model training.

Multi-Stage Training
It uses separate stages for image pretraining, video pretraining, and high-res finetuning. This staged approach is optimized for the task.

Temporal Modeling Architectures
The insertion of temporal layers after every spatial layer enables effective modeling of frame dependencies.

Controllable Generation
Techniques like text/image conditioning and LoRA modules provide flexible controls over generated videos.

Strong Generalization
The models trained this way generalize well to many video generation tasks beyond their direct training objective.

High Fidelity Samples
The samples produced are of cinematographic quality even for complex tasks like text-to-video generation.

Open Sourced Code
The research models and code are openly available, enabling others to build upon this benchmark work.

Additional Text-to-Video samples

Stable Video Diffusion Additional Text-to-Video samples

Captions from top to bottom: “A hiker is reaching the summit of a mountain, taking in the breathtaking panoramic view of nature.”, “A unicorn in a magical grove, extremely detailed.”, “Shoveling snow”, “A beautiful fluffy domestic hen sitting on white eggs in a brown nest, eggs are under the hen.”, and “A boat sailing leisurely along the Seine River with the Eiffel Tower in background by Vincent van Gogh”.

The Limitations

  • Data requirements - It relies on vast amounts of external image and video data which may not always be available. The quality depends on the training datasets.
  • Computational resources - Training the large models requires massive computing power that is not available to most researchers and individual users, and even inference needs a capable GPU and remains far from real time.
  • Lack of common sense - Like most AI systems today, it lacks true common sense understanding of the world and can generate outputs that don't make logical sense.
  • Temporal coherence - While improved, generated videos may still lack perfect frame-to-frame consistency in things like object positions over time.
  • Originality - The generated samples are based on datasets and thus cannot produce truly new concepts not present in training data. Creativity is limited.
  • Biases in data - The models can potentially reflect and even amplify any social biases present in the training datasets.
  • Control challenges - Perfectly controlling attributes like camera motion and lighting in generated videos remains difficult.
  • Privacy/Ethical concerns - The systems could potentially enable generation of fake media at large scale, leading to disinformation if misused.
  • Text conditioning - While strong, text prompts may still be ambiguous and not produce the exact scene or motion as intended.

Try Stable Video Diffusion For Free Now

Today, you can sign up for the Stable Video Diffusion waitlist here to access a new upcoming web experience featuring a text-to-video interface.

Use a Hosted Web Demo
Community-hosted web demos of Stable Video Diffusion, for example on Hugging Face Spaces, let you generate short clips from an input image directly in the browser.

Download the Weights from Hugging Face
The pretrained checkpoints are hosted on Hugging Face, so you can run the model yourself in a notebook or script if you have access to a suitable GPU.

Try Runway AI's App
Runway offers a no-code AI app with a free tier for generating videos and images through a simple graphical interface, though its built-in video models are Runway's own rather than Stable Video Diffusion.

Use Stability AI's Notebook
Stability AI shared a Google Colab notebook to try out the model. It lets you generate short video clips, but requires a GPU to run efficiently.

Wait For Other Services
Other services that provide access to generative models may integrate Stable Video Diffusion into their products soon.


9 thoughts on “Stable Video Diffusion: How to Use It and What It Is?”

  1. Oh, it would make work so much easier if this neural network could be taught to make 8-bit animations.

  2. 16 gigs is really a lot for a minimum entry threshold, especially given the cost of graphics cards. But I'll leave a like and keep following your videos about this neural network :)

  3. As I understand it, training a neural network for video with context in mind goes roughly like this:
    the video is split frame by frame into images,
    then these images are fed to the network, but in a way that preserves the connection between them.
    This whole procedure is repeated a gazillion or so times,
    after which the image or prompt we feed in is analyzed by the network, matched piece by piece against what it has learned, and we get a result that reflects the context of the video the network was originally fed.

  4. Wow! The open source video AI with that fidelity is a huge leap in the direction of real world applications for movies etc. I cannot wait to see how far it goes by this time next year. Who knows, maybe we’ll all be uploading our own versions of old stories and new stories to YouTube etc and basking in the Text to video glory. The next hurdle is consumer grade hardware and VRAM tech accessibility and advancements. I’ve seen news that atom sized transistors/chip parts are being developed and worked on but their limits is fragile material currently. Once that is perfected we’ll probably all have like 1000GB vram hopefully lol. I’m just wishful thinking but hey one can dream amirite?

  5. I’d really love it if ‘Stable Diffusion’ would come up with ’text to 3D’ and ‘image to 3D’ and ‘3D to image’. I hope that comes next!

