Gen-2: The Next Step Forward for Generative AI

A multimodal AI system that can generate novel videos from text, images, or video clips.
No lights. No camera.
All action.
Realistically and consistently synthesize new videos, either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video), or by using nothing but words (Text to Video).
Driving prompt: "Top-down drone shot of icebergs with muted colors"
Bringing the magic back to making movies.
Learn more about the different ways Gen-2 can turn any image, video clip or text prompt into a compelling piece of film.
Mode 01: Text to Video
Synthesize videos in any style you can imagine using nothing but a text prompt. If you can say it, now you can see it.
Example prompt: "The late afternoon sun peeking through the window of a New York City loft."
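For a sense of how a mode like this might be driven programmatically, here is a minimal sketch in Python. The `VideoRequest` type, the `generate_video` function, and every parameter name are hypothetical placeholders for illustration only, not Runway's actual API.

```python
# Hypothetical sketch of a text-to-video request (Mode 01).
# `VideoRequest`, `generate_video`, and all parameter names are
# illustrative placeholders, not Runway's published SDK.
from dataclasses import dataclass


@dataclass
class VideoRequest:
    prompt: str           # natural-language description of the scene
    num_frames: int = 48  # length of the generated clip
    seed: int = 42        # fixed seed for reproducible sampling


def generate_video(request: VideoRequest) -> str:
    """Pretend to submit the request and return an output path."""
    # A real client would send the request to an inference endpoint
    # and poll until the diffusion model finishes sampling frames.
    print(f"Generating {request.num_frames} frames for: {request.prompt!r}")
    return "outputs/loft_afternoon.mp4"


if __name__ == "__main__":
    clip = generate_video(VideoRequest(
        prompt="The late afternoon sun peeking through the window "
               "of a New York City loft."
    ))
    print(f"Saved to {clip}")
```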
Mode 02: Text + Image to Video
Generate a video using a driving image and a text prompt.
Example: input image + driving prompt "A low angle shot of a man walking down a street, illuminated by the neon signs of the bars around him" → output video.
Mode 03: Image to Video
Generate a video using just a driving image (Variations Mode).
Mode 04: Stylization
Transfer the style of any image or prompt to every frame of your video.
Example: source video + driving image → generated video.
Mode 05: Storyboard
Turn mockups into fully stylized and animated renders.
Mode 06: Mask
Isolate subjects in your video and modify them with simple text prompts.
Example: input video + driving prompt "A dog with black spots on white fur" → output video.
Mode 07: Render
Turn untextured renders into realistic outputs by applying an input image or prompt.
Mode 08: Customization
Unleash the full power of Gen-2 by customizing the model on a set of training images for even higher-fidelity results.
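Customization of this kind generally means fine-tuning a pretrained model on a small set of subject images. The sketch below shows that general pattern in PyTorch, with a stand-in model and random tensors in place of real training images; nothing here is Runway's actual training code.

```python
# Hypothetical sketch of Mode 08 (Customization): fine-tuning a
# pretrained model on a small set of training images. The model,
# data, and hyperparameters are stand-ins, not Runway's code.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for a pretrained generator; any torch.nn.Module works here.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 64 * 64, 3 * 64 * 64),
)

# A handful of "training images" for the subject (random stand-ins).
images = torch.rand(8, 3, 64, 64)
loader = DataLoader(TensorDataset(images), batch_size=4, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Few-step fine-tune: nudge the model toward reconstructing the
# subject, the same broad pattern used by subject-driven customization.
for epoch in range(3):
    for (batch,) in loader:
        target = batch.flatten(1)
        recon = model(batch)
        loss = torch.nn.functional.mse_loss(recon, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```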
The New Standard for Video Generation
Based on user studies, results from Gen-1 are preferred over existing methods for Image to Image and Video to Video translation.
Preferred over Stable Diffusion 1.5: 73.53%
Preferred over Text2Live: 88.24%
A New Era for Motion (and) Pictures
Runway Research is dedicated to building the multimodal AI systems that will enable new forms of creativity. Gen-2 is another pivotal step forward in this mission.

AI systems for image and video synthesis are quickly becoming more precise, realistic and controllable. Runway Research is at the forefront of these developments and is dedicated to ensuring the future of creativity is accessible, controllable and empowering for all.

Gen-1
Structure and Content-Guided Video Synthesis with Diffusion Models. Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, Anastasis Germanidis. Runway Research, 2023.

Gen-2
Text-Driven Video Generation. Runway Research, 2023. Paper: coming soon.

Runway Research: Making the impossible

The magic of Runway is built on cutting-edge research in artificial intelligence and machine learning, done in-house and in collaboration with leading institutes worldwide.
Explore Careers