Seedance 2.0 is currently not available to customers in the U.S., but access is coming soon. This feature is available to users on a Standard plan or higher.

Introduction

Seedance 2.0 is a third-party model available in Runway that generates video from text prompts, reference images, audio, and video inputs. The model supports director-level control over camera movement, lighting, and character performance, and produces audio-visual output with synchronized sound, all within a unified multimodal architecture that lets you define the role of each input.

This article covers how to access the tool, input best practices, available settings, and what to expect from your generations.

Notice: Seedance 2.0 is likely to moderate inputs that depict realistic humans. For best results, use stylized characters or avoid inputs that depict the faces of real or photorealistic people.


Spec information

  • Unlimited generations in Explore Mode: Yes
  • Duration: 5–15 seconds
  • Aspect ratios: 21:9, 16:9, 4:3, 1:1, 3:4, 9:16
  • Output resolutions: 480p, 720p
  • Supported inputs: Text, Image, Video, Audio
  • Modalities: Text to Video, Image to Video, Video to Video, Audio to Video (coming soon)

Input requirements

  • Reference image — Maximum inputs: 5; Maximum file size: < 30 MB each; Supported file types: .jpg, .jpeg, .png; Dimensions: > 300 px and < 6000 px on each side
  • Reference video — Maximum inputs: 3 (total duration of all videos must be under 15 seconds); Maximum file size: < 50 MB each; Maximum duration: ≤ 15 seconds; Supported file types: .webm, .mp4, .mov; Resolution: ≤ 720p
  • Reference audio — Coming soon
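If you are preparing many references, you can pre-check files against these limits before uploading. Below is a minimal sketch in Python using only the limits listed in this article; the function names and argument shapes are illustrative and are not part of any Runway API.

```python
# Sketch: pre-checking reference inputs against the Seedance 2.0 spec
# table in this article. Helper names are illustrative, not a Runway API.

IMAGE_TYPES = {".jpg", ".jpeg", ".png"}
VIDEO_TYPES = {".webm", ".mp4", ".mov"}

def check_image(size_mb: float, width: int, height: int, ext: str) -> list:
    """Return a list of spec violations for one reference image."""
    problems = []
    if ext.lower() not in IMAGE_TYPES:
        problems.append(f"unsupported image type {ext}")
    if size_mb >= 30:
        problems.append("image must be under 30 MB")
    if min(width, height) <= 300:
        problems.append("each dimension must exceed 300 px")
    if max(width, height) >= 6000:
        problems.append("each dimension must be under 6000 px")
    return problems

def check_videos(durations_s: list, sizes_mb: list) -> list:
    """Return a list of spec violations for a set of reference videos."""
    problems = []
    if len(durations_s) > 3:
        problems.append("at most 3 reference videos")
    if sum(durations_s) >= 15:
        problems.append("total video duration must be under 15 seconds")
    if any(s >= 50 for s in sizes_mb):
        problems.append("each video must be under 50 MB")
    return problems
```

For example, `check_videos([5.0, 5.0, 6.0], [10, 10, 10])` flags only the 16-second total duration, which is the same condition that greys out the Generate button in the app.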

 

Step 1 — Selecting the inputs

Begin by opening the web app. There are two ways to reach the Seedance 2.0 model:

  • Search for it beneath the What do you want to create? bar 
  • Select Video, then select Seedance 2.0 from the model dropdown

 

Choosing the creation mode

Seedance 2.0 offers three creation modes:

  • References — Best for blending elements from multiple images or videos into a single generation. Use when you want granular control over what gets pulled from each input.
  • Start / End frames — Best for traditional image-to-video (start frame only) or keyframe control (start and end frames). Use when you need precise control over how a shot begins or ends.
  • Text to video — Best for generating without any image or video inputs. Also a good option when working with realistic human subjects, as it tends to avoid the moderation constraints that can come with image-based inputs.

 

Step 2 — Writing the prompt

Your prompt will vary depending on both your creation mode and what you're trying to accomplish. In any creation mode or model, we recommend using positive, unambiguous, and outcome-focused language.

 

Using the reference inputs

When using Reference mode, write your prompt as a description of the scene or sequence you want to appear, or the changes you want made, and specify how each provided input should be used.

References are flexible by design. Whether you're working with images or videos, you control what gets pulled from each input and how it's used. 

  • With images, you might use one reference for a subject and another for a background, or instruct the model to use an image as the frame at a specific point in the video. 
  • With videos, you can preserve the motion while changing the style, or keep the structure while swapping out characters entirely.

Below are example use cases that combine image and video references, along with the prompt used for each.

  • Generating multi-shot video from image(s) — Prompt: multishot video. the woman realizes that she forgot there was a test that day. watercolor animation style
  • Generating single-shot video from image(s) — Prompt: use Image 1 as the starting frame for a single, continuous shot in freeze time. the camera dramatically weaves through the completely frozen scene
  • Determining placement of a reference image — Prompt: use Image 1 as the first frame. the man slyly smiles and says "well, i guess i'll catch you later" before leaping out the airplane. a parachute deploys while he's midair.
  • Swapping a character or object in a video — Prompt: replace the male knight in Video 1 with a woman. fiery red hair.
  • Restyling an existing video — Prompt: relight Video 1 to dusk with a purple sky. colorize Video 1
  • Animating a storyboard — Prompt: use Image 1 as a storyboard to guide the scenes
  • Applying camera motion to a new scene — Prompt: apply the camera motion from Video 1 to the scene in Image 1

 

Step 3 — Generating the Video

Click the Generate button to start the generation. The total processing time will depend on your inputs and selected output duration, but typically takes 2–10 minutes to complete once started.

 

Troubleshooting

If you encounter generation errors, review the details below:

Unable to start a generation

If you're unable to start a generation because the button is greyed out, your references do not meet the model's input requirements.

Hover over the greyed Generate button to learn more about the required adjustments and review the Spec table for more details on input requirements.

Note: Video preview durations on Runway are rounded to the nearest second. Some models generate videos slightly longer than displayed — for example, a video that displays as 00:05 may have a true duration of 5.01 seconds. These extra milliseconds can push your input over the limit even when the preview appears to be within the maximum 15 seconds.
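This rounding effect can be illustrated with a short sketch; the clip durations below are invented for illustration only:

```python
# Illustration: per-clip previews round to the nearest second, so three
# clips can each display as 00:05 while their true total exceeds the
# 15-second input limit. These durations are made up for the example.

true_durations = [5.01, 4.98, 5.03]          # actual clip lengths, seconds

displayed = [round(d) for d in true_durations]
print(sum(displayed))                        # displayed total: 15
print(sum(true_durations) > 15)              # true total exceeds the limit
```

Trimming any clip slightly in the editor is usually enough to bring the true total back under the limit.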

Generation failed during processing

In most cases, a failed generation is the result of third-party content policies. Third-party restrictions can be identified by the error referencing the specific model, such as The request was blocked by Seedance 2.0. Please update your input and try again.

Because of the model's realism, Seedance 2.0 has heavy restrictions around using realistic humans in the inputs. Input images or videos that contain a realistic human, especially those showing a face, are likely to be blocked by the provider.

Since the moderation is managed by the provider, Runway is unable to lift or lessen these provider-level restrictions. To continue generating, you must update your inputs to comply with the restrictions:

  • Avoid input images or videos that contain a realistic human
  • Try using obscured or blurred faces to allow the model to fill in the blanks for your character
  • Try using an unrealistic style, such as an animated or 3D rendered style for inputs

Please note that repeated attempts to re-run the same moderated inputs may increase your likelihood of automated account suspension. Update your inputs after encountering a moderation error before generating again.