Creating with Act-One on Gen-3 Alpha and Turbo


Introduction

Gen-3 Alpha is the first in a series of upcoming models offering improvements in fidelity, consistency, motion, and speed over previous generations of models.

Act-One allows you to bring a character reference image or video to life by uploading a driving performance to precisely influence expressions, mouth movements, and more.

In this article, driving performance refers to the video used to drive the animation. Character image/video refers to the input that will be animated by the driving performance.

This article outlines how to use Act-One on Gen-3 Alpha, input best practices, the available settings, and more.

 

Spec Information

  • Cost: 10 credits per second with a 50 credit minimum (Gen-3 Alpha); 5 credits per second with a 25 credit minimum (Gen-3 Alpha Turbo)
  • Maximum output duration: 30 seconds
  • Explore Mode on Unlimited Plans: Yes
  • Platform availability: Web, iOS app
  • Base prompt inputs: Video, Image
  • Output resolutions: 1280x768, 768x1280
  • Frame rate: 24fps

 

Best Practices for Act-One Input

Before diving in, review these best practices to ensure that your input selections will set your generation up for success. Most output issues can be addressed by using inputs that follow these recommendations.

Driving Performance

  • Well-lit with defined facial features 
  • A single face framed from around the shoulders and up
  • Forward-facing in the direction of the camera
  • Face is in frame for the entire video
    • Ensure the face doesn't move in and out of the frame
  • Clear mouth movement and expressions
    • Certain expressions, such as sticking out a tongue, are not supported
  • Minimal body and head movement (when using a character image)
  • No face occlusions in frame
  • No cuts that interrupt the shot
  • Follows our Trust & Safety standards

Character Images

  • Well-lit with defined facial features
  • A single face framed from around the shoulders and up
  • Forward-facing in the direction of the camera
  • Follows our Trust & Safety standards

Character Videos

  • Face is in frame for the entire video
  • No cuts that interrupt the shot
  • Well-lit with defined facial features
  • A single face framed from around the shoulders and up
  • Forward-facing in the direction of the camera
  • Follows our Trust & Safety standards

 

Step 1 – Uploading the Driving Performance

Begin by navigating to Generative Session in your Dashboard.

From here, make sure the Gen-3 Alpha or Gen-3 Alpha Turbo model is selected from the dropdown in the bottom-left corner. You’ll find the Act-One icon in the left-hand toolbar:

In the top half of the Act-One window, drag and drop a new video or select an existing video from your Assets to add your driving performance.

Alternatively, you can record a driving video directly from the web app.

If this is your first time recording a video on Runway, click the Start recording button. Your browser will ask for permission to access the camera and microphone. On a Chrome browser, select Allow to grant permissions:

After the permissions are approved, you can begin recording by selecting the Start recording button.

Center your face in the circle, then select the red record button to start a three-second countdown before the recording begins.

You can click the stop button or press the spacebar to end the recording. Review your video and choose Delete to start over, or Use this if you’re satisfied with the recording. Driving videos recorded in Runway are saved to your Assets folder.

Driving Performance Reminders

Your driving performance should always be forward-facing, even if the character reference you plan to upload is shot from a different angle.

You should aim to minimize body and head movement if you plan on later selecting a character image. More head movement is supported in the driving performance when using a character video. 

Preliminary face detection will run on your driving performance before you’re allowed to generate.

Below are examples of driving performances and their outputs:

Example driving performances and their outputs: jamie_driving.gif, dion_driving.gif

Once your driving performance is uploaded, you’re ready to choose your character reference.

 

Step 2 – Selecting the Character Reference Input

Act-One offers support for both character reference images and videos.

Select the character reference input in the bottom half of the Act-One window. Choose from an existing preset, or switch to the Custom tab to upload your own.

Character Reference Image

Character reference images provide the most consistent results when the driving performance contains minimal head and body movement.

If using a preset image, you can use the resolution switcher to change between landscape and portrait presets:

When using a custom input, you can choose either of these resolutions after selecting an image.

Character Reference Videos

Character reference videos support more motion and flexibility in the driving performance. You can use externally recorded footage, or a Text/Image to Video generation that features a subject.

When the driving performance is longer than the character video, the character video will play forward and then in reverse (known as a boomerang effect) to cover the full length of the driving performance.

If you'd like to avoid the boomerang effect with longer driving performances, we recommend extending the character video generation before processing the Act-One video.
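
To make this behavior concrete, below is a rough conceptual sketch in Python, operating on a list of frame labels. It only illustrates the boomerang idea described above and is not how Runway implements this internally:

```python
def boomerang_extend(character_frames, target_length):
    """Conceptual sketch of the boomerang effect described above:
    play the clip forward, then in reverse, repeating until it covers
    target_length frames. An illustration only, not Runway's code."""
    extended = list(character_frames)
    direction = -1  # the next pass plays in reverse
    while len(extended) < target_length:
        # Append the next pass, skipping the repeated boundary frame.
        next_pass = character_frames[::direction][1:]
        extended.extend(next_pass)
        direction *= -1
    return extended[:target_length]

# A 4-frame character video stretched to cover a 10-frame driving performance:
print(boomerang_extend(["f0", "f1", "f2", "f3"], 10))
# ['f0', 'f1', 'f2', 'f3', 'f2', 'f1', 'f0', 'f1', 'f2', 'f3']
```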

Example Character Reference Inputs

Act-One can support a wide variety of inputs, but those that follow our best practices will provide more consistent results than more experimental choices.

Below is a chart that outlines our recommendations in more detail. Variations annotated with a ✅ should work well in most cases, ⚠️ may sometimes work or produce unexpected results, and ❌ will likely not provide ideal results.

This chart isn’t meant to deter experimentation, but rather to act as a resource for those who need each generation to be dependable. Don’t be afraid to venture outside these recommendations if you’re looking to push the limits of Act-One.

  • Character type: Human, Non-human
  • Character angle: Forward-facing/Front view, Profile view
  • Character distance: Shoulders and up, Torso and up, Full body ⚠️
  • Character silhouette: Intermediate, Complex ⚠️

 

Step 3 – Configuring the Motion Intensity

You can configure the Motion Intensity value before generating for additional control over your output.
Click the Settings icon to adjust this value:


Motion Intensity defaults to a value of 3. The value can be set between 1 and 5, where a lower value results in more stability and a higher value produces more expressive motion.

See examples of the different values across the same inputs below:

Example outputs for Motion Intensity values 1, 3, and 5.

 

Step 4 – Generating the Act-One Video

You can hover over the duration modal to see the calculated credit cost before generating.

Click the Generate button after confirming that you’re content with the selected inputs and credit costs.

Your video will begin processing in your current session, where each video will be available for review once complete.

Understanding Act-One Pricing

On Gen-3 Alpha, Act-One charges 10 credits per second with a 5 second minimum, so driving performance videos under 5 seconds will be charged 50 credits. On Gen-3 Alpha Turbo, the rate is 5 credits per second with a 25 credit minimum.

After the 5 second minimum, each additional second is charged at the model’s per-second rate, with partial seconds rounded up to the nearest tenth of a second. For example, a 5.6s driving performance on Gen-3 Alpha would be charged 56 credits.
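
If you’d like to estimate costs ahead of time, here is a minimal sketch of the pricing logic described above, written in Python. The per-second rates and minimums come from the spec table at the top of this article; the rounding behavior for partial seconds is an assumption based on the 5.6s example, and the function is hypothetical rather than part of any Runway API.

```python
from decimal import Decimal, ROUND_UP

# Rates and minimums from the spec table above.
RATES = {
    "gen3_alpha": {"credits_per_second": 10, "minimum_credits": 50},
    "gen3_alpha_turbo": {"credits_per_second": 5, "minimum_credits": 25},
}

def act_one_credits(duration_seconds: str, model: str = "gen3_alpha") -> int:
    """Estimate the credit cost of an Act-One generation.

    Durations are passed as strings (e.g. "5.6") so the decimal arithmetic
    stays exact. Partial seconds are assumed to round up to the nearest
    tenth of a second.
    """
    rate = RATES[model]
    seconds = Decimal(duration_seconds).quantize(Decimal("0.1"), rounding=ROUND_UP)
    cost = seconds * rate["credits_per_second"]
    # Charge whole credits, never less than the per-model minimum.
    return max(int(cost.to_integral_value(rounding=ROUND_UP)), rate["minimum_credits"])

print(act_one_credits("5.6"))                      # 56 credits, matching the example above
print(act_one_credits("3.0"))                      # 50 credits, the Gen-3 Alpha minimum
print(act_one_credits("5.6", "gen3_alpha_turbo"))  # 28 credits at the Turbo rate
```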

 

Iterating and Troubleshooting

Most issues or errors will be specific to your driving performance or character reference inputs and can be resolved by ensuring that the inputs follow the recommended best practices. 

Below is a list of Act-One errors and how to troubleshoot them:

  • "Unable to detect a human face in your video." – Ensure the driving performance is properly lit and the face is unobscured and centered in frame.
  • "Unable to detect a human face in your image." – Ensure the character image follows best practices.
  • "An error occurred while detecting a human face in your video. Please try again later." – Ensure the driving performance contains minimal body and background movement.
  • "We detected too much movement from your video." – Ensure the driving performance contains minimal body and background movement.
  • "We detected unusable audio from your video." – Ensure the audio of your driving performance complies with our Trust & Safety standards.
  • "This content was flagged by our moderation policy." – Ensure the character input complies with our Trust & Safety standards.

 

There may be cases where you don’t encounter an error before generating but notice an issue in your output. These edge cases can generally be resolved by following the best practices or re-running the generation:

  • Face improperly detected – Use a character input that follows best practices.
  • Intermittent artifacts – Re-run the generation.