Please note: ML Lab features are now experimental, and support beyond the published documentation is very limited.
Overview
Runway makes it possible to use the output of one model as the input to another directly in the application interface. Using this feature, you can make powerful workflows that chain together multiple models with minimal fuss and without writing any code.
This document will guide you through a few sample use cases of model chains. When you're done reading, you'll have all the know-how you need to build your own model chains from scratch.
Data Types
Before showing how to set up a model chain in Runway, it's important to note that you can only chain two models together if the first model's output data type exactly matches the second model's input data type. The data types that a model uses as its input and output are shown in the Characteristics info box in the Model View for the model in question.
If the data types don't match between two models, you won't be given the option in the Runway interface to connect them. It's possible that you'll find two models that don't work together in the Runway app even though it seems like they should. If this happens, let us know! We're working hard to make all of the models chaining-friendly.
You can read more about the kinds of data types that Runway supports in the SDK documentation.
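Under the hood, every model built with the runway-python SDK declares the data types of its inputs and outputs, which is roughly what the Characteristics box summarizes. The sketch below is illustrative only: the command names, the vector length, and the stubbed-out model code are placeholders, not the actual ports of any published model.

```python
# A minimal sketch of how a Runway model declares its input and output
# data types with the runway-python model SDK. The command names, the
# vector length, and the stubbed-out bodies are illustrative only.
import runway
from runway.data_types import vector, image


@runway.setup(options={})
def setup(opts):
    # A real model would load its weights here and return a model object.
    return None


# A StyleGAN-like model: vector in, image out ...
@runway.command('generate', inputs={'z': vector(length=512)}, outputs={'image': image})
def generate(model, inputs):
    raise NotImplementedError('stand-in for the real generator')


# ... and a DenseDepth-like model: image in, image out. In practice each
# command would live in its own model; they share a file here only to show
# that the first command's output type (image) matches the second command's
# input type (image), which is the condition Runway checks before it offers
# to chain two models.
@runway.command('estimate_depth', inputs={'image': image}, outputs={'depth': image})
def estimate_depth(model, inputs):
    raise NotImplementedError('stand-in for the real depth estimator')


if __name__ == '__main__':
    runway.run()
```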
Tutorial 1: The Depths of Imaginary Bedrooms
As a simple example of model chaining, let's consider chaining together StyleGAN and DenseDepth. The StyleGAN model generates images from a latent space, given an input vector. (In the Runway application's input area, you can "explore" this latent space as an image grid.) The DenseDepth model estimates the 3D depth of a 2D image and displays the resulting depth map as a monochrome image (darker areas indicate parts of the source image that are nearer to the camera).
Now, imagine that you want to use DenseDepth to produce a depth map for any image that StyleGAN generates.
Normally, if you wanted to estimate depth for images that you generated with StyleGAN, you'd have to export those images to a folder on your hard drive first, then upload them to DenseDepth as a second step. However, because the output data type of StyleGAN matches the input data type of DenseDepth, Runway lets you "chain" the output of the first model directly to the input of the second, saving you a little work (and hard drive space).
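For comparison, here is roughly what that manual route looks like if you drive both models yourself through their local network interfaces instead of chaining them in the app. Everything model-specific below is an assumption made for illustration: the ports, the /query route, and the JSON field names are placeholders, and each model's Network tab in Runway shows the real address and schema.

```python
# A rough sketch of the manual, un-chained workflow: query a running
# StyleGAN model over its local HTTP interface, save the result to disk,
# then send that image to a running DenseDepth model. Ports, routes, and
# field names are placeholders; check each model's Network tab in Runway.
import base64
import requests

STYLEGAN_URL = 'http://localhost:8000/query'    # placeholder port/route
DENSEDEPTH_URL = 'http://localhost:8001/query'  # placeholder port/route

# Step 1: ask the StyleGAN-like model for an image from a latent vector.
generated = requests.post(STYLEGAN_URL, json={'z': [0.0] * 512}).json()
image_data_uri = generated['image']  # typically a base64-encoded data URI

# (This is the "export to disk, then re-upload" step that chaining saves you.)
with open('generated.png', 'wb') as f:
    f.write(base64.b64decode(image_data_uri.split(',')[-1]))

# Step 2: send the same image to the DenseDepth-like model.
depth = requests.post(DENSEDEPTH_URL, json={'image': image_data_uri}).json()
print('Depth response fields:', list(depth.keys()))
```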
So let's build this chain in the application! To get started, open Runway and use the Browse Models interface to add the StyleGAN model and the DenseDepth model to a new workspace.
To create the chain, first select the DenseDepth model in the left-hand sidebar. At the top of the Input area, you can select the source for the image data that the Runway application will send to the model. The Camera option uses your webcam, and the File option reads image data from a file on your hard drive. You should see a third option to the right of Camera and File that reads "Model Output." This option lets you select other models in the workspace with outputs that match the data type of the current model's input. You should see your StyleGAN model in that dropdown. Select it.
Once you've selected StyleGAN as an input, you'll notice the "chain" icon in the entry for DenseDepth in the left-hand sidebar is active. This is to remind you that the model's input is chained to the output of another model.
You're ready to go! Start both of the models. Once the models are running, you should be able to select an item in the Vector input panel of the StyleGAN model, then select the DenseDepth model in the sidebar. You'll see that the input area of DenseDepth shows the image you just selected in StyleGAN, and the output area has the DenseDepth output inferred from the StyleGAN image. Success!
Tutorial 2: Reconstructing Drawings with Segmentations, im2txt, and AttnGAN
Chaining two models isn't cool. You know what's cool? Chaining three models. Or more!
Let's say you want to (a) generate an image from a semantic segmentation using SPADE-COCO, (b) generate a caption for that image using im2txt, and (c) generate a new image from that caption using AttnGAN, a bit like a game of Exquisite Corpse with machine learning. That's three different models and three different data types: segmentation, image, and text.
To make this happen, create a new empty workspace and add the three models mentioned above: SPADE-COCO, im2txt, and AttnGAN. Select the im2txt model in the sidebar and choose SPADE-COCO as its input (via the Model Output option, as before). Then select AttnGAN in the sidebar and choose im2txt as its input.
Now go back to the SPADE-COCO model and draw something in the input area with the tools provided in the right-side panel. When you're satisfied, run all of the models in the chain. Once the models have all started and completed inference, you should see the caption for the SPADE-COCO-generated image in the output area of the im2txt model, and your "reconstructed" image in the output area of the AttnGAN model.
Next Steps
You now have the knowledge you need to connect models together in Runway. We hope you find this feature useful! Now that you know the basics, here are some other model combinations you might try:
- Use MaskRCNN to mask out the backgrounds of images and then send them to PhotoSketch to make line drawings.
- Use the output of im2txt as a "seed" for GPT-2 text generation.
- Create a feedback loop with SPADE-COCO and DeepLab to produce infinite hallucinogenic imagery (a rough sketch follows below).
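That last idea can also be scripted against the models' local network interfaces. The sketch below is illustrative only: the ports, the /query route, the JSON field names, and the starting segmentation file are placeholders, and the Network tab of each running model shows the real values to use.

```python
# Illustrative sketch of a feedback loop over Runway's local HTTP
# interfaces: a segmentation map goes to SPADE-COCO, the generated image
# goes to DeepLab, and DeepLab's segmentation is fed back to SPADE-COCO.
# Ports, routes, and field names are placeholders; check each model's
# Network tab for the real ones.
import base64
import time
import requests

SPADE_URL = 'http://localhost:8000/query'    # placeholder
DEEPLAB_URL = 'http://localhost:8001/query'  # placeholder


def data_uri_from_file(path):
    # Encode a local PNG as a base64 data URI, the format Runway's HTTP
    # interface typically expects for image inputs.
    with open(path, 'rb') as f:
        return 'data:image/png;base64,' + base64.b64encode(f.read()).decode('ascii')


# Placeholder starting point: any segmentation map exported from Runway.
segmentation = data_uri_from_file('starting_segmentation.png')

for step in range(10):
    # Segmentation map -> generated image.
    image = requests.post(SPADE_URL, json={'semantic_map': segmentation}).json()['output']
    # Generated image -> new segmentation map, fed back into the loop.
    segmentation = requests.post(DEEPLAB_URL, json={'image': image}).json()['segmentation']
    print(f'step {step}: received a new frame and segmentation')
    time.sleep(0.5)  # avoid hammering the local model servers
```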
Note that all of Runway's other features remain available when you use model chains: working with entire directories of images, exporting data to CSV/JSON, the network interfaces, and so on. Have fun!