Please note: ML Lab features are experimental, and support beyond the published documentation is very limited.
Overview
As of the beta version 0.8.0, you can use your own GPU hardware with Runway. This is an experimental feature 🧪 and not all GPU cards are supported.
System requirements and platform support:

- A Linux distribution (tested with Ubuntu 18.04, but likely works with other distros and flavors)
- An NVIDIA GPU that supports CUDA (see a full list of supported GPUs here)
- Recent NVIDIA drivers. Drivers can be downloaded directly from NVIDIA's website or from your package manager. You must have a driver version recent enough to support CUDA 9, 9.2, and 10; see this version compatibility matrix to make sure your drivers are up to date.
- A local installation of nvidia-docker2. (The newer nvidia-container-toolkit is not yet supported; however, both packages can be installed on the same machine and function independently without issue.)
WARNING: Ubuntu 18.04's official package repository for NVIDIA drivers contains out-of-date drivers. To install more recent drivers, add the graphics-drivers PPA: sudo add-apt-repository ppa:graphics-drivers/ppa && sudo apt-get update. Then install the latest drivers via Software & Updates > Additional Drivers or via apt.
We don't offer local GPU support for:

- macOS and Windows
- AMD or Intel graphics cards
- External GPUs (like the Blackmagic eGPU or others)
Why isn't macOS or Windows supported?
Local GPU is only supported on Linux operating systems due to the use of NVIDIA Docker and NVIDIA drivers for Linux. You should try Ubuntu, it's pretty nice 🙂
Installation Guide
The following guide to using a local GPU assumes that the reader is loosely familiar with installing software on Linux.
Step 1: Prerequisites
In order to run models on your own GPU hardware, you must fulfill the following requirements:
- You must be using Runway on Linux (tested on Ubuntu 18.04).
- You must have an NVIDIA GPU that supports CUDA. See a full list of supported GPUs here.
- You must have a recent version of the NVIDIA Linux drivers. Download drivers directly from NVIDIA's website or from your package manager. You must have a driver version recent enough to support CUDA 9, 9.2, and 10. See this version compatibility matrix to make sure your drivers are up to date.
- You must have nvidia-docker2 installed.
This tutorial assumes you've already fulfilled the first two prerequisites above. We'll walk you through installing the latest NVIDIA drivers for your graphics card, as well as installing and configuring nvidia-docker2 so that local Runway models can use your GPU hardware. We'll be installing these dependencies on Ubuntu 18.04. If you are using a different distribution or version, the logical steps will still apply, but the exact commands will differ.
Step 2: Installing Up-to-Date NVIDIA Drivers
Ubuntu 18.04's official package repository for NVIDIA drivers has out-of-date drivers. In order to install more recent drivers, you must add the graphics-drivers PPA. Open up a terminal and run the following commands:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
Now that we've added the graphics-drivers/ppa, we can install the latest NVIDIA drivers via the Additional Drivers section of the Software & Updates GUI application.

Use your application launcher to open the Software & Updates program, then select the Additional Drivers section. After a few seconds, you should see several selectable options of the format "Using NVIDIA driver metapackage from nvidia-driver-XXX". Select the drivers with the highest number in place of "XXX". At the time of this writing (July 2019), that's nvidia-driver-430, but you should feel confident choosing a more recent driver if one is available.

Once you've made your selection, click Apply Changes, then restart your computer for the changes to take effect.
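If you prefer the command line, the same drivers can be installed with apt once the PPA has been added. A minimal sketch — the metapackage name nvidia-driver-430 reflects July 2019 and may have been superseded by a newer version:

```shell
# Install the NVIDIA driver metapackage from the graphics-drivers PPA.
# Substitute a newer version number if one is available; you can list
# candidates with: apt search '^nvidia-driver-'
sudo apt-get install nvidia-driver-430

# Reboot so the new kernel module is loaded.
sudo reboot
```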
Once your computer reboots, open a terminal and run nvidia-smi. You should see an ASCII table with a row for each NVIDIA graphics card attached to your machine. Once you see output similar to this, the NVIDIA drivers have been installed correctly, and you can move on to the next step.
Mon Jul 29 18:19:40 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.26 Driver Version: 430.26 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 00000000:01:00.0 On | N/A |
| 0% 49C P0 46W / 180W | 1119MiB / 8116MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1689 G /usr/lib/xorg/Xorg 40MiB |
| 0 1736 G /usr/bin/gnome-shell 50MiB |
+-----------------------------------------------------------------------------+
Step 3: Installing nvidia-docker2
The last dependency, after you've installed the NVIDIA drivers, is nvidia-docker2. We recommend following the install instructions in the README of the nvidia-docker2 package itself, as the install method changes between releases.

We do not yet support the very recent nvidia-container-toolkit, so be sure you install nvidia-docker2. We have plans to upgrade to the newer NVIDIA Container Toolkit soon! In the meantime, it is perfectly fine to have both nvidia-docker2 and nvidia-container-toolkit installed on the same machine, so you can safely install nvidia-docker2 even if you already have the more recent nvidia-container-toolkit installed.
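For reference, the apt-based install on Ubuntu 18.04 looked roughly like the following at the time of writing. Treat this as a sketch and defer to the nvidia-docker2 README, since the repository URLs and steps may change between releases:

```shell
# Add NVIDIA's package repository for nvidia-docker2 (Ubuntu 18.04).
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install nvidia-docker2 and restart the Docker daemon so it picks up
# the new "nvidia" runtime.
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```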
Once you've installed nvidia-docker2, run the following command to test that Docker containers can access your graphics card using the nvidia runtime:
docker run --runtime nvidia nvidia/cuda:9.0-base nvidia-smi
If all goes well, you should see an ASCII table similar to the one you saw when you ran nvidia-smi manually after installing the NVIDIA drivers.
Step 4: Running Local GPU Models in Runway
If the preceding steps went well, you should be ready to run GPU models locally inside of Runway. You can choose the "LOCAL" run location and the "GPU" hardware type for any model by selecting the "Advanced Options" link in the workspace view of Runway. You will then be prompted to download the model, and once that's done, you can run the model locally on your own GPU!
You can select Local GPU by clicking the Advanced Options link. From there, select LOCAL as the Run Option and GPU as the Hardware option.
If you don't already have the GPU version of the model installed, you will be prompted to download and install it next.
Once the download is complete, you can run the model locally on your own GPU hardware!
Whenever a model is running on your local GPU, you can use the nvidia-smi command to inspect the GPU memory allocation and processor utilization of each model process.
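For example, to refresh the readout every second while a model is running, you can wrap nvidia-smi in watch (available on most distros), or use nvidia-smi's own loop flag:

```shell
# Refresh the nvidia-smi table once per second; press Ctrl+C to exit.
watch -n 1 nvidia-smi

# Alternatively, use nvidia-smi's built-in loop mode.
nvidia-smi --loop=1
```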