Runway's content moderation filter is an automated process that scans your inputs and the resulting outputs for anything not allowed under our Trust & Safety standards.
Specific categories of automatically blocked content include:
- Nudity, obscenity, or overly provocative content
- Gore, blood, or other viscera
- Sexually explicit content
- Offensive subject matter
Because content moderation is automatic, we are unable to whitelist specific accounts or subjects that are being moderated, regardless of the intent of your input or final project. We receive many inquiries asking us to "temporarily disable" the content moderation system for an account, project, or topic; unfortunately, we cannot complete these requests, so please refrain from making a case to a support agent.
If an entire category of content is being inappropriately moderated (that is, anything repeatedly declined that is not an explicit item on the list above), feel free to send in a support ticket with as much detail as you can about what is being blocked so we can make system-wide adjustments. Categories where we are especially interested in improving moderation include:
- Model bias or stereotyping
- False positives (an input was moderated when it should not have been moderated)
- False negatives (an input was not moderated when it should have been moderated)
We will likely not be able to enact content moderation changes immediately; instead, we will use this information to improve our content moderation systems over the long term.