Introduction
Gen-4 References allows you to generate consistent characters in new images across endless lighting conditions, locations, and treatments, all from just a single reference image of your character.
References primarily supports character and location preservation; support for more reference types will be added in future updates.
This article covers best practices for image inputs, prompting, and workflow approaches, as well as the results you can expect from References.
References is very new, so many use cases have yet to be discovered. Do not be afraid to experiment and push the boundaries of what is outlined in this article.
Article highlights
- Tag your References to save them for future use
- Use up to three References for a single generation
- Use a high-quality photo of your subject with even, natural lighting
- Neutral expressions are recommended for a blank canvas
- For complex changes, iterate on elements individually and use the resulting images as new References for more control
Workflow Overview
Begin by clicking Generate Image within your dashboard, and ensure that References is selected from the tab beneath the prompting area.
Drag and drop an image into the prompting canvas to add it as a Reference, or select an existing image from your Assets.
Saving and Managing References
By default, your uploaded image will be temporarily saved in the active session, meaning that it will clear if you reload the browser window.
If you'd like your image to persist for future use, hover over the image, click tag to save, and press ENTER.
To remove temporary images, simply refresh the browser. Delete saved References by hovering over the image, clicking the Edit (pen) icon, and then selecting the Remove (trash) icon.
Using a Reference
Click the desired image from the References panel to load it for use. You can select up to three active References for a single generation.
Draft a prompt that describes how to use your Reference to create a new image. You can also use the @ symbol in your prompt to autocomplete a Reference name.
The best practices and recommendations outlined in the Gen-4 Image Prompting Guide apply to References, but this model supports conversational prompting as well. We recommend starting with a simple prompt and iterating by adding more details to work towards your desired output.
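If you eventually want to script this tag-and-prompt pattern rather than work in the dashboard, the structure translates naturally to a programmatic request. The sketch below is purely illustrative: the URL, headers, and JSON field names are assumptions invented for the example (not documented values), so treat it as a shape to adapt against the current developer documentation rather than a working integration.

```python
# Hypothetical sketch of a tagged-reference image request.
# NOTE: the endpoint URL, headers, and field names are assumptions for
# illustration only; check the current API reference before using them.
import os
import requests

API_KEY = os.environ["RUNWAY_API_KEY"]  # assumed environment variable name

payload = {
    "model": "gen4_image",  # assumed model identifier
    "promptText": "@bryan as a high elf in a castle, cinematic color grading",
    "referenceImages": [
        # the tag matches the @name used in the prompt text
        {"uri": "https://example.com/bryan.jpg", "tag": "bryan"},
    ],
}

resp = requests.post(
    "https://api.example.com/v1/text_to_image",  # placeholder URL, not a real endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```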
Generating Consistent Characters
Character Image Recommendations
For optimal results with fewer iterations, use character images that meet these criteria:
- Natural, even lighting
- High image quality
- Neutral subject expression
These recommendations provide a "blank canvas" that simplifies transformation, though References can still work well with stylized inputs.
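If you'd like a quick sanity check that a candidate photo is reasonably large, evenly exposed, and not too flat before uploading it, a small script can flag obvious problems. This is only a rough heuristic sketch, assuming Pillow is installed; the size and brightness thresholds are arbitrary assumptions, not values from this article.

```python
# Rough pre-upload check for a reference photo: size, brightness, and contrast.
# Assumes Pillow (pip install Pillow). Thresholds are arbitrary assumptions.
from PIL import Image, ImageStat

def check_reference(path, min_side=512):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    stats = ImageStat.Stat(img.convert("L"))  # grayscale statistics
    mean, stddev = stats.mean[0], stats.stddev[0]

    if min(w, h) < min_side:
        print(f"Warning: image is small ({w}x{h}); facial details may be lost.")
    if mean < 60 or mean > 200:
        print(f"Warning: average brightness {mean:.0f}/255 looks under- or over-exposed.")
    if stddev < 20:
        print(f"Warning: low contrast (std {stddev:.0f}); flat lighting may hide features.")
    print(f"{path}: {w}x{h}, brightness {mean:.0f}, contrast {stddev:.0f}")

check_reference("bryan.jpg")  # hypothetical filename
```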
In this article, we'll use the following image, named bryan, as our initial reference:
With bryan saved as our reference, we're ready to start prompting.
Single Reference Prompts
Using a single reference image relies on text prompts to describe your desired changes while preserving the character's identity.
This method is quick and versatile, perfect for exploring creative possibilities without needing additional images.
Text prompt | Output
@bryan wearing a denim shirt with the sleeves cut off. he holds a single piece of hay in his mouth. medium length hair in the back. sitting on a plastic lawn chair. cinematic muted color palette. shallow depth of field. | (output image)
@bryan as a high elf in a castle. cinematic with professional color grading. muted color palette. shallow depth of field. pointed ears. flowing white hair. jeweled circlet. elaborate ethereal regal elven attire. | (output image)
💡 Tip: Try describing a subject's shoes or pants to consistently achieve a full-body shot.
Multi-Reference Prompts
Using multiple reference images gives you precise control over specific elements of your resulting generation.
This method produces more predictable results and is ideal when you have a clear vision that would be difficult to describe with text alone.
Text prompt | Additional References | Output
show me bryan in the forest | (reference image) | (output image)
bryan sits atop the floating boulder in weightlessrock | (reference image) | (output image)
bryan in drivingedit | (reference image) | (output image)
💡 Tip: When using an image that already contains a subject, cover the existing face with a black box in a photo editor before uploading. This prevents confusion between the original and new subjects.
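If you'd rather not open a full photo editor, the same black-box masking can be done with a few lines of Python. This is a minimal sketch assuming Pillow is installed; the file name and box coordinates are placeholders you would replace with the face region in your own image.

```python
# Cover an existing subject's face with a black box before uploading the
# image as a scene reference. Assumes Pillow; coordinates are placeholders.
from PIL import Image, ImageDraw

img = Image.open("scene_with_subject.jpg").convert("RGB")  # hypothetical file
draw = ImageDraw.Draw(img)

# (left, top, right, bottom) pixel coordinates of the face to hide.
face_box = (420, 180, 560, 340)
draw.rectangle(face_box, fill="black")

img.save("scene_with_subject_masked.jpg")
```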
Generating Consistent Scenes
Using a Reference image, you can also create consistent environments or "b-roll" by prompting for various angles, focal points, and objects. This helps establish a cohesive visual setting for your project while maintaining the atmosphere of your original Reference.
Below are some examples that use the elven character that we previously generated:
Text prompt | Output
elfbryan profile view | (output image)
show a sword on the ground in elfbryan | (output image)
show a dove resting atop the city wall in elfbryan. sky visible in background. | (output image)
Advanced Iteration
Using multiple References and prompting for completely different images creates much more opportunity for variation between results. Each new variable introduced expands creative possibilities, though results may sometimes differ from your initial vision.
For example, these are the different wardrobe variations we received in a single generation for a full-body shot of elfbryan:
If we prefer the wardrobe in the first image, we can continue working with it for new scenes.
To continue working with an output, hover over the result and select Reference for image:
We'll now save this reference as fullbodyelfbryan to continue working with it in a new setting:
Text prompt | Reference 1 | Reference 2 | Output
fullbodyelfbryan in flowerfield | (fullbodyelfbryan image) | (flowerfield image) | (output image)
Iterating towards a final result with separate Reference pathways allows for more precision as you completely transform a shot, giving you control of the process each step of the way.
Below is an example of using two separate paths, character and scene, to iterate towards a final result:
Path | Initial Reference | Iteration 1 | Iteration 2 | Combined paths
Character path | (image) | (image) | (image) | (combined image)
Scene path | (image) | (image) | (image) | (combined image)
Next Steps
Now that you've created your reference images, take your creations even further.
Hover over any output image and click the camera icon to seamlessly load your creation into our latest video model. This allows you to bring your static images to life with natural movement and animation. See our Gen-4 guides for more details.
These resources will help you expand your creative possibilities beyond still imagery.
For more ways to use References, see the Advanced References Use Cases article.