Existing text-to-image diffusion models struggle to synthesize realistic images given dense captions, where each text prompt provides a detailed description for a specific image region.
To address this, we propose DenseDiffusion, a training-free method that adapts a pre-trained text-to-image model to handle such dense captions while offering control over the scene layout.
We first analyze the relationship between generated images' layouts and the pre-trained model's intermediate attention maps.
Next, we develop an attention modulation method that guides objects to appear in specific regions according to layout guidance.
Without requiring additional fine-tuning or datasets, we improve image generation performance given dense captions, as measured by both automatic and human evaluation scores.
In addition, we achieve visual results of similar quality to models specifically trained with layout conditions.
Method
Our goal is to improve the text-to-image model's ability to reflect textual and spatial conditions without fine-tuning.
We formally define our condition as a set of $N$ segments ${\lbrace(c_{n},m_{n})\rbrace}^{N}_{n=1}$, where each segment $(c_n,m_n)$ describes a single region.
Here $c_n$ is a non-overlapping part of the full-text caption $c$, and $m_n$ denotes a binary map representing each region. Given the input conditions, we modulate attention maps of all attention layers on the fly so that the object described by $c_n$ can be generated in the corresponding region $m_n$.
To maintain the pre-trained model's generation capacity, we design the modulation to consider the original value range of the attention scores and each segment's area.
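As a rough, illustrative sketch of this idea (the function name, tensor shapes, and balancing terms below are assumptions rather than the repository's actual implementation), one way to shift pre-softmax attention scores toward each segment's region, while respecting the original score range and the segment's area, looks like this:

```python
import torch

def modulate_scores(scores, region_masks, strength=0.5):
    """Hypothetical sketch of the attention modulation described above.

    scores:       (batch, num_queries, num_keys) pre-softmax attention scores,
                  where queries index image positions and keys index text tokens
    region_masks: (num_keys, num_queries) binary masks; region_masks[k, q] = 1
                  if image position q lies inside the region of token k
    """
    value_range = scores.max() - scores.min()       # keep shifts within the original value range
    modulated = scores.clone()
    for k in range(region_masks.shape[0]):
        mask = region_masks[k].to(scores.dtype)     # binary map m_n for this token, (num_queries,)
        area = mask.mean().clamp(min=1e-6)          # larger segments receive a weaker boost
        shift = strength * value_range * (1.0 - area)
        # Raise scores inside the target region, lower them outside it.
        modulated[:, :, k] += shift * mask - shift * (1.0 - mask)
    return modulated
```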
Adjust the full text. By default, the full text is automatically concatenated from each segment's text. The default works well, but refining the full text will further improve the result.
Check the generated images, and tune the hyperparameters below if needed; a sketch of how they enter the modulation follows their descriptions.
wc : The degree of attention modulation at cross-attention layers.
ws : The degree of attention modulation at self-attention layers.
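Relating the two knobs to the sketch above (again a hypothetical wiring rather than the repository's actual code), wc and ws act as the modulation strengths applied at the two kinds of attention layers:

```python
# Hypothetical wiring of the demo's two knobs into the sketch above.
wc = 1.0   # modulation strength at cross-attention layers
ws = 0.3   # modulation strength at self-attention layers

def modulate(scores, region_masks, is_cross_attention):
    # For self-attention, keys also index image positions, so region_masks
    # would pair each image position with the region it belongs to.
    strength = wc if is_cross_attention else ws
    return modulate_scores(scores, region_masks, strength=strength)
```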
Benchmark
We share the benchmark used in our model development and evaluation here.
The code for preprocessing segment conditions is available here.
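As a rough illustration of what the segment conditions $(c_n, m_n)$ look like (the file name, label encoding, and captions below are made up; see the linked preprocessing code for the actual format):

```python
import numpy as np
from PIL import Image

# Hypothetical example: a label map whose pixel value n marks region n.
label_map = np.array(Image.open("layout.png"))      # (H, W) integer labels
segment_texts = [
    "a golden retriever sitting on the grass",      # caption c_1 for region 1
    "a red brick house with a blue door",           # caption c_2 for region 2
]

segments = []
for n, text in enumerate(segment_texts, start=1):
    mask = (label_map == n).astype(np.float32)      # binary map m_n
    segments.append((text, mask))                   # one (c_n, m_n) segment

# Default full caption: the concatenation of the segment texts.
full_caption = " ".join(text for text, _ in segments)
```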
BibTeX
@inproceedings{densediffusion,
  title={Dense Text-to-Image Generation with Attention Modulation},
  author={Kim, Yunji and Lee, Jiyoung and Kim, Jin-Hwa and Ha, Jung-Woo and Zhu, Jun-Yan},
  year={2023},
  booktitle={ICCV}
}
Acknowledgment
The demo was developed with reference to this source code. Thanks for the inspiring work! 🙏
About
Official PyTorch Implementation of DenseDiffusion (ICCV 2023)