We present Zero-Painter, a novel training-free framework for layout-conditional text-to-image synthesis that enables the creation of detailed, controlled imagery from textual prompts. Our method combines object masks and individual object descriptions with a global text prompt to generate images with high fidelity. Zero-Painter employs a two-stage process built on our novel Prompt-Adjusted Cross-Attention (PACA) and Region-Grouped Cross-Attention (ReGCA) blocks, ensuring precise alignment of generated objects with both their textual prompts and their mask shapes. Extensive experiments demonstrate that Zero-Painter surpasses current state-of-the-art methods in preserving textual details and adhering to mask shapes.
🔥 News
[2024.06.06] The Zero-Painter paper and code are released.
[2024.02.27] The paper is accepted to CVPR 2024.
⚒️ Installation
Install with pip:
pip3 install -r requirements.txt
💃 Inference: Generate images with Zero-Painter
Download the models and put them in the models folder.
You can then perform inference on a given mask and prompt pair. The prompt file has the following format:
[
    {
        "prompt": "Brown gift box beside red candle.",
        "color_context_dict": {
            "(244, 54, 32)": "Brown gift box",
            "(54, 245, 32)": "red candle"
        }
    }
]
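In this format, each key of color_context_dict is an RGB color in the mask image, paired with the local prompt for the object drawn in that color. As a minimal sketch of how such a color-keyed dictionary can be turned into per-object binary masks (this is a hypothetical helper for illustration; the repository's own loader may differ):

```python
import ast
import numpy as np

def masks_from_color_context(mask_rgb, color_context_dict):
    # mask_rgb: (H, W, 3) uint8 array of the layout mask image.
    # Keys like "(244, 54, 32)" are parsed into RGB tuples, and each
    # object prompt is mapped to a boolean mask of its region.
    masks = {}
    for color_str, prompt in color_context_dict.items():
        color = np.array(ast.literal_eval(color_str), dtype=np.uint8)
        masks[prompt] = np.all(mask_rgb == color, axis=-1)
    return masks

# Tiny synthetic mask: left half is the gift-box color, right half the candle color.
mask = np.zeros((2, 4, 3), dtype=np.uint8)
mask[:, :2] = (244, 54, 32)
mask[:, 2:] = (54, 245, 32)
ctx = {"(244, 54, 32)": "Brown gift box", "(54, 245, 32)": "red candle"}
out = masks_from_color_context(mask, ctx)
print(out["Brown gift box"].sum())  # 4 pixels belong to the gift box
```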
Method
🎓 Citation
If you use our work in your research, please cite our publication:
@article{Zeropainter,
  title={Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis},
  author={Ohanyan, Marianna and Manukyan, Hayk and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
  url={https://arxiv.org/abs/2406.04032},
  publisher={arXiv},
  year={2024}
}