# DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models 🔥
DiffBlender successfully synthesizes images from complex combinations of input modalities. It enables flexible manipulation of conditions, providing customized generation aligned with user preferences.
We designed its structure to extend intuitively to additional modalities while keeping the training cost low through a partial update of hypernetworks.
Download the DiffBlender model checkpoint from this Hugging Face model, and place it under `./diffblender_checkpoints/`.
Also, prepare the Stable Diffusion model from this link (we used `CompVis/sd-v1-4.ckpt`).
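As a minimal sketch, the DiffBlender checkpoint can also be fetched programmatically with `huggingface_hub`; the repository ID and filename below are assumptions, so substitute the actual values from the Hugging Face model page:

```python
from pathlib import Path

from huggingface_hub import hf_hub_download

# Target directory expected by the inference scripts.
ckpt_dir = Path("./diffblender_checkpoints")
ckpt_dir.mkdir(parents=True, exist_ok=True)

# NOTE: repo_id and filename are assumptions -- replace them with the
# actual values listed on the DiffBlender Hugging Face model page.
hf_hub_download(
    repo_id="sungnyun/diffblender",  # hypothetical repository ID
    filename="diffblender.pth",      # hypothetical checkpoint filename
    local_dir=ckpt_dir,
)
```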
Results will be saved under `./inference/{SAVE_NAME}/`, in the format `{conditions + generated image}`.
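For example, the saved outputs of a run can be listed with a short script; the run name and the image file extension here are assumptions:

```python
from pathlib import Path

SAVE_NAME = "my_run"  # hypothetical run name used at inference time
out_dir = Path("./inference") / SAVE_NAME

# Each file shows the input conditions alongside the generated image.
for result in sorted(out_dir.glob("*.png")):  # extension is an assumption
    print(result)
```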
## BibTeX

```bibtex
@article{kim2023diffblender,
  title={DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models},
  author={Kim, Sungnyun and Lee, Junsoo and Hong, Kibeom and Kim, Daesik and Ahn, Namhyuk},
  journal={arXiv preprint arXiv:2305.15194},
  year={2023}
}
```