ConceptExpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction
Abstract
While personalized text-to-image generation has enabled the learning of a single concept from multiple images, a more practical yet challenging scenario involves learning multiple concepts within a single image. However, existing works tackling this scenario heavily rely on extensive human annotations. In this paper, we introduce a novel task named Unsupervised Concept Extraction (UCE) that considers a fully unsupervised setting without any human knowledge of the concepts. Given an image that contains multiple concepts, the task aims to extract and recreate individual concepts solely relying on the existing knowledge from pretrained diffusion models. To address this problem, we present ConceptExpress that tackles UCE by unleashing the inherent capabilities of pretrained diffusion models in two aspects. Specifically, a concept localization approach automatically locates and disentangles salient concepts by leveraging spatial correspondence provided by diffusion self-attention; and based on the lookup association between a concept and a conceptual token, a concept-wise optimization process learns discriminative tokens that represent each individual concept. Finally, we establish an evaluation protocol tailored for the UCE task. Extensive experiments show the effectiveness of ConceptExpress, demonstrating it to be a promising solution to UCE.
Method
ConceptExpress can disentangle the concepts in a compositional scene and learn a discriminative conceptual token for each individual concept.
ConceptExpress presents two major innovations:
- For concept disentanglement, we propose a concept localization approach that automatically locates salient concepts within the image. This approach clusters spatial points on the self-attention map, building upon the observation that Stable Diffusion has learned good unsupervised spatial correspondence in its self-attention layers (see the first sketch after this list).
- For conceptual token learning, we employ concept-wise masked denoising optimization that reconstructs each located concept. This optimization is based on a token lookup table that associates each located concept with its corresponding conceptual token (see the second sketch after this list).
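As a rough illustration of the localization idea, the sketch below clusters the rows of a diffusion self-attention map so that spatial locations attending to similar regions fall into the same group. The array shapes, the toy random attention map, and the use of k-means are illustrative assumptions, not the paper's exact procedure:

import numpy as np
from sklearn.cluster import KMeans

def localize_concepts(self_attn, num_concepts):
    # self_attn: (HW, HW) array; row i is the attention distribution of
    # spatial location i over all HW locations.
    # Locations that attend to similar regions receive the same cluster
    # label, and each cluster is treated as one candidate concept mask.
    return KMeans(n_clusters=num_concepts, n_init=10).fit_predict(self_attn)

# Toy usage on a random 16x16 latent grid (256 spatial locations).
hw = 16 * 16
attn = np.random.rand(hw, hw)
attn /= attn.sum(axis=1, keepdims=True)   # rows are attention distributions
labels = localize_concepts(attn, num_concepts=3).reshape(16, 16)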
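Similarly, here is a minimal sketch of concept-wise masked denoising driven by a token lookup table. TinyUNet, the fixed tensor shapes, and the simplified noising step are stand-ins for the frozen pretrained Stable Diffusion UNet and its noise scheduler used by the actual method; only the learnable token embeddings are optimized:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    # Stand-in denoiser: predicts noise from noisy latents plus a token embedding.
    def __init__(self, ch=4, dim=8):
        super().__init__()
        self.proj = nn.Linear(dim, ch)
        self.net = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x, token):
        return self.net(x + self.proj(token)[:, :, None, None])

latents = torch.randn(1, 4, 16, 16)                    # latents of the source image
masks = {0: (torch.rand(1, 1, 16, 16) > 0.5).float(),  # located concept masks
         1: (torch.rand(1, 1, 16, 16) > 0.5).float()}
# Token lookup table: one learnable embedding per located concept.
tokens = nn.ParameterDict({str(k): nn.Parameter(torch.randn(1, 8)) for k in masks})
unet = TinyUNet()
opt = torch.optim.Adam(tokens.parameters(), lr=1e-3)

noise = torch.randn_like(latents)
noisy = latents + noise                                # placeholder for scheduler.add_noise
loss = 0.0
for cid, mask in masks.items():
    pred = unet(noisy, tokens[str(cid)])
    # The loss is restricted to this concept's region, so each conceptual
    # token only learns from the pixels of its own concept.
    loss = loss + F.mse_loss(pred * mask, noise * mask)
opt.zero_grad()
loss.backward()
opt.step()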
See more method details in our paper!
Results
Unsupervised concept extraction
BaS†: Break-a-Scene adapted to the unsupervised setting by using the instance masks identified by our method as the ground-truth segmentation masks.
Text-guided generation
ConceptExpress is also capable of text-guided generation:
[Image gallery] For each source image, the extracted concepts are recreated under text prompts such as "with a city in the background", "with sunflowers around it", "in the snow", "with the Eiffel Tower in the background", "floating on top of water", "among the skyscrapers in New York city", "with a beautiful sunset", and "on a cobblestone street".
BibTeX
If you find this project useful for your research, please cite the following:
@InProceedings{hao2024conceptexpress,
title={Concept{E}xpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction},
author={Shaozhe Hao and Kai Han and Zhengyao Lv and Shihao Zhao and Kwan-Yee~K. Wong},
booktitle={ECCV},
year={2024},
}