TextureDiffusion
[ICASSP 2025 Oral] The official implementation of the paper "TextureDiffusion: Target Prompt Disentangled Editing for Various Texture Transfer".
Recently, text-guided image editing has achieved significant success. However, when changing the texture of an object, existing methods can only apply simple textures such as wood or gold; complex textures such as cloud or fire remain a challenge. This limitation stems from the fact that the target prompt must contain both the input image content and <texture>, which restricts how the texture can be represented. In this paper, we propose TextureDiffusion, a tuning-free image editing method for various texture transfer.
💻 Installation
We recommend running our code on an NVIDIA GPU under Linux. Currently, our method requires around 13 GB of GPU memory.
Clone the repo:

```bash
git clone https://github.com/THU-CVML/TextureDiffusion.git
cd TextureDiffusion
```
To install the required libraries, simply run the following command:
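Assuming the repository follows the usual requirements.txt convention (check the repo root for the exact file name), the install step would look like:

```bash
# Assumes a requirements.txt at the repository root.
pip install -r requirements.txt
```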
🚀 Quick Start
The notebook main.ipynb provides the editing examples.
Note: within main.ipynb, you can set parameters such as attention_step, attention_layer, and resnet_step. We mainly conduct experiments on Stable Diffusion v1-4, but our method generalizes to other versions (such as v1-5).
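For orientation, here is a minimal sketch of how these knobs might be wired up. The pipeline loading uses the standard diffusers API; the three parameter values below are purely illustrative placeholders, and main.ipynb remains the authoritative reference for names and defaults:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion v1-4, the checkpoint the paper mainly uses.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical values for the knobs exposed in main.ipynb (placeholders,
# not the paper's defaults). They control which denoising steps and
# U-Net layers participate in feature injection.
edit_params = {
    "attention_step": 25,   # hypothetical: denoising steps for self-attention feature injection
    "attention_layer": 10,  # hypothetical: U-Net layer index at which injection starts
    "resnet_step": 15,      # hypothetical: denoising steps for ResNet feature injection
}
```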
Dataset: For the quantitative experiments, we use the "change material" editing type of PIE-Bench. We found that some text prompts do not meet the standard of changing material. For example, for the source prompt "the 2020 honda hrx is driving down the road", the target prompt is "the 2020 honda hrx is driving down the road [full of flowers]", which adds new content rather than changing a material.
We therefore modified such prompts; the modified annotation file is mapping_file_modified.json. To run the quantitative experiments, use this file to replace mapping_file.json in PIE-Bench. The modified prompts are also listed in modified_prompt.txt.
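Concretely, the replacement is a single copy; the PIE-Bench path below is a placeholder for your local copy of the benchmark:

```bash
# /path/to/PIE-Bench is a placeholder; point it at your local PIE-Bench checkout.
cp mapping_file_modified.json /path/to/PIE-Bench/mapping_file.json
```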
📖 Citation
If you find our repo helpful, please consider leaving a star or citing our paper :)
```bibtex
@inproceedings{su2025texturediffusion,
  title={TextureDiffusion: Target Prompt Disentangled Editing for Various Texture Transfer},
  author={Su, Zihan and Zhuang, Junhao and Yuan, Chun},
  booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2025},
}
```