Abstract: Image super-resolution (SR) methods typically model degradation to improve reconstruction accuracy in complex and unknown degradation scenarios. However, extracting degradation information from low-resolution images is challenging, which limits model performance. To boost image SR performance, one feasible approach is to introduce additional priors. Inspired by advancements in multi-modal methods and text prompt image processing, we introduce text prompts to image SR to provide degradation priors. Specifically, we first design a text-image generation pipeline to integrate text into the SR dataset through a text degradation representation and a degradation model. The text representation adopts a binning-based discretization to describe the degradation abstractly. This representation preserves the flexibility of text and remains user-friendly. Meanwhile, we propose PromptSR to realize text-prompt SR. PromptSR utilizes a pre-trained language model (e.g., T5 or CLIP) to enhance restoration. We train the model on the generated text-image dataset. Extensive experiments indicate that introducing text prompts into SR yields excellent results on both synthetic and real-world images.
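To make the binning-based text representation concrete, here is a minimal sketch (not the authors' code): continuous degradation parameters sampled by the degradation model are discretized into bins, and each bin maps to a level word, producing a prompt that can be paired with the low-resolution image in the generated text-image dataset. All bin edges, parameter names, and vocabulary below are illustrative assumptions.

# Minimal sketch of a binning-based text degradation representation.
# Continuous degradation parameters are discretized into bins; each
# bin maps to a word, yielding an abstract, user-friendly prompt.
import numpy as np

# Hypothetical descriptor vocabulary, ordered from mild to severe.
LEVEL_WORDS = ["no", "light", "medium", "heavy"]

def bin_level(value, edges):
    """Map a continuous degradation value to a discrete level word."""
    idx = int(np.digitize(value, edges))  # index of the bin containing value
    return LEVEL_WORDS[min(idx, len(LEVEL_WORDS) - 1)]

def degradation_to_prompt(blur_sigma, noise_level, jpeg_quality):
    """Abstract a sampled degradation into a text prompt."""
    parts = [
        f"{bin_level(blur_sigma, edges=[0.5, 1.5, 3.0])} blur",
        f"{bin_level(noise_level, edges=[5, 15, 30])} noise",
        # Lower JPEG quality means stronger compression, so invert the scale.
        f"{bin_level(100 - jpeg_quality, edges=[10, 40, 70])} compression",
    ]
    return ", ".join(parts)

print(degradation_to_prompt(blur_sigma=2.0, noise_level=35, jpeg_quality=60))
# -> "medium blur, heavy noise, medium compression"

In PromptSR, such a prompt would then be embedded by the pre-trained language model (e.g., the T5 or CLIP text encoder) and injected into the restoration network as a degradation prior.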
We achieve state-of-the-art performance on synthetic and real-world datasets. Detailed results can be found in the paper.
Evaluation on Synthetic Datasets
[Figures: quantitative and visual comparisons on synthetic datasets]
Evaluation on Real-World Datasets
[Figures: quantitative and visual comparisons on real-world datasets]
📎 Citation
If you find the code helpful in your research or work, please cite the following paper.
@article{chen2023image,
    title={Image Super-Resolution with Text Prompt Diffusion},
    author={Chen, Zheng and Zhang, Yulun and Gu, Jinjin and Yuan, Xin and Kong, Linghe and Chen, Guihai and Yang, Xiaokang},
    journal={arXiv preprint arXiv:2311.14282},
    year={2023}
}