
Pipeline

In the training phase, we begin by synthesizing a degraded version $y$ of a clean image $x$. Our degradation synthesis pipeline also creates a restoration prompt ${c}_r$, which contains numeric parameters that reflect the intensity of the introduced degradation. We then inject the synthetic restoration prompt into a ControlNet adapter, which connects to the frozen backbone, driven by the semantic prompt ${c}_s$, through our proposed modulation fusion blocks ($\gamma$, $\beta$).
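The sketch below illustrates one way such a modulation fusion block could look in PyTorch: the adapter feature predicts per-channel scale ($\gamma$) and shift ($\beta$) terms that modulate the frozen backbone feature. The module name, layer choices, and zero initialization are assumptions for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn

class ModulationFusionBlock(nn.Module):
    """Hypothetical FiLM-style fusion: predict per-channel scale (gamma) and
    shift (beta) from the adapter feature and apply them to the frozen
    backbone feature. A sketch, not the authors' exact implementation."""

    def __init__(self, channels: int):
        super().__init__()
        # Zero-initialized 1x1 projection so the fusion starts as an identity
        # mapping and does not disturb the frozen backbone early in training.
        self.to_gamma_beta = nn.Conv2d(channels, 2 * channels, kernel_size=1)
        nn.init.zeros_(self.to_gamma_beta.weight)
        nn.init.zeros_(self.to_gamma_beta.bias)

    def forward(self, backbone_feat: torch.Tensor, adapter_feat: torch.Tensor) -> torch.Tensor:
        # Split the projection into gamma (scale) and beta (shift).
        gamma, beta = self.to_gamma_beta(adapter_feat).chunk(2, dim=1)
        # Modulate the frozen backbone feature: (1 + gamma) * x + beta.
        return backbone_feat * (1.0 + gamma) + beta


# Toy usage: fuse a 64-channel adapter feature into the backbone feature.
fuse = ModulationFusionBlock(64)
backbone_feat = torch.randn(1, 64, 32, 32)
adapter_feat = torch.randn(1, 64, 32, 32)
out = fuse(backbone_feat, adapter_feat)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```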

At test time, users can employ the SPIRE framework either as a blind restoration model, with the restoration prompt $\textit{``Remove all degradation''}$ and an empty semantic prompt $\varnothing$, or they can manually adjust the restoration prompt ${c}_r$ and the semantic prompt ${c}_s$ to obtain the desired result.
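A minimal sketch of the two test-time modes is shown below; the container and field names are hypothetical and serve only to illustrate how the restoration and semantic prompts are paired.

```python
from dataclasses import dataclass

@dataclass
class RestorationRequest:
    """Hypothetical pairing of the two test-time prompts; the names here are
    assumptions used only to illustrate the two usage modes."""
    restoration_prompt: str
    semantic_prompt: str

# Blind restoration: generic restoration prompt, empty semantic prompt.
blind = RestorationRequest(
    restoration_prompt="Remove all degradation",
    semantic_prompt="",
)

# Controllable restoration: the user adjusts both prompts explicitly.
guided = RestorationRequest(
    restoration_prompt="Remove the noise and blur",
    semantic_prompt="a red vintage car on a cobblestone street",
)

print(blind, guided, sep="\n")
```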

Restoration Prompting




Prompt space walking visualization for the restoration prompt. Our method can decouple the restoration direction and strength via only natural language prompting.
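One way to picture such a walk through the restoration prompt space is to vary the degradation type (direction) and its strength independently in the text, as in the toy sweep below; the prompt template is an assumption, not the exact phrasing used in the paper.

```python
# Hypothetical prompt sweep: the restoration direction (degradation type) and
# the strength are varied independently, purely through natural language.
directions = ["denoise", "deblur", "remove JPEG artifacts"]
strengths = [0.25, 0.5, 0.75, 1.0]

for direction in directions:
    for strength in strengths:
        print(f"{direction} with strength {strength}")
```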

Semantic Prompting

Test-time semantic prompting. Our framework restores degraded images guided by flexible semantic prompts, while unrelated background elements and global tones remain aligned with the degraded input conditioning.

Baseline Comparison

BibTeX

@article{qi2023tip,
  title={TIP: Text-Driven Image Processing with Semantic and Restoration Instructions},
  author={Chenyang Qi and Zhengzhong Tu and Keren Ye and Mauricio Delbracio and Peyman Milanfar and Qifeng Chen and Hossein Talebi},
  journal={arXiv preprint arXiv:2312.11595},
  year={2023}
}

Acknowledgement

We are grateful to Kelvin Chan and David Salesin for their valuable feedback. We also extend our gratitude to Shlomi Fruchter, Kevin Murphy, Mohammad Babaeizadeh, and Han Zhang for their instrumental contributions in facilitating the initial implementation of the latent diffusion model.