Abstract: Diffusion Probabilistic Models (DPMs) have recently shown remarkable performance in image generation and are capable of producing highly realistic images. When adapting DPMs to image restoration tasks, the crucial question is how to integrate the conditional information so that the model generates accurate and natural outputs, an aspect that has been largely overlooked in existing works.
In this paper, we present a unified conditional framework based on diffusion models for image restoration. We leverage a lightweight UNet to predict an initial guidance and use the diffusion model to learn the residual of that guidance. By carefully designing the basic module and the integration module of the diffusion model block, we inject the guidance and other auxiliary conditional information into every block of the diffusion model, achieving spatially adaptive conditioning of the generation. To handle high-resolution images, we propose a simple yet effective inter-step patch-splitting strategy that produces arbitrary-resolution images without grid artifacts. We evaluate our conditional framework on three challenging tasks: extreme low-light denoising, deblurring, and JPEG restoration, demonstrating significant improvements in perceptual quality and strong generalization across restoration tasks.
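The inter-step patch-splitting strategy is only described at a high level above; the sketch below shows one plausible reading of it in PyTorch. The `step_fn` callable, the patch size, and the per-step grid-offset schedule are all illustrative assumptions, not the paper's exact scheme.

```python
import torch

def patched_reverse_step(x, step_fn, patch=256, shift=0):
    """Apply one reverse-diffusion step patch-by-patch.

    x       : (B, C, H, W) current noisy sample; H and W are assumed to be
              multiples of `patch` to keep the sketch short
    step_fn : callable running one denoising step on a tile
    shift   : offset of the patch grid for this step; changing it between
              steps moves the patch seams around so they do not accumulate
              into visible grid artifacts
    """
    # Roll so the patch grid is offset by `shift`; wrap-around at the
    # borders is a simplification of proper boundary handling.
    x = torch.roll(x, shifts=(shift, shift), dims=(-2, -1))
    out = torch.empty_like(x)
    _, _, H, W = x.shape
    for top in range(0, H, patch):
        for left in range(0, W, patch):
            tile = x[..., top:top + patch, left:left + patch]
            out[..., top:top + patch, left:left + patch] = step_fn(tile)
    return torch.roll(out, shifts=(-shift, -shift), dims=(-2, -1))

# Sampling loop (schematic): vary the grid offset across steps so that
# patch boundaries never stay in the same place.
# for i, t in enumerate(reversed(range(T))):
#     x = patched_reverse_step(x, lambda tile: reverse_step(tile, t),
#                              shift=(i * patch // 2) % patch)
```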
Network Architecture
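The framework pairs a lightweight UNet, which predicts an initial guidance image, with a diffusion model that learns the residual between the guidance and the clean target; the guidance is injected into every diffusion block as a spatial condition. Below is a minimal PyTorch sketch of these two pieces. All module names, channel widths, and the scale/shift (SFT/FiLM-style) injection are assumptions for illustration, not the released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidanceUNet(nn.Module):
    """Stand-in for the lightweight UNet that predicts initial guidance
    from the degraded input (a plain conv stack here for brevity)."""
    def __init__(self, ch=3, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, ch, 3, padding=1),
        )

    def forward(self, degraded):
        return self.body(degraded)

class SpatialConditionInject(nn.Module):
    """Integration module: turns the guidance (plus any auxiliary condition,
    concatenated along channels) into a spatially-varying scale and shift
    applied to the features of a diffusion-model block."""
    def __init__(self, feat_ch, cond_ch):
        super().__init__()
        self.to_scale = nn.Conv2d(cond_ch, feat_ch, 3, padding=1)
        self.to_shift = nn.Conv2d(cond_ch, feat_ch, 3, padding=1)

    def forward(self, feat, cond):
        # Resize the condition to the block's resolution so every block,
        # at every scale, receives spatially aligned conditioning.
        cond = F.interpolate(cond, size=feat.shape[-2:], mode='bilinear',
                             align_corners=False)
        return feat * (1 + self.to_scale(cond)) + self.to_shift(cond)

# Residual formulation (schematic): the diffusion model is trained on
# residual = clean - guidance, and at inference
#   restored = guidance + sampled_residual
```

With this split, the diffusion model only has to capture what the deterministic guidance misses, which matches the abstract's description of learning "the residual of the guidance".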
Training
Coming soon.
Evaluation
SID
Download the denoising model and put it into the folder './experiments/sid/checkpoint'
Download the testing dataset and put it into the folder './dataset'
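As a quick sanity check that the files landed in the right places (the loading call below is a generic assumption; the repo's actual evaluation script is not shown here):

```python
import os
import torch

ckpt_dir = './experiments/sid/checkpoint'
data_dir = './dataset'
assert os.path.isdir(ckpt_dir), f'missing {ckpt_dir}'
assert os.path.isdir(data_dir), f'missing {data_dir}'

# Try loading each checkpoint file onto CPU just to verify it is readable.
for name in os.listdir(ckpt_dir):
    state = torch.load(os.path.join(ckpt_dir, name), map_location='cpu')
    print(name, 'loaded:', type(state).__name__)
```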
Citation
@article{zhang2023UCDIR,
author = {Zhang, Yi and Shi, Xiaoyu and Li, Dasong and Wang, Xiaogang and Wang, Jian and Li, Hongsheng},
title = {A Unified Conditional Framework for Diffusion-based Image Restoration},
journal = {arXiv preprint arXiv:2305.20049},
year = {2023},
}