Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel contrastive self-distillation (CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models. In particular, a channel-splitting super-resolution network is first constructed from a target teacher network as a compact student network. Then, we propose a novel contrastive loss to improve the quality of SR images and boost PSNR/SSIM via explicit knowledge transfer. Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN.
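The snippet below is a minimal, illustrative sketch of the contrastive idea described above: the compact student's output is pulled toward the teacher's output (positive sample) and pushed away from degraded negatives (e.g. bicubic-upsampled LR images) in a fixed feature space. The class name, the choice of a frozen VGG-style feature extractor, and all signatures are assumptions for illustration, not the repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveDistillLoss(nn.Module):
    """Sketch of a contrastive distillation loss (hypothetical names).

    Pulls the student SR output toward the teacher SR output (positive)
    and pushes it away from K negative images, measured as L1 distances
    in the feature space of a frozen feature extractor.
    """

    def __init__(self, feature_extractor: nn.Module, eps: float = 1e-7):
        super().__init__()
        self.feat = feature_extractor  # assumption: e.g. a frozen VGG slice
        self.eps = eps

    def forward(self, student_sr, teacher_sr, negatives):
        # student_sr, teacher_sr: (B, C, H, W); negatives: (B, K, C, H, W)
        f_student = self.feat(student_sr)
        f_teacher = self.feat(teacher_sr).detach()
        d_pos = F.l1_loss(f_student, f_teacher)

        k = negatives.size(1)
        d_neg = 0.0
        for i in range(k):
            f_neg = self.feat(negatives[:, i]).detach()
            d_neg = d_neg + F.l1_loss(f_student, f_neg)

        # Minimizing this ratio draws the student toward the teacher
        # while keeping it far from the negatives.
        return d_pos / (d_neg / k + self.eps)
```

In this reading, the channel-splitting student and the full teacher share weights, so the loss acts as self-distillation; the exact feature layers and negative construction used in the paper may differ from this sketch.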
Results
Citation
If you find the code helpful in your research or work, please cite it as:
```
@misc{wang2021compact,
  title={Towards Compact Single Image Super-Resolution via Contrastive Self-distillation},
  author={Yanbo Wang and Shaohui Lin and Yanyun Qu and Haiyan Wu and Zhizhong Zhang and Yuan Xie and Angela Yao},
  year={2021},
  eprint={2105.11683},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
Acknowledgements
This code is built on EDSR (PyTorch). For the training part of the MindSpore version, we referred to DBPN-MindSpore, ModelZoo-RCAN and the official tutorial. We thank the authors for sharing their code.
About
Towards Compact Single Image Super-Resolution via Contrastive Self-distillation, IJCAI 2021