We replace the SPAB module with the proposed SConvLB module and incorporate ConvLoRA layers into both the pixel shuffle block and its preceding convolutional layer. The Spatial Affinity Distillation Loss is computed between each pair of corresponding teacher and student feature maps.
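For intuition, here is a minimal PyTorch sketch of a ConvLoRA-style convolution and a spatial affinity distillation loss. This is not the repository's implementation: the names (ConvLoRA2d, spatial_affinity_distill_loss), the rank/scaling choices, and the exact affinity formulation are illustrative assumptions and may differ from the paper.

```python
# Illustrative sketch only -- not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLoRA2d(nn.Module):
    """Frozen base convolution plus a trainable low-rank branch (LoRA-style)."""

    def __init__(self, base_conv: nn.Conv2d, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base_conv
        for p in self.base.parameters():           # keep pretrained weights fixed
            p.requires_grad = False
        # Low-rank decomposition: k x k conv down to `rank` channels, then 1x1 up.
        self.lora_down = nn.Conv2d(base_conv.in_channels, rank,
                                   kernel_size=base_conv.kernel_size,
                                   stride=base_conv.stride,
                                   padding=base_conv.padding, bias=False)
        self.lora_up = nn.Conv2d(rank, base_conv.out_channels, 1, bias=False)
        nn.init.zeros_(self.lora_up.weight)        # branch contributes zero at init
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_up(self.lora_down(x))


def spatial_affinity(feat: torch.Tensor) -> torch.Tensor:
    """Pairwise spatial-position affinity matrix of shape (B, HW, HW)."""
    f = feat.flatten(2)                            # (B, C, HW)
    f = F.normalize(f, dim=1)                      # normalize channel descriptors
    return f.transpose(1, 2) @ f                   # cosine similarity between positions


def spatial_affinity_distill_loss(student_feat, teacher_feat):
    """L1 distance between student and teacher spatial affinity matrices."""
    return F.l1_loss(spatial_affinity(student_feat), spatial_affinity(teacher_feat))
```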
🚀 Updates
[2025.04.21] ✅ Upload our model on Hugging Face 🤗.
[2025.04.15] 🎉 Our paper is accepted to CVPR 2025 Workshop!
The evaluation environment we adopted is recorded in requirements.txt. After setting up a basic Python environment (Python 3.9 in our setting) via either a virtual environment or Anaconda, please install matching dependencies with pip install -r requirements.txt. Then run the test demo:
CUDA_VISIBLE_DEVICES=0 python test_demo.py --data_dir [path to your data dir] --save_dir [path to your save dir] --model_id 23
Be sure to change the directories --data_dir and --save_dir.
🥰 Citation
If you find our work useful, please cite it with the following BibTeX entry.
@inproceedings{Chai2025DistillationSupervisedCL,
  title={Distillation-Supervised Convolutional Low-Rank Adaptation for Efficient Image Super-Resolution},
  author={Xinning Chai and Yao Zhang and Yuxuan Zhang and Zhengxue Cheng and Yingsheng Qin and Yucai Yang and Li Song},
  year={2025},
  url={https://api.semanticscholar.org/CorpusID:277787382}
}
📜 License and Acknowledgement
This code repository is released under the MIT License.