Our diffusion code is structured on the original DDPM implementation. Increasing the size of the U-Net may lead to better results.
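As a rough illustration of "increasing the size of the U-Net": in DDPM-style code, capacity is usually controlled by the base channel count and per-resolution channel multipliers. The key names below are hypothetical and may not match the actual `modelConfig` in `train.py`; check the source before editing.

```python
# Hypothetical sketch -- the real modelConfig keys in train.py may differ.
modelConfig = {
    "channel": 128,                 # base U-Net channel count; raise it (e.g. 192) for a larger model
    "channel_mult": [1, 2, 3, 4],   # per-resolution-level channel multipliers
    "num_res_blocks": 2,            # residual blocks per resolution level
    "T": 1000,                      # number of diffusion timesteps (DDPM default)
}

def unet_width_at(level: int, cfg: dict) -> int:
    """Channel width of the U-Net at a given resolution level."""
    return cfg["channel"] * cfg["channel_mult"][level]
```

For example, with the settings above the deepest level is 128 * 4 = 512 channels wide; raising `channel` or the multipliers grows every level proportionally.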
About training iterations: training converges quite well by 5,000 iterations. We recommend training for 10,000 iterations to achieve better performance, then selecting the best-performing checkpoint.
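"Selecting the best-performing checkpoint" can be as simple as scoring each saved checkpoint on a validation metric and keeping the maximum. The metric (PSNR) and the checkpoint names below are assumptions for illustration, not part of the repo's scripts.

```python
# Hypothetical sketch: pick the checkpoint with the highest validation score.
# The scores dict stands in for whatever metric you compute (e.g. PSNR on a val set).
def select_best_checkpoint(scores: dict) -> str:
    """Return the checkpoint path whose validation score is highest."""
    return max(scores, key=scores.get)

# e.g. scores collected from checkpoints saved every few thousand iterations
scores = {
    "ckpt/iter_5000.pt": 23.1,
    "ckpt/iter_8000.pt": 24.6,
    "ckpt/iter_10000.pt": 24.2,
}
best = select_best_checkpoint(scores)
```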
We tested the code on a single RTX 3090 GPU; training takes about 1-2 days.
```shell
python train.py  # train from scratch; you can change settings in modelConfig
python train.py --pretrained_path ckpt/lol.pt
python test.py --pretrained_path ckpt/lol.pt
```
Mask CLE Diffusion
Mask CLE Diffusion fine-tunes the LOL checkpoint. In our experiments, the LOL checkpoint performs better than the MIT-Adobe-5K checkpoint.
We provide some inference cases in `data/Mask_CLE_cases`. Feel free to test the performance with your own images.
```shell
python mask_generation.py  # generate masks for training
python train_mask.py --pretrained_path ckpt/lol.pt  # finetune Mask CLE Diffusion
python test_mask.py --pretrained_path ckpt/Mask_CLE.pt --input_path data/Mask_CLE_cases/opera.png --mask_path data/Mask_CLE_cases/opera_mask.png --data_name opera
```
@inproceedings{yin2023cle,
title={CLE Diffusion: Controllable Light Enhancement Diffusion Model},
author={Yin, Yuyang and Xu, Dejia and Tan, Chuangchuang and Liu, Ping and Zhao, Yao and Wei, Yunchao},
booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
pages={8145--8156},
year={2023}
}
If you have any problems, please feel free to open a new issue or email me (yuyangyin@bjtu.edu.cn).