Download the datasets and place them in a single folder. To match the folder names expected by the dataset mappers, do not rename the folders; the expected structure is:
Download the pre-trained weights on USC12K: Baidu/ Google.
Visualization results ⚡
Visual results of state-of-the-art methods (SOTAs) on the USC12K test set.
Results on the Overall Scene test set: Baidu/ Google.
Usage
Train & Test
Train our USCNet on a single GPU with the following command; the trained models will be saved in the savePath folder. Modify datapath to train on your own datasets.
Additional thanks to the following contributors to this project: Huaiyu Chen, Weiyi Cui, Mingxin Yang, Mengzhe Cui, Fei Liu, Yan Xu, Haopeng Fang, and Xiaokai Zhang from the School of Software Engineering, Huazhong University of Science and Technology.
Citation
If this work helps you, please cite it:
@inproceedings{zhou2025rethinking,
title={Rethinking Detecting Salient and Camouflaged Objects in Unconstrained Scenes},
author={Zhou, Zhangjun and Li, Yiping and Zhong, Chunlin and Huang, Jianuo and Pei, Jialun and Li, Hua and Tang, He},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={22372--22382},
year={2025}
}
About
[ICCV 2025] Rethinking Detecting Salient and Camouflaged Objects in Unconstrained Scenes