MSPNet
Official implementation of "MSPNet: Multi-scale pooling learning for camouflaged instance segmentation"
Chen Li
Contact: lichen_email@126.com
The code is tested with CUDA 11.1 and PyTorch 1.9.0; change the versions below to the ones you need.
```shell
git clone https://github.com/another-u/MSPNet-main.git
cd MSPNet-main
conda create -n MSPNet python=3.8 -y
conda activate MSPNet
conda install pytorch==1.9.0 torchvision cudatoolkit=11.1 -c pytorch -c nvidia -y
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
python setup.py build develop
```

- Generate the COCO annotation files; the mmdetection tutorials may help here.
- Change the paths of the datasets and annotations in `adet/data/datasets/cis.py`; please refer to the detectron2 docs for more help.
```python
# adet/data/datasets/cis.py
# change the paths
DATASET_ROOT = 'Dataset_path'
ANN_ROOT = os.path.join(DATASET_ROOT, 'annotations')
TRAIN_PATH = os.path.join(DATASET_ROOT, 'Train/Image')
TEST_PATH = os.path.join(DATASET_ROOT, 'Test/Image')
TRAIN_JSON = os.path.join(ANN_ROOT, 'train_instance.json')
TEST_JSON = os.path.join(ANN_ROOT, 'test2026.json')

NC4K_ROOT = 'NC4K'
NC4K_PATH = os.path.join(NC4K_ROOT, 'Imgs')
NC4K_JSON = os.path.join(NC4K_ROOT, 'nc4k_test.json')
```

Model weights: P2T Weights.
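Before training, it can help to check that your dataset actually matches the layout the paths above expect. The sketch below is ours, not part of the repository (the helper name `check_dataset_layout` is illustrative); it only uses the standard library and reports which expected files or directories are missing:

```python
import os

def check_dataset_layout(dataset_root, nc4k_root=None):
    """Return the expected dataset paths (per cis.py) that are missing."""
    ann_root = os.path.join(dataset_root, 'annotations')
    expected = [
        os.path.join(dataset_root, 'Train/Image'),
        os.path.join(dataset_root, 'Test/Image'),
        os.path.join(ann_root, 'train_instance.json'),
        os.path.join(ann_root, 'test2026.json'),
    ]
    if nc4k_root is not None:
        expected += [
            os.path.join(nc4k_root, 'Imgs'),
            os.path.join(nc4k_root, 'nc4k_test.json'),
        ]
    # Keep only the paths that do not exist on disk.
    return [p for p in expected if not os.path.exists(p)]
```

For example, `check_dataset_layout('Dataset_path', 'NC4K')` returns an empty list when everything is in place.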
The visual results are produced by MSPNet with a P2T_tiny backbone trained on the COD10K training set.
- Results on the COD10K test set: to be updated.
- Results on the NC4K test set: to be updated.
```shell
python tools/train_net.py --config-file configs/CIS_P2T.yaml --num-gpus 1 \
    OUTPUT_DIR {PATH_TO_OUTPUT_DIR}
```

Replace `{PATH_TO_OUTPUT_DIR}` with your own output directory.
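The trailing `OUTPUT_DIR {PATH_TO_OUTPUT_DIR}` is a detectron2-style key/value override that is merged into the config after the YAML is loaded. As a rough sketch of that merge pattern (the function `merge_opts` is illustrative, not detectron2's actual API):

```python
def merge_opts(cfg, opts):
    """Merge a flat [KEY, VALUE, KEY, VALUE, ...] list into a nested dict.

    Dotted keys such as 'MODEL.WEIGHTS' descend into sub-dicts, mirroring
    how command-line opts override nested config fields.
    """
    if len(opts) % 2 != 0:
        raise ValueError('opts must come in KEY VALUE pairs')
    for key, value in zip(opts[::2], opts[1::2]):
        node = cfg
        parts = key.split('.')
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return cfg
```

For example, `merge_opts({}, ['OUTPUT_DIR', './out', 'MODEL.WEIGHTS', 'w.pth'])` yields `{'OUTPUT_DIR': './out', 'MODEL': {'WEIGHTS': 'w.pth'}}`.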
```shell
python tools/train_net.py --config-file configs/CIS_P2T.yaml --eval-only \
    MODEL.WEIGHTS {PATH_TO_PRE_TRAINED_WEIGHTS}
```

Replace `{PATH_TO_PRE_TRAINED_WEIGHTS}` with the path to the pre-trained weights.
```shell
python demo/demo.py --config-file configs/CIS_P2T.yaml \
    --input {PATH_TO_THE_IMG_DIR_OR_FILE} \
    --output {PATH_TO_SAVE_DIR_OR_IMAGE_FILE} \
    --opts MODEL.WEIGHTS {PATH_TO_PRE_TRAINED_WEIGHTS}
```

- `{PATH_TO_THE_IMG_DIR_OR_FILE}`: an image directory or image paths
- `{PATH_TO_SAVE_DIR_OR_IMAGE_FILE}`: where the visualizations will be saved
- `{PATH_TO_PRE_TRAINED_WEIGHTS}`: the pre-trained weights
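Since `--input` accepts either a directory or individual image paths, a small standard-library sketch of how such an argument can be expanded into a file list (the helper `expand_inputs` is ours, not the repo's code):

```python
import glob
import os

# Common raster image extensions; extend as needed.
IMG_EXTS = {'.jpg', '.jpeg', '.png', '.bmp'}

def expand_inputs(arg):
    """Expand an --input argument: a directory, a glob pattern, or a file."""
    if os.path.isdir(arg):
        # Directory: collect every image file directly inside it.
        return sorted(
            os.path.join(arg, f) for f in os.listdir(arg)
            if os.path.splitext(f)[1].lower() in IMG_EXTS
        )
    # Single file or glob pattern: let glob resolve it.
    return sorted(glob.glob(arg))
```

This mirrors the behavior of passing either `path/to/images/` or `path/to/images/*.jpg` on the command line.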
If this work helps you, please cite it (MSPNet):
```bibtex
@article{li2024multi,
  title={Multi-scale pooling learning for camouflaged instance segmentation},
  author={Li, Chen and Jiao, Ge and Yue, Guowen and He, Rong and Huang, Jiayu},
  journal={Applied Intelligence},
  pages={1--15},
  year={2024},
  publisher={Springer}
}
```
