Accepted for publication at IGARSS-22, Kuala Lumpur, Malaysia.
Here, we provide the PyTorch implementation of the paper: UAL: UNCHANGED AREA LOSS-FUNCTION FOR CHANGE DETECTION NETWORKS.
Our Method
Task Description
Given two images of the same scene acquired at different times, the goal is to mark the changed and unchanged areas and, for the changed areas, to annotate their detailed semantic masks.
The change detection task in this competition can therefore be decomposed into two sub-tasks (a sketch of how the two outputs can be fused is given below):
- binary segmentation of changed and unchanged areas;
- semantic segmentation of the changed areas.
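As a rough illustration of how the two sub-tasks fit together, the following sketch fuses a binary change prediction with per-image semantic predictions. The function name `fuse_predictions`, the tensor shapes, and the convention that label 0 means "no change" are our assumptions for illustration, not part of the released code.

```python
import torch

def fuse_predictions(change_logits, sem_logits_t1, sem_logits_t2):
    """Hypothetical post-processing combining the two sub-task outputs.

    change_logits:  (B, 2, H, W) binary changed/unchanged logits
    sem_logits_t*:  (B, C, H, W) semantic logits for image 1 / image 2

    Returns per-image label maps where unchanged pixels get label 0
    ("no change") and changed pixels carry their semantic class.
    """
    changed = change_logits.argmax(dim=1)        # (B, H, W), 1 = changed
    sem1 = sem_logits_t1.argmax(dim=1) + 1       # shift classes so 0 is reserved for "no change"
    sem2 = sem_logits_t2.argmax(dim=1) + 1
    label1 = torch.where(changed.bool(), sem1, torch.zeros_like(sem1))
    label2 = torch.where(changed.bool(), sem2, torch.zeros_like(sem2))
    return label1, label2
```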
Model
Our Improvement
In this project, we propose a loss function named UAL (Unchanged Area Loss). UAL aims to establish the semantic label correspondence within unchanged regions. It is simple and effective for improving both semantic segmentation and change detection by increasing feature separability.
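The snippet below is a minimal sketch of one way such a loss could be written, assuming UAL penalizes disagreement between the semantic predictions of the two temporal branches on pixels marked as unchanged; the function name `ual_loss`, the consistency term, and the masking scheme are assumptions made for illustration and may differ from the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ual_loss(sem_logits_t1, sem_logits_t2, change_mask):
    """Illustrative Unchanged Area Loss sketch (assumed formulation).

    sem_logits_t1, sem_logits_t2: (B, C, H, W) semantic logits for each image
    change_mask: (B, H, W) binary mask, 1 = changed, 0 = unchanged
    """
    p1 = F.softmax(sem_logits_t1, dim=1)
    log_p2 = F.log_softmax(sem_logits_t2, dim=1)
    # Per-pixel cross-entropy between the two branches' semantic distributions
    consistency = -(p1 * log_p2).sum(dim=1)              # (B, H, W)
    unchanged = (change_mask == 0).float()
    # Average the consistency term over unchanged pixels only
    return (consistency * unchanged).sum() / unchanged.sum().clamp(min=1.0)
```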
Reproduction
We also reproduce FC-Siam-conc and adapt its code to accomplish the two sub-tasks.
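For readers unfamiliar with FC-Siam-conc, the toy module below sketches the core idea: a weight-shared encoder processes both images and the concatenated features feed task-specific heads. It is not the repository's implementation; the class name, channel sizes, and the single semantic head are simplifications for illustration.

```python
import torch
import torch.nn as nn

class TinySiamConc(nn.Module):
    """Toy FC-Siam-conc-style model with a change head and a semantic head."""

    def __init__(self, num_classes):
        super().__init__()
        # Shared (Siamese) encoder applied to both input images
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Both heads see the concatenated bi-temporal features
        self.change_head = nn.Conv2d(64, 2, 1)            # changed / unchanged
        self.semantic_head = nn.Conv2d(64, num_classes, 1)

    def forward(self, img1, img2):
        f1, f2 = self.encoder(img1), self.encoder(img2)    # shared weights
        fused = torch.cat([f1, f2], dim=1)                 # "conc" fusion
        return self.change_head(fused), self.semantic_head(fused)
```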
# store the whole dataset and pretrained backbones
mkdir -p data/dataset ; mkdir -p data/pretrained_models ;
# store the trained models
mkdir -p outdir/models ;
# store predictions of validation set and testing set
mkdir -p outdir/masks/val/im1 ; mkdir -p outdir/masks/val/im2 ;
mkdir -p outdir/masks/test/im1 ; mkdir -p outdir/masks/test/im2 ;
├── data
│   ├── dataset              # download from the link above
│   │   ├── train            # training set
│   │   │   ├── im1
│   │   │   └── ...
│   │   └── val              # the final testing set (without labels)
│   └── pretrained_models
│       ├── resnet18.pth
│       ├── resnet34.pth
│       └── ...
Training
# Please refer to utils/options.py for more arguments
# If your hardware allows, larger backbones such as resnet50 or resnet101 can also be trained
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --backbone "resnet18" --pretrained --model "fcn"
Testing
# Manually modify the backbone, model and checkpoint paths at L39-44 in test.py according to your saved models
# Or simply use our final trained models
CUDA_VISIBLE_DEVICES=0,1,2,3 python test.py
About
1st place solution to the Satellite Remote Sensing Image Change Detection Challenge hosted by SenseTime