The Functional Map of the World (fMoW) dataset can be downloaded from its website/repo. You can then create CSVs similar to the ones in the `csvs/` folder.
(Figure: map showing the geographic distribution of the fMoW dataset.)
Preparation
Install PyTorch and download the fMoW dataset.
Self-Supervised Training
Similar to the official MoCo-v2 implementation, this implementation supports only multi-GPU DistributedDataParallel training, which is faster and simpler; single-GPU and DataParallel training are not supported.
To run self-supervised pre-training of a ResNet-50 model on fMoW using our MoCo-v2+Geo+TP model on a 4-GPU machine, run:
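The exact launch command is not reproduced here. Since this codebase builds on the official MoCo-v2 implementation, a pre-training invocation would plausibly follow the official MoCo launch interface, as in the sketch below; the script name, data path, and any Geo/TP-specific flags are assumptions and should be checked against this repository.

```shell
# Hedged sketch: flag names mirror the official MoCo-v2 repo; the actual
# script name and any Geo/TP-specific options in this repository may differ.
python main_moco.py \
  -a resnet50 \
  --lr 0.03 --batch-size 256 \
  --mlp --moco-t 0.2 --aug-plus --cos \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed \
  --world-size 1 --rank 0 \
  /path/to/fmow
```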
Download the GeoImageNet - Instructions for downloading the GeoImageNet dataset are given here. Using this repository, you can download on the order of 2M images together with their coordinates; in the paper, we use 540k of these images. The download process places each image into its respective class folder. We recommend parallelizing the download process for efficiency.
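The parallel download can be sketched with a thread pool, since fetching images is I/O-bound. Here `fetch` is a hypothetical stand-in for the per-image download call the GeoImageNet instructions provide, and the `(url, class_name)` record format and folder layout are assumptions, not the repo's actual format.

```python
# Sketch of parallelizing the GeoImageNet download with a thread pool.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def fetch(url: str) -> bytes:
    # placeholder: a real implementation would HTTP-GET the image bytes
    return b"fake-image-bytes"

def download_one(record, root="GeoImageNet"):
    """Download one image into its class folder (one folder per class)."""
    url, class_name = record
    class_dir = Path(root) / class_name
    class_dir.mkdir(parents=True, exist_ok=True)
    out = class_dir / Path(url).name
    out.write_bytes(fetch(url))
    return out

# hypothetical records; the real list comes from the GeoImageNet instructions
records = [("http://example.com/imgs/a.jpg", "airport"),
           ("http://example.com/imgs/b.jpg", "airport"),
           ("http://example.com/imgs/c.jpg", "stadium")]

# threads work well here because the work is network-bound, not CPU-bound
with ThreadPoolExecutor(max_workers=8) as pool:
    paths = list(pool.map(download_one, records))
```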
Clustering - Once the GeoImageNet dataset is downloaded, we cluster the images by their geo-coordinates. In the paper, we use K-means to cluster the 540k images into 100 clusters; however, any clustering algorithm can be used. After clustering, create a csv file similar to the ones in the ./csvs/ folder.
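The clustering step can be sketched as below: K-means over (lat, lon) coordinates, then a cluster-id CSV. A small pure-NumPy Lloyd's iteration is used for illustration; in practice `sklearn.cluster.KMeans` (or any clustering algorithm) works equally well. The CSV column names here are hypothetical, so match them to the files in ./csvs/.

```python
# Minimal sketch, assuming images are identified by path and have (lat, lon)
# coordinates; the toy data and CSV header below are illustrative only.
import csv
import numpy as np

def kmeans(coords, k, iters=50, seed=0):
    """Cluster an N x 2 array of (lat, lon) into k clusters; return labels."""
    rng = np.random.default_rng(seed)
    centers = coords[rng.choice(len(coords), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(coords[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers (keep the old center if a cluster goes empty)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = coords[labels == j].mean(axis=0)
    return labels

# toy example: 6 images in two geographic groups, 2 clusters
paths = [f"class_{i % 3}/img_{i}.jpg" for i in range(6)]
coords = np.array([[37.4, -122.1], [37.5, -122.0], [37.6, -122.2],
                   [48.8, 2.3], [48.9, 2.4], [48.7, 2.2]])
labels = kmeans(coords, k=2)

with open("geo_clusters.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["image_path", "lat", "lon", "cluster"])  # hypothetical header
    for p, (lat, lon), c in zip(paths, coords, labels):
        w.writerow([p, lat, lon, c])
```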
Perform Self-Supervised Learning - After downloading GeoImageNet and clustering the images, we can perform self-supervised learning by executing the following command:
Linear Classification - After learning representations with MoCo-v2-geo, we can train a linear layer to classify GeoImageNet images. With a pre-trained model, to train a supervised linear classifier on frozen features/weights on a 4-GPU machine, run:
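The linear-evaluation command is not reproduced here either. A plausible sketch, mirroring the official MoCo linear-classification script (`main_lincls.py`), is below; the script name, checkpoint path, and data path are assumptions to verify against this repository.

```shell
# Hedged sketch: mirrors the official MoCo linear-evaluation interface,
# which freezes the backbone and trains only the final linear layer.
python main_lincls.py \
  -a resnet50 \
  --lr 30.0 --batch-size 256 \
  --pretrained /path/to/moco_checkpoint.pth.tar \
  --dist-url 'tcp://localhost:10002' --multiprocessing-distributed \
  --world-size 1 --rank 0 \
  /path/to/geoimagenet
```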
We use the RetinaNet implementation from this repository for object detection experiments on xView, and the PSANet implementation from this repository for semantic segmentation experiments on SpaceNet.
Citing
If you find our work useful, please consider citing:
@article{ayush2021geography,
  title={Geography-Aware Self-Supervised Learning},
  author={Ayush, Kumar and Uzkent, Burak and Meng, Chenlin and Tanmay, Kumar and Burke, Marshall and Lobell, David and Ermon, Stefano},
  journal={ICCV},
  year={2021}
}
About
Official repository for the ICCV 2021 paper "Geography-Aware Self-Supervised Learning"