This is the official GitHub page for the paper (Link):
Eric Müller-Budack, Kader Pustu-Iren, Ralph Ewerth: "Geolocation Estimation of Photos using a Hierarchical Model and Scene Classification". In: European Conference on Computer Vision (ECCV), Munich, Springer, 2018, pp. 575-592.
This branch contains:
- Meta information and image URLs for the MP-16 training dataset, as well as for the Im2GPS and Im2GPS3k test datasets
- Lists of geographical cells for all three partitionings: coarse, middle, fine
- Results of the reported approaches on Im2GPS and Im2GPS3k (approach_parameters.csv)
- A Python script (downloader.py) to download all resources necessary to run the scripts
- An inference script (inference.py) to reproduce the paper results
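The multi-partitioning models combine the three cell partitionings at inference time: a fine cell is scored by the product of its own class probability and the probabilities of its enclosing middle and coarse cells, and the mean coordinate of the best fine cell is the predicted location. A minimal sketch of that combination step (function and parameter names are ours; the real cell hierarchy comes from the partitioning files in this branch):

```python
def multi_partition_scores(fine_probs, fine_to_middle, fine_to_coarse,
                           middle_probs, coarse_probs):
    """Score each fine cell by multiplying its predicted probability with
    the probabilities of its enclosing middle and coarse cells.

    fine_to_middle / fine_to_coarse map a fine-cell index to the index of
    its parent cell in the respective coarser partitioning.
    """
    return [fine_probs[i]
            * middle_probs[fine_to_middle[i]]
            * coarse_probs[fine_to_coarse[i]]
            for i in range(len(fine_probs))]

def best_cell(scores):
    """Index of the highest-scoring cell; its mean coordinate (taken from
    the partitioning files) serves as the predicted location."""
    return max(range(len(scores)), key=scores.__getitem__)
```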
The image files (or lists of image URLs) for training and testing can be found at the following links:
- MP-16: https://multimedia-commons.s3-website-us-west-2.amazonaws.com/
- MP-16 (direct image links): https://github.com/TIBHannover/GeoEstimation/releases/download/v1.0/mp16_urls.csv
- Im2GPS: https://graphics.cs.cmu.edu/projects/im2gps/
- Im2GPS-3k: https://github.com/lugiavn/revisiting-im2gps/
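Flickr images occasionally disappear, so a downloader for mp16_urls.csv should skip dead links rather than abort. A hedged sketch using only the standard library (the two-column IMG_ID,url layout and the slash-separated IMG_ID are our assumptions about the file format):

```python
import csv
import os
import urllib.request

def download_images(csv_path, out_dir, limit=None):
    """Download the images listed in a CSV of (IMG_ID, url) rows.

    The two-column layout is an assumption about mp16_urls.csv; adjust
    the column indices if the file differs.
    """
    os.makedirs(out_dir, exist_ok=True)
    with open(csv_path, newline="") as f:
        for i, row in enumerate(csv.reader(f)):
            if limit is not None and i >= limit:
                break
            img_id, url = row[0], row[1]
            # IMG_ID may contain a subdirectory prefix (assumption);
            # flatten it so each image lands directly in out_dir.
            target = os.path.join(out_dir, img_id.replace("/", "_"))
            if os.path.exists(target):
                continue  # resume support: skip files already fetched
            try:
                urllib.request.urlretrieve(url, target)
            except OSError:
                pass  # image no longer available; skip it
```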
The scene labels and probabilities are extracted using the Places365 ResNet 152 model from: https://github.com/CSAILVision/places365
To generate the labels for the superordinate scene categories (S_3: indoor, natural, urban), the Places365 scene hierarchy is used: https://places2.csail.mit.edu/download.html
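Given the per-image Places365 probabilities and a mapping from each of the 365 scene categories to one of the three superordinate concepts, the concept label follows by summing per concept and taking the argmax. A small sketch (the dictionaries below are illustrative; the real category-to-concept mapping is derived from the hierarchy file above):

```python
def superordinate_probs(scene_probs, concept_of):
    """Aggregate Places365 category probabilities into the three
    superordinate S_3 concepts: indoor, natural, urban.

    scene_probs: dict mapping category name -> probability
    concept_of:  dict mapping category name -> concept
                 (derived from the Places365 hierarchy file)
    """
    totals = {"indoor": 0.0, "natural": 0.0, "urban": 0.0}
    for cat, p in scene_probs.items():
        totals[concept_of[cat]] += p
    return totals
```

The predicted concept is then `max(totals, key=totals.get)`.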
All models were trained using TensorFlow (1.14):
- Baseline approach for middle partitioning: Link
- Multi-partitioning baseline approach: Link
- Multi-partitioning Individual Scenery Network for S_3 concept indoor: Link
- Multi-partitioning Individual Scenery Network for S_3 concept natural: Link
- Multi-partitioning Individual Scenery Network for S_3 concept urban: Link
Either use the provided script
python downloader.py
to get all necessary files, or download them manually via the links above.

We provide a Docker container to run the code:
docker build <PROJECT_FOLDER> -t <DOCKER_NAME>
docker run \
    --volume <PATH/TO/PROJECT/FOLDER>:/src \
    --volume <PATH/TO/IMAGE/FILES>:/img \
    -u $(id -u):$(id -g) \
    -it <DOCKER_NAME> bash
cd /src
Run the inference script by executing the following command with an image of your choice:
python inference.py -i <PATH/TO/IMG/FILE>
or, for a list of images, e.g.:
python inference.py -i <PATH/TO/IMG/FILES/*.jpg>
You can choose one of the following models for geolocalization: Model=[base_L, base_M, ISN]. The ISNs are used by default.
python inference.py -i <PATH/TO/IMG/FILES/*.jpg> -m <MODEL>
To reproduce our paper results, download the images and provide the meta-information file for Im2GPS or Im2GPS3k. Note that each image filename has to correspond to the IMG_ID in the meta information. Then run the following command:
python inference.py -i <PATH/TO/IMG/FILES/*.jpg> -m <MODEL> -l <PATH/TO/META/INFORMATION>
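Following the standard Im2GPS evaluation protocol, the paper reports the fraction of test images whose predicted location lies within a set of distance thresholds (1 km, 25 km, 200 km, 750 km, 2500 km) of the ground truth, measured by great-circle distance. A self-contained sketch of that metric (function names are ours):

```python
import math

def great_circle_distance(lat1, lng1, lat2, lng2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, lng) coordinates,
    via the haversine formula on a spherical Earth."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lng1, lat2, lng2))
    h = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin((l2 - l1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(h))

def accuracy_at(preds, truths, threshold_km):
    """Fraction of (lat, lng) predictions within threshold_km of the
    corresponding ground-truth coordinate."""
    hits = sum(great_circle_distance(*p, *t) <= threshold_km
               for p, t in zip(preds, truths))
    return hits / len(preds)
```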
Additional FLAGS:
- -s enables the visualization of class activation maps
- -c executes the script on the CPU
For the PyTorch implementation, please check out the pytorch branch and follow the instructions there.
This work is published under the GNU General Public License, Version 3, 29 June 2007. For details, please check the LICENSE file in the repository.