Virtual Market dataset with 500 IDs × 24 images: VirtualMarket
TF-record data preparation steps
You can skip this data preparation procedure if you use the provided tf-record data files directly.
`cd datasets`
Run `./run_convert_market.sh` to download and convert the original images, poses, attributes and segmentations.
Run `./run_convert_DF.sh` to download and convert the original images and poses.
[Optional] Run `./run_convert_RCV.sh` to convert the original images and pose coordinates, i.e. (row, column, visibility) triples (e.g. from OpenPose or Mask R-CNN), which can be useful for other datasets.
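The (row, column, visibility) convention above differs from the (x, y, confidence) triples that detectors such as OpenPose emit, so a small conversion step is typically needed. The sketch below is an illustration of that conversion; the function name and the confidence threshold are our assumptions, not part of the repository.

```python
# Hypothetical helper: convert detector-style (x, y, confidence) keypoints
# into the (row, column, visibility) format described above.
# The 0.1 confidence threshold is an assumed value, not from the repo.

def to_rcv(keypoints, conf_thresh=0.1):
    """keypoints: list of (x, y, confidence) tuples, one per joint."""
    rcv = []
    for x, y, c in keypoints:
        visible = 1 if c >= conf_thresh else 0
        # image row corresponds to y, column to x
        rcv.append((int(round(y)), int(round(x)), visible))
    return rcv

print(to_rcv([(10.4, 20.6, 0.9), (0.0, 0.0, 0.02)]))
# → [(21, 10, 1), (0, 0, 0)]
```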
Note: we also provide conversion code for the Market-1501 attributes and the Market-1501 segmentation results from PSPNet. This extra information is provided for further research. In our experiments, pose masks are obtained from pose key-points (see the `_getPoseMask` function in the convert `.py` files).
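The idea behind deriving a pose mask from key-points can be sketched as rasterizing limb segments between joints and then dilating the result into a solid region. This is only an illustration in the spirit of `_getPoseMask`, not the repository's actual implementation; the joint names and limb list here are toy assumptions.

```python
# Illustration of a key-point-derived pose mask: draw limb segments
# between joints, then dilate them into a body region.
# Not the repo's _getPoseMask; a simplified pure-Python sketch.

def draw_segment(mask, p0, p1):
    """Rasterize a line segment between two (row, col) points onto mask."""
    (r0, c0), (r1, c1) = p0, p1
    steps = max(abs(r1 - r0), abs(c1 - c0), 1)
    for i in range(steps + 1):
        t = i / steps
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        if 0 <= r < len(mask) and 0 <= c < len(mask[0]):
            mask[r][c] = 1

def dilate(mask, radius=1):
    """Grow the mask so thin limb lines become a solid region."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w:
                            out[rr][cc] = 1
    return out

def pose_mask(shape, keypoints, limbs, radius=1):
    """keypoints: {joint: (row, col)}; limbs: list of joint-name pairs."""
    mask = [[0] * shape[1] for _ in range(shape[0])]
    for a, b in limbs:
        if a in keypoints and b in keypoints:
            draw_segment(mask, keypoints[a], keypoints[b])
    return dilate(mask, radius)

m = pose_mask((8, 8), {"shoulder": (1, 1), "elbow": (5, 5)},
              [("shoulder", "elbow")], radius=1)
print(sum(map(sum, m)))  # number of mask pixels set
```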
Training steps
Download the tf-record training data.
Modify `log_dir` and `log_dir_pretrain` in the `run_market_train.sh`/`run_DF_train.sh` scripts.
Run `run_market_train.sh` or `run_DF_train.sh`.
Note: we use a triplet instead of a real/fake pair for adversarial training, which keeps training more stable.
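One way to read the triplet note above is that the discriminator scores three inputs (e.g. a real sample, a generated sample, and a mismatched sample) rather than a single real/fake pair. The sketch below shows a generic binary cross-entropy loss over such a triplet; the exact composition of the triplet and the weighting are our assumptions, not the paper's definition.

```python
import math

# Hedged sketch of an adversarial loss over a triplet of discriminator
# scores rather than a single real/fake pair. Which three samples form
# the triplet is an assumption; see the paper/code for the exact choice.

def bce(score, label):
    """Binary cross-entropy for one sigmoid score in (0, 1)."""
    eps = 1e-12
    return -(label * math.log(score + eps)
             + (1 - label) * math.log(1 - score + eps))

def d_loss_triplet(s_real, s_fake, s_mismatch):
    """Discriminator loss: real labeled 1; fake and mismatched labeled 0."""
    return bce(s_real, 1) + 0.5 * (bce(s_fake, 0) + bce(s_mismatch, 0))

print(round(d_loss_triplet(0.9, 0.1, 0.2), 4))
```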
Testing steps
Download the pretrained models and tf-record testing data.
Modify `log_dir` and `log_dir_pretrain` in the `run_market_test.sh`/`run_DF_test.sh` scripts.
Run `run_market_test.sh` or `run_DF_test.sh`.
Fg/Bg/Pose sampling on Market-1501
Appearance sampling on DeepFashion dataset
Pose sampling on DeepFashion dataset
Pose interpolation between real images
Between same person:
Between different persons:
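The interpolation results above come from blending between the embeddings of two real images. A minimal sketch of the underlying operation is plain linear interpolation between two embedding vectors; the toy vectors and step count below are illustrative only.

```python
# Sketch of linear interpolation between two pose/appearance embeddings.
# Toy vectors for illustration; real embeddings are high-dimensional.

def lerp(a, b, t):
    """Element-wise linear interpolation between vectors a and b."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def interpolation_path(a, b, steps=5):
    """Return `steps` embeddings evenly spaced from a to b inclusive."""
    return [lerp(a, b, i / (steps - 1)) for i in range(steps)]

path = interpolation_path([0.0, 1.0], [1.0, 0.0], steps=3)
print(path)
# → [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
```

Each intermediate embedding is then decoded by the generator to produce one frame of the interpolation sequence.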
Pose guided person image generation
Citation
@inproceedings{ma2018disentangled,
title={Disentangled Person Image Generation},
author={Ma, Liqian and Sun, Qianru and Georgoulis, Stamatios and Van Gool, Luc and Schiele, Bernt and Fritz, Mario},
booktitle={{IEEE} Conference on Computer Vision and Pattern Recognition},
year={2018}
}