This repository includes a PyTorch implementation of CE2P, which won 1st place in single human parsing in the 2nd LIP Challenge. The M-CE2P variant used for multiple human parsing is provided at https://github.com/RanTaimu/M-CE2P.
Some parts of InPlace-ABN have a native CUDA implementation, which must be compiled with the following commands:
cd libs
sh build.sh
python build.py
The build.sh script assumes that the nvcc compiler is available in the current system search path.
The CUDA kernels are compiled for sm_50, sm_52 and sm_61 by default.
To change this (e.g. if you are using a Kepler GPU), please edit the CUDA_GENCODE variable in build.sh.
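As a sketch of that edit, a `CUDA_GENCODE` setting targeting a Kepler GPU might look like the following (the exact variable layout inside build.sh may differ; the `-gencode` flags are standard nvcc options):

```shell
# Hypothetical CUDA_GENCODE value in build.sh for a Kepler (sm_35) GPU,
# while keeping one of the default Pascal targets (sm_61).
CUDA_GENCODE="\
-gencode=arch=compute_35,code=sm_35 \
-gencode=arch=compute_61,code=sm_61"
```

Each `-gencode` entry compiles a kernel variant for one compute capability; list every architecture you intend to run on.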
Dataset and pretrained model
Note that the left and right labels should be swapped when the label file is flipped horizontally.
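A minimal sketch of that swap, assuming LIP's paired class IDs (14/15 for arms, 16/17 for legs, 18/19 for shoes; these IDs are from the LIP label definition, not from this repository's code):

```python
import numpy as np

# Assumed LIP left/right class-ID pairs: arms, legs, shoes.
FLIP_PAIRS = [(14, 15), (16, 17), (18, 19)]

def flip_label(label):
    """Horizontally flip a parsing label map and swap left/right classes."""
    flipped = label[:, ::-1].copy()      # mirror the label map
    for left, right in FLIP_PAIRS:
        left_mask = flipped == left
        right_mask = flipped == right
        flipped[left_mask] = right       # left class becomes right
        flipped[right_mask] = left       # and vice versa
    return flipped
```

Without the swap, a mirrored image would still carry "left arm" labels on what is now the right arm, which corrupts training targets.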
Please download the ImageNet-pretrained ResNet-101 from Baidu drive or Google drive, and put it into the dataset folder.
Training and Evaluation
./run.sh
To evaluate the results, please download 'LIP_epoch_149.pth' from Baidu drive or Google drive, and put it into the snapshots directory.
./run_evaluate.sh
The parsing result of the provided 'LIP_epoch_149.pth' is 53.88 without any bells and whistles.
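Parsing quality on LIP is typically reported as mean IoU over the 20 classes. A self-contained sketch of that metric, computed from a confusion matrix (a generic implementation, not this repository's evaluation code):

```python
import numpy as np

def mean_iou(preds, gts, num_classes=20):
    """Mean IoU over classes; LIP uses 20 classes (background + 19 parts)."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, g in zip(preds, gts):
        mask = g < num_classes           # drop out-of-range / ignore labels
        conf += np.bincount(
            num_classes * g[mask].astype(np.int64) + p[mask],
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
    inter = np.diag(conf)                            # true positives per class
    union = conf.sum(0) + conf.sum(1) - inter        # pred + gt - intersection
    iou = inter / np.maximum(union, 1)
    return iou[union > 0].mean()                     # average over seen classes
```

Classes that never appear in predictions or ground truth are excluded from the average, which is the usual convention for per-class IoU reporting.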
If this code is helpful for your research, please cite the following paper:
@inproceedings{ruan2019devil,
title={Devil in the details: Towards accurate single and multiple human parsing},
author={Ruan, Tao and Liu, Ting and Huang, Zilong and Wei, Yunchao and Wei, Shikui and Zhao, Yao},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={33},
pages={4814--4821},
year={2019}
}