This is the code for our NIPS 2016 paper, *Learning What and Where to Draw*, on text- and location-controllable image synthesis with conditional GANs. Much of the code is adapted from reedscot/icml2016 and dcgan.torch.
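To give a rough idea of the conditioning described above, the sketch below shows one way the "what" (text) and "where" (location) inputs could be assembled for the generator's global and local pathways. All sizes, variable names, and the keypoint-heatmap encoding are illustrative assumptions, not this repo's actual interface.

```lua
require 'torch'

-- Sizes below are illustrative assumptions, not the values used in this repo.
local nz, nt = 100, 1024            -- noise and text-embedding dimensions
local nKeypoints, mapSize = 15, 16  -- part keypoints on a coarse spatial grid

-- "What": a noise vector and a caption embedding (random stand-in here).
local z   = torch.randn(nz)
local txt = torch.randn(nt)

-- "Where": one heatmap per keypoint, with a 1 at each visible part location.
local loc = torch.zeros(nKeypoints, mapSize, mapSize)
loc[{1, 8, 8}] = 1                  -- e.g. place the first keypoint near the center

-- Global pathway: noise and text concatenated into a single vector.
local globalInput = torch.cat(z, txt, 1)

-- Local pathway: keypoint heatmaps fused with the text embedding
-- replicated over the spatial grid.
local txtMap     = torch.repeatTensor(txt:view(nt, 1, 1), 1, mapSize, mapSize)
local localInput = torch.cat(loc, txtMap, 1)

print(globalInput:size())  -- (nz + nt)
print(localInput:size())   -- (nKeypoints + nt) x mapSize x mapSize
```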
#### How to train:
Run one of the training scripts, e.g. `./scripts/train_cub_keypoints.sh`.
#### How to generate samples:
Run `./scripts/run_all_demos.sh`.
HTML files will be generated with results like the following:
- Moving the bird's position via bounding box
- Moving the bird's position via keypoints
- Bird text-to-image synthesis with ground-truth keypoints
- Bird text-to-image synthesis with generated keypoints
- Human text-to-image synthesis with ground-truth keypoints
- Human text-to-image synthesis with generated keypoints
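Beyond the demo scripts, one could sample from a trained generator directly in Torch along these lines. The checkpoint path, input ordering, and tensor shapes below are guesses for illustration only; consult the demo scripts for the actual interface.

```lua
require 'torch'
require 'nn'
require 'image'

-- Hypothetical checkpoint path; the demo scripts show the real model files.
local netG = torch.load('checkpoints/netG_cub_keypoints.t7')
netG:evaluate()

-- Assumed shapes; adjust to match the trained model.
local nz, nt, nKeypoints, mapSize = 100, 1024, 15, 16
local z   = torch.randn(1, nz)
local txt = torch.randn(1, nt)                            -- caption embedding stand-in
local loc = torch.zeros(1, nKeypoints, mapSize, mapSize)  -- keypoint heatmaps
loc[{1, 1, 8, 8}] = 1

-- The input ordering here is a guess; many conditional GAN generators in
-- Torch take a table of {noise, text, location} tensors.
local img = netG:forward({z, txt, loc})

-- Rescale from [-1, 1] to [0, 1] and save the first sample.
image.save('sample.png', img[1]:clone():add(1):div(2))
```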
#### Citation
If you find this useful, please cite our work as follows:
    @inproceedings{reed2016learning,
      title={Learning What and Where to Draw},
      author={Scott Reed and Zeynep Akata and Santosh Mohan and Samuel Tenka and Bernt Schiele and Honglak Lee},
      booktitle={Advances in Neural Information Processing Systems},
      year={2016}
    }