A TensorFlow implementation of Ledig et al.'s "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" paper.
( See : https://arxiv.org/abs/1609.04802 )
This implementation is quite different from the original paper. The differences are as follows:

1. The MNIST data set is used for convenience. ( It should be straightforward to apply this scheme to a large image data set such as Urban 100. )
1. I've completely replaced the MSE loss with a GAN, using a tuple input for the discriminator. ( See the training source code. )
Existing CNN-based super-resolution methods mainly use an MSE loss, which makes the super-resolved images look blurry.
If we replace the MSE loss with gradients from a GAN, we may prevent these blurry artifacts,
and this is the key idea of the paper. I think this idea looks promising, and my experimental results on the MNIST data set look good.
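The idea of dropping the pixel-wise MSE loss and training purely from a discriminator that scores (low-res input, high-res image) tuples can be sketched as follows. This is a NumPy toy, not the code from this repository: the linear `discriminator` and the image shapes are hypothetical, chosen only to show how the tuple input and the standard GAN cross-entropy losses fit together.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(lr, hr, w):
    # Toy linear discriminator over the concatenated (low-res, high-res) tuple.
    # Because it sees both images, it judges whether hr is a plausible
    # super-resolution *of lr*, not merely a plausible image in isolation.
    x = np.concatenate([lr.ravel(), hr.ravel()])
    return sigmoid(x @ w)

def gan_losses(d_real, d_fake, eps=1e-8):
    # Standard GAN cross-entropy losses; note there is no pixel-wise MSE term.
    d_loss = -np.log(d_real + eps) - np.log(1.0 - d_fake + eps)
    g_loss = -np.log(d_fake + eps)  # non-saturating generator loss
    return d_loss, g_loss

rng = np.random.default_rng(0)
lr_img = rng.random((7, 7))     # low-res input (shapes are illustrative)
hr_real = rng.random((28, 28))  # ground-truth high-res image
hr_fake = rng.random((28, 28))  # generator output for the same input
w = rng.normal(size=lr_img.size + hr_real.size)

d_real = discriminator(lr_img, hr_real, w)
d_fake = discriminator(lr_img, hr_fake, w)
d_loss, g_loss = gan_losses(d_real, d_fake)
```

In training, the generator would then be updated with the gradient of `g_loss` instead of an MSE to the ground truth, which is what removes the blur-inducing averaging effect.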
Training the network
Execute the training script to train the network. The resulting ckpt files and log files will be written to the 'asset/train' directory.
Launch tensorboard --logdir asset/train/log to monitor the training process.
Generating image
Execute
python generate.py
to generate a sample image. The 'sample.png' file will be generated in the 'asset/train' directory.