This repo contains TensorFlow implementations of the following image and video super-resolution models:
SRCNN — "Image Super-Resolution Using Deep Convolutional Networks" [arxiv]
ESPCN — "Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network" [arxiv]
VSRNet — "Video Super-Resolution With Convolutional Neural Networks" [ieee]
VESPCN — "Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation" [arxiv]
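The key operation shared by ESPCN and VESPCN is the efficient sub-pixel convolution: the network outputs r² channels per low-resolution pixel, which are then rearranged into an r× larger image (the same rearrangement TensorFlow exposes as `tf.nn.depth_to_space`). A minimal NumPy sketch of that pixel-shuffle step, for illustration only (the actual models in this repo implement it inside their TensorFlow graphs):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (H, W, C*r*r) tensor into (H*r, W*r, C).

    Each group of r*r channels at a low-resolution pixel becomes an
    r x r block of high-resolution pixels, matching the channel order
    used by tf.nn.depth_to_space.
    """
    H, W, Crr = x.shape
    C = Crr // (r * r)
    x = x.reshape(H, W, r, r, C)        # split channels into an r x r block
    x = x.transpose(0, 2, 1, 3, 4)      # interleave block rows/cols: (H, r, W, r, C)
    return x.reshape(H * r, W * r, C)
```

For a 2x scale factor, a single-channel output needs 4 channels per pixel; channel k of pixel (h, w) lands at high-resolution position (2h + k // 2, 2w + k % 2).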
This repo is part of a GSoC project for the super resolution filter in ffmpeg.
Model training
To train the provided models, first prepare the datasets using the generate_datasets.sh script. It downloads several videos (from https://www.harmonicinc.com/4k-demo-footage-download/) to build the video dataset for the video models, and the DIV2K dataset (https://data.vision.ee.ethz.ch/cvl/DIV2K/) for the image models. After that, the train script for each model can be used to train it.
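Conceptually, dataset preparation pairs each high-resolution frame or image with a downscaled low-resolution version for the network to learn from. A toy NumPy sketch of that pairing, using simple box averaging for the downscale (the repo's generate_datasets.sh presumably uses a proper resampling filter such as bicubic; this is only for illustration):

```python
import numpy as np

def make_lr_hr_pair(hr, scale=2):
    """Build a (low-res, high-res) training pair from one HR image.

    Crops the HR image so its dimensions divide evenly by the scale
    factor, then box-averages each scale x scale block to produce LR.
    hr is an (H, W, C) float array.
    """
    H, W = hr.shape[:2]
    H, W = H - H % scale, W - W % scale
    hr = hr[:H, :W]
    lr = hr.reshape(H // scale, scale, W // scale, scale, -1).mean(axis=(1, 3))
    return lr, hr
```

The network is then trained to map `lr` back to `hr`, so at inference time it can upscale frames for which no ground truth exists.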
Model generation
To generate binary model files that can be used in ffmpeg's sr filter, use the generate_header_and_model.py script. It additionally produces header files (which are used for the internal models in ffmpeg). At a minimum, specify which model to generate and the path to the checkpoint files (this can be a folder containing several checkpoints, in which case the latest checkpoint is used). For example, to generate model files for a trained ESPCN model, the following command can be used: