This repository contains the reference test code for Deep Iterative Frame Interpolation for Full-frame Video Stabilization [1], implemented in PyTorch.
The method performs full-frame video stabilization via frame interpolation, trained with a self-supervised deep learning approach.
If you make use of our work, please cite our paper [1].
You may need to set up the correlation package used to compute the cost volume in PWC-Net.
If so, please follow the instructions in vt-vl-lab/pwc-net.
Usage
Run python run_seq2.py --cuda --n_iter 3 --skip 2 to produce example results on the sample sequence in the data folder; the results are saved to the output folder.
By default, our experiments use 3 iterations and a skip parameter of 2.
These can be customized via the --n_iter and --skip options.
We also provide code for making .avi videos from output frames, and a reference code for quality metrics.
Supplementary video
Please refer to the supplementary video linked below:
References
[1] @article{Choi_TOG20,
author = {Choi, Jinsoo and Kweon, In So},
title = {Deep Iterative Frame Interpolation for Full-Frame Video Stabilization},
year = {2020},
issue_date = {February 2020},
publisher = {Association for Computing Machinery},
volume = {39},
number = {1},
issn = {0730-0301},
url = {https://doi.org/10.1145/3363550},
journal = {ACM Transactions on Graphics},
articleno = {4},
numpages = {9},
}
License
The provided implementation is for academic purposes only.
If you are interested in using our technology for commercial purposes, please contact us.
About
TOG20/SIGGRAPH Asia19 "Deep Iterative Frame Interpolation for Full-frame Video Stabilization" - PyTorch