Hopenet is an accurate and easy-to-use head pose estimation network. Models were trained on the 300W-LP dataset and tested on real data with good qualitative performance.
For details about the method and quantitative results please check the CVPR Workshop paper.
To use, install PyTorch and OpenCV (for video input); apart from standard libraries such as numpy, that should be everything you need. A GPU is required to run Hopenet (for now).
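A hypothetical setup sketch for the dependencies above; the package names assume the standard PyPI distributions and are not taken from this repository:

```shell
# Assumed PyPI package names; adjust versions to match your CUDA setup.
pip install torch torchvision opencv-python numpy
pip install dlib   # only needed for the dlib-based video demo
```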
To test on a video using dlib face detections (the detected head center will be jumpy):
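A sketch of the invocation, based on the repository's `code/test_on_video_dlib.py` script; the flag names and placeholders are assumptions and may differ in your checkout:

```shell
# Assumed script path and flag names; replace the ALL-CAPS placeholders.
python code/test_on_video_dlib.py \
  --snapshot PATH_OF_SNAPSHOT \
  --face_model PATH_OF_DLIB_MODEL \
  --video PATH_OF_VIDEO \
  --output_string STRING_TO_APPEND_TO_OUTPUT \
  --n_frames N_OF_FRAMES_TO_PROCESS \
  --fps FPS_OF_SOURCE_VIDEO
```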
For more information on what alpha stands for, please read the paper. The first two models are for validating the paper's results; on real data we suggest using the last model, as it is more robust to image quality and blur and gives good results on video.
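As described in the paper, Hopenet predicts each Euler angle as a classification over angle bins and recovers a continuous value as the expected bin value (alpha weights the auxiliary regression loss). A minimal numpy sketch of that expected-value decoding, assuming the paper's setup of 66 bins of 3 degrees spanning [-99, 99] degrees:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the bin logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_angle(logits, num_bins=66, bin_width=3.0, offset=99.0):
    """Expected-value decoding of binned head-pose logits into degrees."""
    probs = softmax(logits)
    idx = np.arange(num_bins)
    # Expectation over bin indices, then map bin index -> degrees.
    return float(np.sum(probs * idx) * bin_width - offset)

# Example: a sharp peak at bin 33 maps to roughly 33 * 3 - 99 = 0 degrees (frontal).
logits = np.zeros(66)
logits[33] = 10.0
print(decode_angle(logits))  # close to 0.0
```

The same decoding is applied independently to the yaw, pitch, and roll heads of the network.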
Please open an issue if you have a problem.
Some very cool third-party implementations of this work on other platforms:
If you find Hopenet useful in your research please cite:
@InProceedings{Ruiz_2018_CVPR_Workshops,
author = {Ruiz, Nataniel and Chong, Eunji and Rehg, James M.},
title = {Fine-Grained Head Pose Estimation Without Keypoints},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}
Nataniel Ruiz, Eunji Chong, James M. Rehg
Georgia Institute of Technology