The original HoughNet applies voting only in the spatial domain, for object detection in still images.
We extended this idea to the temporal domain with a new method that takes the difference of features from two frames and
applies spatial and temporal voting through our "temporal voting module" to detect objects.
We showed the effectiveness of our method on the ILSVRC2015 dataset.
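The core idea above (temporal evidence from a feature difference, then a vote that spreads each location's evidence to its neighborhood) can be sketched as follows. This is a minimal illustration, not the repository's implementation: the function name `temporal_voting`, the plain square voting window (a stand-in for HoughNet's log-polar vote field), and the NumPy setting are all assumptions for clarity.

```python
import numpy as np

def temporal_voting(feat_t, feat_prev, vote_radius=1):
    """Hypothetical sketch of temporal voting.

    feat_t, feat_prev : 2D feature maps from two frames.
    Each location's temporal evidence (the feature difference) is
    added to all locations within `vote_radius`, a simplified
    stand-in for HoughNet's log-polar vote field.
    """
    diff = feat_t - feat_prev          # temporal evidence
    h, w = diff.shape
    votes = np.zeros_like(diff)
    for y in range(h):
        for x in range(w):
            # Clip the voting window at the map borders.
            y0, y1 = max(0, y - vote_radius), min(h, y + vote_radius + 1)
            x0, x1 = max(0, x - vote_radius), min(w, x + vote_radius + 1)
            votes[y0:y1, x0:x1] += diff[y, x]
    return votes
```

In the actual model this aggregation operates on deep features and the vote field is learned jointly with the detector; the sketch only shows the cast-and-accumulate pattern.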
Please download the ILSVRC2015 DET and ILSVRC2015 VID datasets from here.
Next, place the data as follows. Alternatively, you can create symlinks.
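If you keep the datasets elsewhere, a symlink avoids copying them. The paths below are placeholders, not the repository's required layout; adjust them to where you extracted the archives.

```shell
# Hypothetical example: link an existing dataset directory into the repo.
# Replace /path/to/ILSVRC2015 with your actual extraction location.
mkdir -p data
ln -s /path/to/ILSVRC2015 data/ILSVRC2015
```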
This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) through the project titled "Object Detection in Videos with Deep Neural Networks" (grant number 117E054). The numerical calculations reported in this paper were partially performed at the TUBITAK ULAKBIM High Performance and Grid Computing Center (TRUBA resources).
License
HoughNet-VID is released under the MIT License (refer to the LICENSE file for details).
Citation
If you find HoughNet-VID useful for your research, please cite our paper as follows.
N. Samet, S. Hicsonmez, E. Akbas, "HoughNet: Integrating near and long-range evidence for visual detection",
arXiv, 2021.
BibTeX entry:
@misc{HoughNet2021,
title={HoughNet: Integrating near and long-range evidence for visual detection},
author={Nermin Samet and Samet Hicsonmez and Emre Akbas},
year={2021},
}
About
[TPAMI-22] Bottom-up, voting-based video object detection method