We provide a script for evaluating temporal article grounding under `/evaluation`. You can use it to evaluate models on the (seen) validation set.
For example, to compute the article grounding mAP on the provided sample predictions, run:
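A minimal sketch of such a command, assuming a hypothetical entry point `evaluate.py` with `--predictions` and `--annotations` flags (the actual script name, arguments, and file names live under `/evaluation` and may differ):

```bash
# Hypothetical invocation -- the script name, flags, and file paths below are
# assumptions for illustration; check /evaluation for the actual interface.
python evaluation/evaluate.py \
    --predictions sample_predictions.json \
    --annotations ht_step_val_seen.json
```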
The evaluation server is available on EvalAI.
You can use it to evaluate on the test sets (seen and unseen) as well as an unseen validation set.
For submission instructions, see here.
License
The HT-Step annotations are released under the CC-BY-NC 4.0 license. See LICENSE for additional details.
Portions of the project are available under separate license terms: the evaluation code is licensed under the MIT license.
Citation
If this work is helpful in your research, please cite the following papers:
```bibtex
@inproceedings{Afouras_2023_htstep,
  author    = {Triantafyllos Afouras and Effrosyni Mavroudi and Tushar Nagarajan and Huiyu Wang and Lorenzo Torresani},
  title     = {{HT}-Step: Aligning Instructional Articles with How-To Videos},
  booktitle = {Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year      = {2023},
  url       = {https://openreview.net/forum?id=vv3cocNsEK}
}

@inproceedings{Mavroudi_2023_vina,
  author    = {Mavroudi, Effrosyni and Afouras, Triantafyllos and Torresani, Lorenzo},
  title     = {Learning to Ground Instructional Articles in Videos through Narrations},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {15201--15213}
}
```
About
HT-Step is a large-scale article grounding dataset consisting of temporal step annotations on how-to videos.