@article{ohnbar_importance_PR,
  author    = {Eshed Ohn-Bar and Mohan M. Trivedi},
  title     = {Are All Objects Equal? Deep Spatio-Temporal Importance Prediction in Driving Videos},
  journal   = {Pattern Recognition},
  year      = {2017}
}

@inproceedings{ohnbar_importance,
  author    = {Eshed Ohn-Bar and Mohan M. Trivedi},
  title     = {What Makes an On-road Object Important?},
  booktitle = {IEEE International Conference on Pattern Recognition},
  year      = {2016}
}
(Best Paper Award finalist)
Thank you for your interest; I hope this study is useful to you in some way.
The code has been available on my website for the last couple of years, but I've received requests to put it on GitHub.
If you have any questions, please don't hesitate to contact me.
main.m contains a complete classification/regression example, from formatting the annotations to attribute-based importance prediction and results visualization.
To run it, you will first need to set a few dependency paths in 'setup_globals.m', as sketched below.
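As a rough illustration, here is a minimal sketch of the kind of path setup 'setup_globals.m' expects; the variable names and directory layout below are assumptions for illustration only, so use the names actually defined in the file:

```matlab
% Hypothetical sketch of the dependency paths set in setup_globals.m
% (variable names are placeholders; use the ones defined in the actual file).
global KITTI_RAW_DIR KITTI_TRACKING_DIR RESULTS_DIR

KITTI_RAW_DIR      = '/path/to/kitti/raw';        % downloaded KITTI raw sequences
KITTI_TRACKING_DIR = '/path/to/devkit_tracking';  % KITTI tracking training benchmark
RESULTS_DIR        = fullfile(pwd, 'results');    % where outputs/visualizations are written

addpath(genpath(pwd));  % make the repository functions visible on the MATLAB path
```

Once the paths are set, running main from the MATLAB prompt should walk through the full example.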
To visualize and work with the data, you will also need to download the KITTI raw benchmark; the required raw sequences are listed in the 'kitti' folder. You will also need the KITTI tracking training benchmark, placed in 'devkit_tracking'. Both are used because the tracking benchmark contains some additional object annotations that we incorporate. The data can be loaded from the provided .mat files without the KITTI data, but for visualization and for understanding the processing/filtering I recommend downloading the associated KITTI data.
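If you only want to inspect the precomputed annotations without the KITTI data, a minimal loading sketch is given below; the file name and field names are hypothetical placeholders, so check the .mat files shipped with the repository for the actual ones:

```matlab
% Minimal sketch: inspect a provided annotation .mat file without the KITTI data.
% NOTE: 'importance_annotations.mat' and its fields are hypothetical placeholders.
S = load('importance_annotations.mat');  % struct containing per-object importance labels
disp(fieldnames(S));                     % list what the file actually contains

% Example of iterating over annotated objects (field names assumed):
% for i = 1:numel(S.objects)
%     fprintf('frame %d, track %d, importance %.2f\n', ...
%         S.objects(i).frame, S.objects(i).track_id, S.objects(i).importance);
% end
```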