@article{FischerRAL2022ICRA2023,
title={How Many Events do You Need? Event-based Visual Place Recognition Using Sparse But Varying Pixels},
author={Tobias Fischer and Michael Milford},
journal={IEEE Robotics and Automation Letters},
volume={7},
number={4},
pages={12275--12282},
year={2022},
doi={10.1109/LRA.2022.3216226},
}
QCR-Event-VPR Dataset
The associated QCR-Event-VPR dataset can be found on Zenodo. The code can also handle data from our previous Brisbane-Event-VPR dataset.
Please download the dataset, and place the parquet files into the ./data/input_parquet_files directory.
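A minimal sketch of creating the expected input directories before copying the downloaded files in (directory names taken from the instructions in this README; run from the repository root):

```python
from pathlib import Path

# Input locations expected by the code, as described in this README.
for d in ["data/input_parquet_files", "data/input_frames", "data/gps_data"]:
    Path(d).mkdir(parents=True, exist_ok=True)
```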
If you want to work with the DAVIS conventional frames, please download the zip files and extract them so that each image file is located at a path such as ./data/input_frames/bags_2021-08-19-08-25-42/frames/1629325542.939281225204.png.
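The frame filenames encode the capture time as a Unix timestamp in seconds. A hypothetical helper for recovering that timestamp, assuming the naming convention shown in the example path above:

```python
from pathlib import Path

def frame_timestamp(frame_path: str) -> float:
    """Parse the Unix timestamp (in seconds) encoded in a DAVIS frame filename.

    Assumes filenames like '1629325542.939281225204.png', as in the
    example path above; the fractional part is truncated to float precision.
    """
    # .stem strips only the final '.png' suffix, leaving the numeric timestamp.
    return float(Path(frame_path).stem)

ts = frame_timestamp(
    "data/input_frames/bags_2021-08-19-08-25-42/frames/1629325542.939281225204.png"
)
```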
For the Brisbane-Event-VPR dataset, place the nmea files into the ./data/gps_data directory.
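The nmea files contain standard NMEA 0183 sentences. A minimal sketch of converting the ddmm.mmmm latitude/longitude fields of an RMC sentence to decimal degrees; the sample sentence below is illustrative, not taken from the dataset:

```python
def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert an NMEA 'ddmm.mmmm' (or 'dddmm.mmmm') coordinate to decimal degrees."""
    # Everything before the last two integer digits of the minutes is degrees.
    dot = value.index(".")
    degrees = float(value[:dot - 2])
    minutes = float(value[dot - 2:])
    decimal = degrees + minutes / 60.0
    # South and West hemispheres are negative in decimal-degree convention.
    return -decimal if hemisphere in ("S", "W") else decimal

# Illustrative $GPRMC sentence (not from the dataset):
sentence = "$GPRMC,123519,A,4807.038,S,15301.000,E,022.4,084.4,230394,003.1,W*6A"
fields = sentence.split(",")
lat = nmea_to_decimal(fields[3], fields[4])  # about -48.1173
lon = nmea_to_decimal(fields[5], fields[6])  # about 153.0167
```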
Install dependencies
We recommend using conda (in particular, mamba), which can be installed via Miniforge3: