This repository contains code that implements event representation learning as described in Gehrig et al., ICCV'19. The paper can be found here.
If you use this code in an academic context, please cite the following work:
Daniel Gehrig, Antonio Loquercio, Konstantinos G. Derpanis, Davide Scaramuzza, "End-to-End Learning of Representations
for Asynchronous Event-Based Data", The International Conference on Computer Vision (ICCV), 2019
```bibtex
@InProceedings{Gehrig_2019_ICCV,
  author    = {Daniel Gehrig and Antonio Loquercio and Konstantinos G. Derpanis and Davide Scaramuzza},
  title     = {End-to-End Learning of Representations for Asynchronous Event-Based Data},
  booktitle = {Int. Conf. Comput. Vis. (ICCV)},
  month     = {October},
  year      = {2019}
}
```
Requirements
Python 3.7
virtualenv
CUDA 10
Dependencies
Create a virtual environment with Python 3.7 and activate it.
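For example, the environment can be created along these lines (a sketch; the exact interpreter path and a `requirements.txt` file are assumptions and may differ on your system):

```shell
# Create a Python 3.7 virtual environment (assumes python3.7 is on PATH)
virtualenv -p python3.7 venv

# Activate it
source venv/bin/activate

# Install the project's dependencies (assumes a requirements.txt exists)
pip install -r requirements.txt
```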
Here, `training_dataset` and `validation_dataset` should point to the folders where the training and validation sets are stored.
`log_dir` controls where logs are written, and `device` selects the device on which to train. Checkpoints and the model with the lowest validation loss are saved in the root folder of `log_dir`.
`--save_every_n_epochs` saves a checkpoint every n epochs.
`--batch_size` sets the batch size for training.
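Putting the flags above together, a training run might be invoked roughly as follows (the script name `train.py`, the dataset paths, and the flag values are illustrative assumptions, not verified against the repository):

```shell
# Hypothetical invocation combining the documented flags
python train.py \
    --training_dataset /path/to/training_dataset \
    --validation_dataset /path/to/validation_dataset \
    --log_dir log/temp \
    --device cuda:0 \
    --batch_size 4 \
    --save_every_n_epochs 5
```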
Visualization
Training can be visualized by calling TensorBoard:

```shell
tensorboard --logdir log/temp
```
Training and validation losses, as well as classification accuracies, are plotted. In addition, the learnt representations are visualized. The training and validation curves should look something like this:
Testing
Once trained, the models can be tested by calling the following script: