Note: the first run of the training script preprocesses the data, which takes roughly 3.5 hours on my machine. Training on an RTX 2080 Ti GPU takes 35-40 minutes per epoch.
During training, checkpoints are saved to lightning_logs/ automatically. To monitor the training process:

tensorboard --logdir lightning_logs/
We provide the pretrained HiVT-64 and HiVT-128 models in checkpoints/. You can evaluate the pretrained models with the evaluation command above, or inspect their training curves via TensorBoard:
tensorboard --logdir checkpoints/
Results
Quantitative Results
For this repository, the expected performance on the Argoverse 1.1 validation set is (minADE and minFDE in meters):
| Models   | minADE | minFDE | MR   |
| :------- | :----: | :----: | :--: |
| HiVT-64  | 0.69   | 1.03   | 0.10 |
| HiVT-128 | 0.66   | 0.97   | 0.09 |
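For context, these metrics follow the standard Argoverse protocol: each model outputs K candidate trajectories per agent, minFDE is the endpoint error of the candidate whose endpoint lands closest to the ground truth, minADE is the average per-step error of that same candidate, and MR (miss rate) is the fraction of cases where that best endpoint error still exceeds 2 meters. A minimal NumPy sketch of this convention (the function name and array shapes are illustrative, not from this repository):

```python
import numpy as np

def min_metrics(pred, gt, miss_threshold=2.0):
    """Evaluate K candidate trajectories against one ground-truth track.

    pred: (K, T, 2) array of predicted (x, y) positions, in meters.
    gt:   (T, 2) array of ground-truth positions.
    Returns (minADE, minFDE, miss), where miss is 1.0 if the best
    endpoint error exceeds miss_threshold (2 m on Argoverse), else 0.0.
    """
    err = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T) per-step errors
    fde = err[:, -1]                   # endpoint error of each mode
    best = int(fde.argmin())           # mode with the closest endpoint
    min_fde = float(fde[best])
    min_ade = float(err[best].mean())  # ADE of that same mode
    return min_ade, min_fde, float(min_fde > miss_threshold)
```

With K = 6 modes over a 30-step (3 s) horizon, as in the Argoverse benchmark, pred would have shape (6, 30, 2).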
Qualitative Results
Citation
If you find this repository useful, please consider citing our work:
@inproceedings{zhou2022hivt,
  title={HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction},
  author={Zhou, Zikang and Ye, Luyao and Wang, Jianping and Wu, Kui and Lu, Kejie},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}