Champion Solutions repository for Perception Test challenges in ICCV2023 workshop.
Introduction
We achieved the best performance in the Temporal Sound Localisation task and runner-up in the Temporal Action Localisation task. In this repository, we provide the pretrained video and audio features, checkpoints, and code for feature extraction, training, and inference.
Get Started
Please refer to INSTALL.md to install the prerequisite packages.
Feature Extraction
TAL
For the video features, we use the UMT-Large model pre-trained on Something-Something V2 and the VideoMAE model pre-trained on the Ego4D-Verb dataset. The Ego4D weights can be found here. These two features are concatenated before being fed into the ActionFormer model during both the training and inference stages.
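The concatenation step can be sketched as follows. This is a minimal illustration, not the repository's actual pipeline: the feature dimensions and the per-step alignment of the two streams are assumptions for the example.

```python
import numpy as np

# Hypothetical per-video features; dimensions are illustrative only.
T = 128  # number of temporal feature steps (assumed)
umt_feats = np.random.randn(T, 1024).astype(np.float32)       # UMT-Large video features (assumed dim)
videomae_feats = np.random.randn(T, 1024).astype(np.float32)  # VideoMAE Ego4D-Verb features (assumed dim)

# Both streams must share the same temporal resolution; fuse along the
# channel axis so ActionFormer receives a single feature vector per step.
fused = np.concatenate([umt_feats, videomae_feats], axis=1)
print(fused.shape)  # (128, 2048)
```

The fused array (one row per temporal step) is what would be passed to ActionFormer in place of a single-backbone feature.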
For the audio features, we use the BEATs model as the feature extractor, adopting its iter3+ checkpoint pre-trained on the AudioSet-2M dataset. We provide scripts to extract both BEATs and CAV-MAE features (the latter is not used); run python audio_feat_extract.py to extract the audio features.
TSL
For the video features, we use the UMT-Large model pre-trained on Something-Something V2 and fine-tuned on the Perception Test temporal action localisation training set.
For the audio features, as in TAL, we use the BEATs model as the feature extractor with its iter3+ checkpoint pre-trained on AudioSet-2M. We provide scripts to extract both BEATs and CAV-MAE features (the latter is not used); run python audio_feat_extract.py to extract the audio features.