UniRef++: Segment Every Reference Object in Spatial and Temporal Spaces
Official implementation of UniRef++, an extended version of ICCV2023 UniRef.
Highlights
UniRef/UniRef++ is a unified model for four object segmentation tasks: referring image segmentation (RIS), few-shot segmentation (FSS), referring video object segmentation (RVOS), and video object segmentation (VOS).
At the core of UniRef++ is the UniFusion module, which injects various kinds of reference information into the network. We implement it with flash attention for high efficiency.
UniFusion can also serve as a plug-in component for foundation models such as SAM.
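To make the fusion idea above concrete, here is a minimal sketch of how reference tokens (e.g. language embeddings or mask features) can be injected into image tokens via single-head cross-attention with a residual connection. This is an illustrative simplification, not the actual UniFusion implementation: the function and variable names are hypothetical, and the real module uses multi-head flash attention inside the full network.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_reference(image_tokens, ref_tokens):
    """Hypothetical sketch: image tokens attend to reference tokens,
    and the attended reference signal is added back residually."""
    d = image_tokens.shape[-1]
    # Attention weights: each image token scores every reference token.
    attn = softmax(image_tokens @ ref_tokens.T / np.sqrt(d))
    # Residual injection of the aggregated reference information.
    return image_tokens + attn @ ref_tokens

img = rng.standard_normal((196, 64))  # flattened 14x14 image feature map
ref = rng.standard_normal((20, 64))   # reference tokens (text / mask features)
fused = fuse_reference(img, ref)
print(fused.shape)  # (196, 64): same shape as the input image tokens
```

Because the output keeps the shape of the image tokens, such a module can be dropped between existing backbone and decoder stages, which is what makes the plug-in usage with models like SAM plausible.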
Schedule
Add Training Guide
Add Evaluation Guide
Add Data Preparation
Release Model Checkpoints
Release Code
Results
Demo video: `video_demo.mp4`
Referring Image Segmentation
Referring Video Object Segmentation
Video Object Segmentation
Zero-shot Video Segmentation & Few-shot Image Segmentation
If you find this project useful in your research, please consider citing:
@article{wu2023uniref++,
title={UniRef++: Segment Every Reference Object in Spatial and Temporal Spaces},
author={Wu, Jiannan and Jiang, Yi and Yan, Bin and Lu, Huchuan and Yuan, Zehuan and Luo, Ping},
journal={arXiv preprint arXiv:2312.15715},
year={2023}
}
@inproceedings{wu2023uniref,
title={Segment Every Reference Object in Spatial and Temporal Spaces},
author={Wu, Jiannan and Jiang, Yi and Yan, Bin and Lu, Huchuan and Yuan, Zehuan and Luo, Ping},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={2538--2550},
year={2023}
}