We evaluate our models on the COCO-UniFS benchmark, which is built upon several existing datasets, including MSCOCO and MISC. COCO-UniFS provides dense annotations for four fundamental few-shot computer vision tasks: object detection, instance segmentation, pose estimation, and object counting. The annotations for object detection and instance segmentation are taken directly from MSCOCO, which provides bounding-box and per-instance segmentation mask annotations for 80 object categories. For pose estimation, we extend MSCOCO with instance-level keypoint annotations for 34 object categories from the MISC dataset. MISC was originally designed for multi-instance semantic correspondence, and we adapt it to the few-shot pose estimation task.
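Because the annotations follow the standard COCO format, they can be inspected with the usual COCO tooling. The minimal sketch below uses `pycocotools`; the annotation file path is a placeholder, not the official file name, so adjust it to match the release you downloaded.

```python
# Minimal sketch: inspecting COCO-style annotations with pycocotools.
# The annotation file name below is a hypothetical placeholder.
from pycocotools.coco import COCO

ann_file = "datasets/coco_unifs/annotations/coco_unifs_val.json"  # assumed path
coco = COCO(ann_file)

# List the object categories covered by the annotations.
cats = coco.loadCats(coco.getCatIds())
print(f"{len(cats)} categories:", [c["name"] for c in cats[:5]], "...")

# Fetch all annotations for one image: boxes, masks and (where present) keypoints.
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
for ann in anns:
    has_kpts = "keypoints" in ann and ann.get("num_keypoints", 0) > 0
    print(ann["category_id"], ann["bbox"], "keypoints" if has_kpts else "box/mask only")
```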
The dataset split follows DeFRCN.
Unzip the downloaded COCO-UniFS data source into the `datasets` folder under your project directory:
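A plausible layout after unzipping is sketched below, assuming the archive mirrors the standard MSCOCO directory structure; the exact folder and file names are assumptions and should be checked against the actual release.

```
project_root/
└── datasets/
    └── coco_unifs/            # assumed folder name
        ├── annotations/       # detection / segmentation / keypoint / counting JSONs
        ├── train2017/         # MSCOCO training images
        └── val2017/           # MSCOCO validation images
```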
The baseline DeFRCN tends to incorrectly recognize positive objects as background (middle two rows) due to biased classification. This problem is greatly alleviated by our proposed method (DCFS).
UniFS is freely available for non-commercial use and may be redistributed under these conditions. For commercial queries, please contact Mr. Sheng Jin (jinsheng13[at]foxmail[dot]com). We will send you the detailed agreement.