This repository was archived by the owner on Oct 31, 2023. It is now read-only.
WyPR is a Weakly-supervised framework for Point cloud Recognition, requiring only scene-level class tags as supervision.
WyPR jointly addresses three core 3D recognition tasks: point-level semantic segmentation, 3D proposal generation, and 3D object detection,
coupling their predictions through self- and cross-task consistency losses.
Combined with standard multiple-instance learning (MIL) and self-training objectives,
WyPR can detect and segment objects in point clouds without access to any spatial labels at training time.
Evaluated on the ScanNet and S3DIS datasets, WyPR outperforms prior state-of-the-art weakly-supervised methods by a large margin.
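To give a feel for how scene-level tags can supervise point-level predictions, here is a minimal NumPy sketch of a standard multiple-instance learning (MIL) objective of the kind the description refers to. This is an illustrative simplification, not WyPR's actual implementation: the function name `scene_mil_loss`, the max-pooling choice, and the sigmoid/BCE formulation are assumptions for the example.

```python
import numpy as np

def scene_mil_loss(point_logits, scene_tags):
    """Illustrative MIL loss (not WyPR's exact objective): pool per-point
    class scores into one scene-level score per class, then apply binary
    cross-entropy against the scene-level class tags.

    point_logits: (N_points, C) raw per-point class scores
    scene_tags:   (C,) binary vector; 1 if the class appears in the scene
    """
    # Max-pool over points: a class is present if any point scores it highly.
    scene_scores = point_logits.max(axis=0)
    probs = 1.0 / (1.0 + np.exp(-scene_scores))  # sigmoid per class
    probs = np.clip(probs, 1e-7, 1.0 - 1e-7)     # numerical stability
    bce = -(scene_tags * np.log(probs) + (1.0 - scene_tags) * np.log(1.0 - probs))
    return bce.mean()
```

Because only the pooled scene-level prediction is penalized, the network is free to decide *which* points belong to each tagged class; consistency and self-training losses then refine those point assignments.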
Please follow the instructions in wypr/dataset/*/README.md for downloading and pre-processing the datasets.
Running
Please check docs/RUNNING.md for detailed running instructions and pre-trained models.
Citation
If you find our work useful in your research, please consider citing:
@inproceedings{ren2021wypr,
title = {3D Spatial Recognition without Spatially Labeled 3D},
author = {Ren, Zhongzheng and Misra, Ishan and Schwing, Alexander G. and Girdhar, Rohit},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2021}
}