P2P-NET is a general-purpose deep neural network that learns geometric transformations between point-based shape representations from two domains, e.g., meso-skeletons and surfaces, or partial and complete scans.
The architecture of P2P-NET is that of a bi-directional point displacement network, which transforms a source point set into a target point set with the same cardinality, and vice versa, by applying point-wise displacement vectors learned from data.
P2P-NET is trained on paired shapes from the source and target domains, but without relying on point-to-point correspondences between the source and target point sets... [more in the paper].
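As a rough illustration of the point-displacement idea (a sketch, not the authors' code: the helper name `displace_points` and the layer sizes are assumptions), the core operation is to predict a 3D displacement vector per point and add it to the source points; the bi-directional network trains one such branch per transform direction:

```python
import tensorflow as tf

def displace_points(source_points, point_feats):
    """Sketch: predict a per-point 3D displacement and add it to the source.

    source_points: (batch, n, 3) source point set.
    point_feats:   (batch, n, C) per-point features; in P2P-NET these would
                   come from the PointNet++ layers (this helper is an
                   illustrative assumption, not the paper's exact architecture).
    """
    # Shared per-point MLP, implemented as 1x1 convolutions along the point axis.
    net = tf.layers.conv1d(point_feats, 128, 1, activation=tf.nn.relu)
    displacements = tf.layers.conv1d(net, 3, 1, activation=None)
    # The output point set keeps the same cardinality as the source.
    return source_points + displacements
```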
Prerequisites
Linux (tested under Ubuntu 16.04)
Python (tested under 2.7)
TensorFlow (tested under 1.3.0-GPU)
numpy, h5py
The code is built on top of PointNet++. Before running the code, please compile the customized TensorFlow operators of PointNet++ under the folder "pointnet_plusplus/tf_ops".
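As a quick sanity check that the operators compiled (a sketch assuming the standard PointNet++ layout and its `tf_sampling` Python wrapper; adjust the path to your checkout, and note the custom ops are CUDA-only):

```python
import sys
import numpy as np
import tensorflow as tf

# Path assumes the standard PointNet++ layout inside this repository.
sys.path.append('pointnet_plusplus/tf_ops/sampling')
from tf_sampling import farthest_point_sample  # loads the compiled .so

with tf.Session() as sess:
    pts = tf.constant(np.random.rand(1, 1024, 3), dtype=tf.float32)
    idx = farthest_point_sample(256, pts)  # custom CUDA op: requires a GPU
    print(sess.run(idx).shape)  # expect (1, 256) if compilation succeeded
```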
If you find our work useful in your research, please consider citing:
@article{yin2018p2pnet,
    author  = {Kangxue Yin and Hui Huang and Daniel Cohen-Or and Hao Zhang},
    title   = {P2P-NET: Bidirectional Point Displacement Net for Shape Transform},
    journal = {ACM Transactions on Graphics (Special Issue of SIGGRAPH)},
    volume  = {37},
    number  = {4},
    pages   = {152:1--152:13},
    year    = {2018}
}
Acknowledgments
The code is built on top of PointNet++. We thank the authors for their pioneering contribution.