You can find detailed usage instructions for training your own models and using pretrained models below.
If you find our code or paper useful, please consider citing
@inproceedings{BaoruiTowards,
    title = {Towards Better Gradient Consistency for Neural Signed Distance Functions via Level Set Alignment},
    author = {Baorui Ma and Junsheng Zhou and Yu-Shen Liu and Zhizhong Han},
    booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2023}
}
First, make sure that you have all dependencies in place.
The simplest way to do so is to use anaconda.
You can create an anaconda environment called tf using
conda env create -f tf.yaml
conda activate tf
ToDo
The '0.25' parameter controls the distance between the query points and the point cloud. Because point cloud density varies across datasets (including your own data), this parameter has a strong influence on the final result, so you should adjust it to get better reconstructions. We give '0.25' as a reference value that works for most object-level reconstructions; reference hyperparameter settings for the scene datasets will be published later.
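As a rough illustration of how such a scale factor is commonly used in pull-based SDF pipelines, the sketch below samples query points from Gaussians centred at the surface points, with a standard deviation equal to the local point spacing times the scale. The function name sample_query_points, the use of scipy's cKDTree, and the choice of the k-th nearest neighbour as the density estimate are illustrative assumptions, not necessarily what this repository implements.

import numpy as np
from scipy.spatial import cKDTree

def sample_query_points(points, scale=0.25, queries_per_point=10, k=51):
    """Illustrative sketch: sample query points around a point cloud.

    `scale` plays the role of the '0.25' parameter discussed above: it
    scales the local point spacing (distance to the k-th nearest
    neighbour) to set how far query points may lie from the surface.
    """
    tree = cKDTree(points)
    # Distance to the k-th nearest neighbour approximates local point spacing.
    dists, _ = tree.query(points, k=k)
    sigma = scale * dists[:, -1]                      # shape (N,)
    # Repeat each surface point and perturb it with Gaussian noise.
    centers = np.repeat(points, queries_per_point, axis=0)
    noise_scale = np.repeat(sigma, queries_per_point)[:, None]
    queries = centers + noise_scale * np.random.randn(*centers.shape)
    return queries

# Example: a denser cloud yields a smaller sigma, so queries stay closer to the surface.
pts = np.random.rand(5000, 3).astype(np.float32)
q = sample_query_points(pts, scale=0.25)
print(q.shape)  # (50000, 3)

Increasing the scale pushes query points farther from the surface, which is why the value should be re-tuned when the point cloud density changes.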