Point-level graspness labels are not included in the original dataset and need to be generated separately. Make sure you have downloaded the original dataset from GraspNet. The generation code is in dataset/generate_graspness.py.
cd dataset
python generate_graspness.py --dataset_root /data3/graspnet --camera_type kinect
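As an illustrative sketch (not the repo's actual code), a point-level graspness score can be thought of as, for each scene point, the fraction of sampled grasp poses at that point that are collision-free and score above a threshold. The function name, array shapes, and threshold below are assumptions for illustration only.

```python
import numpy as np

def point_graspness(grasp_scores, collision_mask, score_thresh=0.5):
    """Hypothetical graspness sketch.

    grasp_scores: (N_points, V) scores for V sampled grasps per point.
    collision_mask: (N_points, V) True where the sampled grasp collides.
    Returns a (N_points,) graspness value in [0, 1].
    """
    valid = (grasp_scores > score_thresh) & (~collision_mask)
    return valid.mean(axis=1)

# Toy data: 2 points, 4 sampled grasps each.
scores = np.array([[0.9, 0.2, 0.8, 0.6],
                   [0.1, 0.3, 0.2, 0.4]])
collide = np.array([[False, False, True, False],
                    [False, True, False, False]])
g = point_graspness(scores, collide)  # per-point graspness
```

For the first point, two of the four sampled grasps are both high-scoring and collision-free, giving graspness 0.5; the second point has none, giving 0.0.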
Simplify dataset
The original dataset's grasp_label files contain redundant data; simplifying them significantly reduces memory usage. The code is in dataset/simplify_dataset.py.
cd dataset
python simplify_dataset.py --dataset_root /data3/graspnet
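The simplification idea can be sketched as follows: load a grasp_label file, keep only the arrays that training actually uses, and rewrite it in compressed form. This is an assumption-laden illustration, not the repo's code; the key names and file format below are hypothetical.

```python
import numpy as np

def simplify_label(src_path, dst_path, keep_keys=("points", "scores")):
    """Hypothetical sketch: drop unused arrays from a .npz label file
    and store the rest as float32, compressed."""
    data = np.load(src_path)
    slim = {k: data[k].astype(np.float32) for k in keep_keys if k in data}
    np.savez_compressed(dst_path, **slim)
```

Dropping unused keys and compressing is where the memory saving comes from; the repo's actual script decides which fields are redundant for its training pipeline.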
Training and Testing
Training examples are shown in command_train.sh. --dataset_root, --camera and --log_dir should be specified according to your settings. You can use TensorBoard to visualize the training process.
Testing examples are shown in command_test.sh, which contains inference and result evaluation. --dataset_root, --camera, --checkpoint_path and --dump_dir should be specified according to your settings. Set --collision_thresh to -1 for fast inference.
Results
The "In repo" rows report the performance of my trained model without collision detection.
Evaluation results on Kinect camera:
|          | Seen AP | AP0.8 | AP0.4 | Similar AP | AP0.8 | AP0.4 | Novel AP | AP0.8 | AP0.4 |
|----------|---------|-------|-------|------------|-------|-------|----------|-------|-------|
| In paper | 61.19   | 71.46 | 56.04 | 47.39      | 56.78 | 40.43 | 19.01    | 23.73 | 10.60 |
| In repo  | 61.83   | 73.28 | 54.14 | 51.13      | 62.53 | 41.57 | 19.94    | 24.90 | 11.02 |
Troubleshooting
If you encounter the torch.floor error in MinkowskiEngine, you can fix it by editing the MinkowskiEngine source: in MinkowskiEngine/utils/quantization.py, line 262, change discrete_coordinates = _auto_floor(coordinates) to discrete_coordinates = coordinates.