Unofficial implementation of ICCV 2021 paper "Graspness Discovery in Clutters for Fast and Accurate Grasp Detection"


NYU-robot-learning/anygrasp_opensource

GraspNet graspness

Fork of Zibo Chen's implementation of the paper "Graspness Discovery in Clutters for Fast and Accurate Grasp Detection" (ICCV 2021).

[paper] [dataset] [API]

Requirements

  • Python 3
  • PyTorch 1.8
  • Open3D 0.8
  • TensorBoard 2.3
  • NumPy
  • SciPy
  • Pillow
  • tqdm
  • MinkowskiEngine

Installation

Get the code.

git clone https://github.com/rhett-chen/graspness_implementation.git
cd graspness_implementation

Install packages via Pip.

pip install -r requirements.txt

Compile and install pointnet2 operators (code adapted from votenet).

cd pointnet2
python setup.py install

Compile and install knn operator (code adapted from pytorch_knn_cuda).

cd knn
python setup.py install

Install graspnetAPI for evaluation.

git clone https://github.com/graspnet/graspnetAPI.git
cd graspnetAPI
pip install .

For MinkowskiEngine, please refer to https://github.com/NVIDIA/MinkowskiEngine
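MinkowskiEngine is typically built from source against your local PyTorch/CUDA installation. One commonly used invocation is sketched below; the exact options depend on your environment, so follow the MinkowskiEngine README for the variant matching your setup:

```shell
# Build MinkowskiEngine from source against the installed PyTorch/CUDA.
# Additional build options (BLAS backend, CUDA paths) may be required;
# see the MinkowskiEngine README for your environment.
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps
```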

Point level Graspness Generation

Point-level graspness labels are not included in the original dataset and must be generated separately. Make sure you have downloaded the original dataset from GraspNet. The generation code is in dataset/generate_graspness.py.

cd dataset
python generate_graspness.py --dataset_root /data3/graspnet --camera_type kinect

Simplify dataset

The original dataset's grasp_label files contain redundant data; simplifying them significantly reduces memory cost. The code is in dataset/simplify_dataset.py.

cd dataset
python simplify_dataset.py --dataset_root /data3/graspnet

Training and Testing

Training examples are shown in command_train.sh. --dataset_root, --camera, and --log_dir should be specified according to your settings. You can use TensorBoard to visualize the training process.
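As a concrete sketch, a training run might be launched as follows. The script name `train.py` and any flags beyond `--dataset_root`, `--camera`, and `--log_dir` are assumptions here; check command_train.sh for the exact invocation used by this repo:

```shell
# Hypothetical training launch; adjust paths to your setup.
python train.py \
    --dataset_root /data3/graspnet \
    --camera kinect \
    --log_dir logs/log_kinect

# Monitor training curves with TensorBoard, pointing --logdir at the
# same directory passed to --log_dir above.
tensorboard --logdir logs/log_kinect
```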

Testing examples are shown in command_test.sh, which covers both inference and result evaluation. --dataset_root, --camera, --checkpoint_path, and --dump_dir should be specified according to your settings. Set --collision_thresh to -1 for fast inference without collision detection.
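A corresponding sketch for inference plus evaluation; again, the script name `test.py` and the checkpoint filename are placeholders, and command_test.sh holds the authoritative flags:

```shell
# Hypothetical test launch; --collision_thresh -1 skips collision
# detection for faster inference.
python test.py \
    --dataset_root /data3/graspnet \
    --camera kinect \
    --checkpoint_path logs/log_kinect/checkpoint.tar \
    --dump_dir logs/dump_kinect \
    --collision_thresh -1
```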

Model Weights

We provide trained model weights. The model trained on RealSense data is available at Google drive (this model is recommended for real-world applications). The model trained on Kinect data is available at Google drive.

Results

The "In repo" rows report the performance of my trained models, evaluated without collision detection.

Evaluation results on Kinect camera:

            Seen                    Similar                 Novel
            AP     AP0.8  AP0.4    AP     AP0.8  AP0.4    AP     AP0.8  AP0.4
In paper    61.19  71.46  56.04    47.39  56.78  40.43    19.01  23.73  10.60
In repo     61.83  73.28  54.14    51.13  62.53  41.57    19.94  24.90  11.02

Troubleshooting

If you hit a torch.floor error in MinkowskiEngine, you can work around it by editing the MinkowskiEngine source. In MinkowskiEngine/utils/quantization.py (line 262), change discrete_coordinates = _auto_floor(coordinates) to discrete_coordinates = coordinates.

Acknowledgement

My code is mainly based on graspnet-baseline: https://github.com/graspnet/graspnet-baseline
