A CNN (GR-ConvNet) model that generates robust antipodal grasps from RGB-D input images in real time.
- Clone this repository and install the required libraries:
git clone https://github.com/Loahit5101/GR-ConvNet-grasping.git
cd GR-ConvNet-grasping
pip install -r requirements.txt
- Install TensorRT and Torch-TensorRT
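Torch-TensorRT can usually be installed from PyPI once a CUDA-enabled PyTorch build and TensorRT are present; the command below is an example rather than this repo's pinned requirement, and the package version must match your local CUDA/TensorRT setup:
pip install torch-tensorrt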
- Download and extract the Cornell Grasping Dataset, then run the following command to generate depth images:
python -m utils.dataset_processing.generate_cornell_depth <Path To Dataset>
Trained models are available here.
- Train the model:
python train.py
- Evaluate the trained model:
python test.py
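Once a checkpoint is available, inference is a single forward pass. The sketch below is only an illustration: the checkpoint path, the 224x224 crop size, and the assumption that the checkpoint stores the whole module (so `torch.load` returns the model directly) are all hypothetical.

```python
import torch

# Hypothetical checkpoint path; adjust to wherever your trained model was saved.
model = torch.load('trained-models/grconvnet.pt', map_location='cuda')
model.eval()

# GR-ConvNet consumes a 4-channel RGB-D tensor (224x224 crop assumed here).
rgbd = torch.rand(1, 4, 224, 224, device='cuda')

with torch.no_grad():
    # The network predicts pixel-wise grasp quality, angle (as cos/sin maps) and width maps.
    pos, cos, sin, width = model(rgbd)
```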
- Post-training quantization
python ptq.py
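`ptq.py` is the repository's own script; as a rough sketch of what reduced-precision compilation with Torch-TensorRT can look like (the checkpoint path, input shape, and precision choice below are assumptions, not values taken from the script):

```python
import torch
import torch_tensorrt

# Hypothetical checkpoint path; the model is assumed to be saved as a full module.
model = torch.load('trained-models/grconvnet.pt').eval().cuda()

# Compile to a TensorRT-optimized module. enabled_precisions={torch.half} requests
# FP16 kernels; use {torch.float} instead to build the FP32-optimized engine.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 4, 224, 224), dtype=torch.float)],
    enabled_precisions={torch.half},
)

rgbd = torch.rand(1, 4, 224, 224, device='cuda')
pos, cos, sin, width = trt_model(rgbd)
```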
- Benchmarking grasp inference time of optimized and unoptimized models
python trt_benchmark.py
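The numbers below come from the repo's benchmark script; a generic way to measure average GPU inference time (warm-up iterations plus explicit CUDA synchronization, with iteration counts chosen arbitrarily here) looks like:

```python
import time
import torch

def benchmark(model, input_shape=(1, 4, 224, 224), warmup=50, runs=200):
    """Return the average forward-pass time in milliseconds."""
    x = torch.rand(*input_shape, device='cuda')
    with torch.no_grad():
        # Warm-up so CUDA context creation and lazy initialization are excluded.
        for _ in range(warmup):
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000.0
```

Calling this on the baseline model and on the TensorRT-compiled model with the same input shape gives a like-for-like comparison.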
Average grasp inference time and accuracy:

| Model | Inference time (ms) | Accuracy (%) |
|---|---|---|
| Baseline | 4.59 | 95.5 |
| FP-32 | 3.71 | 94.2 |
| FP-16 | 1.45 | 93.16 |
Post-processing of the network output takes 14 ms on average.
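This post-processing step turns the four output maps into grasp candidates. A minimal sketch of the usual approach (smooth the quality map, pick local maxima, read the angle and width at those pixels) is shown below; the smoothing sigma, peak-picking thresholds, and width rescaling factor are illustrative assumptions, not the repo's exact values.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.filters import gaussian

def detect_grasps(pos, cos, sin, width, num_grasps=1):
    """Convert pixel-wise quality/angle/width maps into (row, col, angle, width) grasps."""
    # Smooth the quality map so spurious single-pixel peaks are suppressed.
    q = gaussian(pos, sigma=2.0, preserve_range=True)
    ang = 0.5 * np.arctan2(sin, cos)   # grasp angle recovered from the cos/sin maps
    w = width * 150.0                  # width rescaled to pixels (scale factor is an assumption)

    peaks = peak_local_max(q, min_distance=20, threshold_abs=0.2, num_peaks=num_grasps)
    return [(int(r), int(c), float(ang[r, c]), float(w[r, c])) for r, c in peaks]
```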