A reproduction of DensePose in PaddlePaddle
This does not reuse Facebook's code; it is based on https://github.com/MILVLG/bottom-up-attention.pytorch instead.
First, prepare the dataset. The expected directory layout is:
```
datasets/coco/
    annotations/
        densepose_{train,minival,valminusminival}2014.json
        densepose_minival2014_100.json   (optional, for testing only)
    {train,val}2014/   # image files that are mentioned in the corresponding json
```
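Before training, it can be useful to verify that the files above are actually in place. The helper below is a hypothetical sketch (not part of this repo) that lists the paths the layout expects and reports any that are missing:

```python
from pathlib import Path

def expected_coco_paths(root="datasets/coco"):
    """Return (annotation files, image dirs) implied by the layout above."""
    root = Path(root)
    ann = root / "annotations"
    splits = ["train", "minival", "valminusminival"]
    # densepose_minival2014_100.json is optional, so it is not listed here
    files = [ann / f"densepose_{s}2014.json" for s in splits]
    image_dirs = [root / "train2014", root / "val2014"]
    return files, image_dirs

def check_dataset(root="datasets/coco"):
    """Return a list of missing paths; an empty list means the layout is OK."""
    files, image_dirs = expected_coco_paths(root)
    missing = [str(p) for p in files if not p.is_file()]
    missing += [str(d) for d in image_dirs if not d.is_dir()]
    return missing
```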
The training procedure is omitted for now.
(The run commands differ slightly from those of Facebook's detectron2 v0.6.)

- Show bounding box and segmentation:

```shell
python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml model_final_162be9.pkl 1.jpg dp_segm,bbox --output 1_segm.png
```

- Show bounding box and estimated U coordinates for body parts:

```shell
python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml model_final_162be9.pkl 1.jpg dp_u,bbox --output 1_u.png
```

- Show bounding box and estimated V coordinates for body parts:

```shell
python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml model_final_162be9.pkl 1.jpg dp_v,bbox --output 1_v.png
```

- Show bounding box and estimated U and V coordinates via contour plots:

```shell
python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml model_final_162be9.pkl 1.jpg dp_contour,bbox --output 1_contour.png
```
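The four commands above differ only in the visualization mode and output name, so they can be scripted in one loop. This is a sketch, assuming `apply_net.py`, the config, and the weights file sit in the working directory as in the commands above:

```python
import subprocess

CONFIG = "configs/densepose_rcnn_R_50_FPN_s1x.yaml"
WEIGHTS = "model_final_162be9.pkl"

def build_cmd(image, visualizations, output):
    # apply_net.py show <config> <weights> <image> <vis1,vis2,...> --output <file>
    return ["python", "apply_net.py", "show", CONFIG, WEIGHTS,
            image, ",".join(visualizations), "--output", output]

def run_all(image="1.jpg"):
    """Run all four visualizations (segm, U, V, contour) on one image."""
    for vis, suffix in [("dp_segm", "segm"), ("dp_u", "u"),
                        ("dp_v", "v"), ("dp_contour", "contour")]:
        cmd = build_cmd(image, [vis, "bbox"], f"1_{suffix}.png")
        subprocess.run(cmd, check=True)
```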