| GPU | train/test FPS without AMP | train/test FPS with AMP | Increase |
|---------|---------------------------|-------------------------|---------------|
| 2080 Ti | 7.4 / 13.3 | 9.3 / 15.2 | 25.2% / 14.6% |
| 1080 Ti | 5.1 / 8.8 | 4.2 / 7.3 | -- |
🚀 Feature
Now that PyTorch 1.6.0 has `torch.cuda.amp.autocast`, I think we can make the R-CNN models support Automatic Mixed Precision (AMP).
Motivation
When AMP is enabled, training speed may increase by roughly 20% on GPUs with fast FP16 arithmetic (e.g., the Tensor Cores on a 2080 Ti); on GPUs without it, such as the 1080 Ti, it can even be slower.
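As a rough sketch of what this would look like for a user (hypothetical code, not from this issue: `train_step` and its arguments are illustrative, and the detection models are assumed to return a dict of losses in training mode, as torchvision's R-CNN models do):

```python
import torch

# Hypothetical sketch of one AMP training step for a detection model.
def train_step(model, optimizer, scaler, images, targets):
    optimizer.zero_grad()
    # autocast runs the forward pass in mixed precision on CUDA;
    # it is a no-op when CUDA is unavailable.
    with torch.cuda.amp.autocast(enabled=torch.cuda.is_available()):
        loss_dict = model(images, targets)  # dict of named losses
        loss = sum(loss_dict.values())
    # GradScaler rescales the loss so FP16 gradients do not underflow;
    # with enabled=False it degenerates to plain backward/step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.detach()
```

The `GradScaler` is the standard companion to `autocast` for training; inference under `autocast` alone needs neither scaling nor the backward pass.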
Alternatives
There are two modifications:

1. In `torchvision/ops/roi_align.py`, function `roi_align`: `rois`' datatype should be the same as `input`'s datatype, so I replace the existing code with a version that casts `rois` to `input.dtype`.
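A minimal sketch of that cast (the helper name `match_roi_dtype` is made up here; the exact placement inside `roi_align` may differ):

```python
import torch

def match_roi_dtype(input: torch.Tensor, rois: torch.Tensor) -> torch.Tensor:
    # Under autocast the feature map comes out of a conv as float16 while
    # the boxes are still float32, so roi_align would see mixed dtypes.
    if rois.dtype != input.dtype:
        rois = rois.to(dtype=input.dtype)
    return rois

features = torch.randn(1, 256, 8, 8).half()       # as if produced under autocast
rois = torch.tensor([[0.0, 1.0, 1.0, 6.0, 6.0]])  # (batch_idx, x1, y1, x2, y2), float32
rois = match_roi_dtype(features, rois)
print(rois.dtype)  # torch.float16
```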
2. In `torchvision/models/detection/_utils.py`, function `encode_boxes`: I'm confused about the `torch.jit.script` decorator. I printed the proposals' datatype, but the output was `6` instead of `torch.float32` or `torch.float16` (inside TorchScript a `dtype` prints as its integer enum value, and `6` corresponds to `torch.float32`), so I removed the decorator.

Additional context
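To illustrate the dtype point with a hypothetical, heavily simplified stand-in (this is not the torchvision source): in eager mode, with the decorator removed, the encoding arithmetic simply follows whatever float dtype the proposals arrive in, float32 normally or float16 under autocast.

```python
import torch

# Hypothetical mini version of a box-encoding step: computes one center-x
# regression target and preserves the input dtype in eager mode.
def encode_ctr_x(reference_boxes: torch.Tensor, proposals: torch.Tensor) -> torch.Tensor:
    widths = proposals[:, 2] - proposals[:, 0]
    ctr_x = proposals[:, 0] + 0.5 * widths
    gt_widths = reference_boxes[:, 2] - reference_boxes[:, 0]
    gt_ctr_x = reference_boxes[:, 0] + 0.5 * gt_widths
    return (gt_ctr_x - ctr_x) / widths

boxes = torch.tensor([[0.0, 0.0, 4.0, 4.0]], dtype=torch.float16)
props = torch.tensor([[1.0, 1.0, 3.0, 3.0]], dtype=torch.float16)
print(encode_ctr_x(boxes, props).dtype)  # torch.float16
```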
I tested the speed of `maskrcnn_resnet50_fpn` with and without `autocast()`. Dataset: VOC 2012 Segmentation, 1463 train images, 1444 val images (results in the table above).