The box AP is not consistent when the evaluation batch size changes.
To Reproduce
1. Change 'batch_sampler = torch.utils.data.sampler.BatchSampler(sampler, 1, drop_last=False)' to 'batch_sampler = torch.utils.data.sampler.BatchSampler(sampler, 4, drop_last=False)'.
2. Run: CUDA_VISIBLE_DEVICES=0,1,2,3 python tools/plain_train_net.py --num-gpus=4 --config-file configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml --eval-only --dist-url='tcp://127.0.0.1:62222' MODEL.WEIGHTS checkpoints/model_final_f6e8b1.pkl
(A programmatic sketch of the same evaluation follows the observed results below.)
Before the change (batch size 1): 'copypaste: 42.0358,62.4804,45.8763,25.2248,45.5545,54.5923'
After the change (batch size 4): 'copypaste: 42.0413,62.4595,45.9625,25.2514,45.5270,54.5579'
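For reference, here is a minimal programmatic sketch of the same evaluation. It is a reconstruction, not the exact script above: it assumes the built-in 'coco_2017_val' split is registered, that the quoted BatchSampler edit is the only code change, and it runs on a single GPU instead of 4.

import torch
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

cfg = get_cfg()
cfg.merge_from_file("configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml")
cfg.MODEL.WEIGHTS = "checkpoints/model_final_f6e8b1.pkl"

# Build the Faster R-CNN R_101_FPN model and load the released checkpoint.
model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# build_detection_test_loader uses the quoted BatchSampler line internally, so
# the only difference between the two runs is the edit from batch size 1 to 4.
loader = build_detection_test_loader(cfg, "coco_2017_val")
evaluator = COCOEvaluator("coco_2017_val", cfg, False, output_dir="./eval_out")
with torch.no_grad():
    results = inference_on_dataset(model, loader, evaluator)
print(results["bbox"])  # the box AP numbers that go into the 'copypaste:' line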
Expected behavior
I suspect the batch_norm layers may have a problem.
I expect the evaluation results for the same model on 'coco' to be identical regardless of the evaluation batch size.
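As a quick way to test this suspicion, here is a minimal diagnostic sketch (assuming the 'model' object from the evaluation sketch above) that lists every normalization layer and whether its statistics can still change at evaluation time. In the stock R_101_FPN model the backbone normally uses detectron2's FrozenBatchNorm2d, whose statistics are fixed.

import torch.nn as nn
from detectron2.layers import FrozenBatchNorm2d

def check_bn(model):
    # Print every BN-like module; a plain BatchNorm left in training mode would
    # make outputs depend on which images happen to share a batch.
    for name, m in model.named_modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.SyncBatchNorm)):
            print(f"{name}: {type(m).__name__}, training={m.training}")
        elif isinstance(m, FrozenBatchNorm2d):
            print(f"{name}: FrozenBatchNorm2d (fixed statistics)")

check_bn(model)

If every entry reports fixed statistics or eval mode, batch norm is an unlikely cause of the drift.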
Environment
sys.platform linux
Python 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]
Numpy 1.17.2
Detectron2 Compiler GCC 6.3
Detectron2 CUDA Compiler 10.0
DETECTRON2_ENV_MODULE
PyTorch 1.3.0+cu100
PyTorch Debug Build False
torchvision 0.4.1+cu100
CUDA available True
GPU 0,1,2,3,4,5,6,7 Tesla V100-SXM2-16GB
CUDA_HOME /usr/local/cuda
NVCC Cuda compilation tools, release 10.0, V10.0.130
Pillow 6.2.0
cv2 4.1.1
PyTorch built with:
GCC 7.3
Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications