I trained the proposed model on 2 GTX 1080Ti GPUs (a mini-batch of 10 images per GPU), which matches the setting in the paper. I trained for 240k iterations in total, with an initial learning rate of 1e-4 for the first 160k iterations, decayed to 1e-5 for the last 80k iterations, but the miss rate is 14.52%. Where might the problem arise?
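For reference, the schedule described above (1e-4 for 160k iterations, then 1e-5 for the final 80k of a 240k-iteration run) is a standard step decay. A minimal sketch in Python; the function name and signature are illustrative, not taken from the repository:

```python
def step_lr(iteration, base_lr=1e-4, decay_at=160_000, decay_factor=0.1):
    """Step learning-rate schedule as described in the thread:
    base_lr for the first `decay_at` iterations, then
    base_lr * decay_factor afterwards (here, through iteration 240k)."""
    return base_lr if iteration < decay_at else base_lr * decay_factor

# e.g. step_lr(0) and step_lr(159_999) give 1e-4,
# while step_lr(160_000) onwards gives 1e-5
```

If both reporters used the same effective schedule, the remaining ~1.5% gap to the paper is more likely down to effective batch size or random initialization than to the decay itself.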
I ran into the same problem. Due to GPU memory limits I set batch_size to 4 and trained on a single GTX 1080Ti. I trained for 150 epochs with an init_lr of 1e-4 and picked the best-performing checkpoint, saved at around epoch 101. Then I decreased init_lr by a factor of 10, i.e. to 1e-5, and trained until convergence. In the end the best MR I got was around 14.59% on the validation set. I wonder whether I missed some detail of the training process, or whether random initial weights really make that much difference. @liuwei16 @VideoObjectSearch