Validation loss smaller than training loss - Detectron2 #5304
misabellerv asked this question in Q&A (unanswered)
Hello! I am running an object detection project with Detectron2 (using the `faster_rcnn_R_50_FPN_1x` config), where I have training and validation data. When I check the metrics JSON file, the training loss is always higher than the validation loss! The validation loss also converges very fast while the training loss is still very high.

I thought maybe my data was badly distributed, but now I suspect it is something about my code in `LossEvalHook.py` and `train.py`. There is a thread on GitHub about printing the validation loss, and I tried to use the code from there. Although it prints the validation loss at each iteration, it is always smaller than the training loss!
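For reference, this is roughly how I compare the two curves from the metrics JSON file (a small helper written just for this post, not part of my training code; `output/metrics.json` is the default Detectron2 output location):

```python
import json

train_loss, val_loss = [], []
with open("output/metrics.json") as f:       # default OUTPUT_DIR
    for line in f:
        rec = json.loads(line)               # one JSON record per line
        if "total_loss" in rec:
            train_loss.append((rec["iteration"], rec["total_loss"]))
        if "validation_loss" in rec:         # scalar written by LossEvalHook
            val_loss.append((rec["iteration"], rec["validation_loss"]))

print("last training loss:  ", train_loss[-1])
print("last validation loss:", val_loss[-1])
```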
Could someone help me, please? =)
Here is `LossEvalHook.py`.
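It is based on the code from that thread; roughly, it looks like the sketch below (simplified for this post, so my actual file may differ in small details such as logging):

```python
import numpy as np
import torch
import detectron2.utils.comm as comm
from detectron2.engine.hooks import HookBase


class LossEvalHook(HookBase):
    """Periodically computes the loss on a validation loader and logs it."""

    def __init__(self, eval_period, model, data_loader):
        self._model = model              # the trainer's model (left in training mode)
        self._period = eval_period       # run every N iterations
        self._data_loader = data_loader  # validation loader that keeps annotations

    def _get_loss(self, data):
        # In training mode, a Detectron2 model returns a dict of losses.
        with torch.no_grad():
            loss_dict = self._model(data)
        return sum(
            v.detach().cpu().item() if isinstance(v, torch.Tensor) else float(v)
            for v in loss_dict.values()
        )

    def _do_loss_eval(self):
        losses = [self._get_loss(inputs) for inputs in self._data_loader]
        mean_loss = float(np.mean(losses))
        # This ends up next to the training losses in metrics.json / TensorBoard.
        self.trainer.storage.put_scalar("validation_loss", mean_loss)
        comm.synchronize()

    def after_step(self):
        next_iter = self.trainer.iter + 1
        is_final = next_iter == self.trainer.max_iter
        if is_final or (self._period > 0 and next_iter % self._period == 0):
            self._do_loss_eval()
```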
And here is my `train.py`.
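Again only a sketch; the dataset names, paths, and solver settings below are placeholders rather than my real ones:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import DatasetMapper, build_detection_test_loader
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

from LossEvalHook import LossEvalHook


class MyTrainer(DefaultTrainer):
    def build_hooks(self):
        hooks = super().build_hooks()
        # Insert before the last hook (the writers) so the new scalar gets flushed.
        hooks.insert(-1, LossEvalHook(
            self.cfg.TEST.EVAL_PERIOD,
            self.model,
            build_detection_test_loader(
                self.cfg,
                self.cfg.DATASETS.TEST[0],
                # is_train=True keeps the annotations that the loss computation needs.
                mapper=DatasetMapper(self.cfg, is_train=True),
            ),
        ))
        return hooks


if __name__ == "__main__":
    # Placeholder dataset registration and paths.
    register_coco_instances("my_train", {}, "annotations/train.json", "images/train")
    register_coco_instances("my_val", {}, "annotations/val.json", "images/val")

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
        "COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml")
    cfg.DATASETS.TRAIN = ("my_train",)
    cfg.DATASETS.TEST = ("my_val",)
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # placeholder
    cfg.TEST.EVAL_PERIOD = 100            # how often LossEvalHook runs
    cfg.SOLVER.MAX_ITER = 3000            # placeholder
    cfg.OUTPUT_DIR = "./output"

    trainer = MyTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()
```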
You can see the training/validation loss curve here:
Thanks!