I noticed this issue while training an iresnet. At the end of training, the accuracies reported on the 5 validation datasets were very close to 0.5, i.e. essentially random. To investigate, I added a print call at line 175 of train_val.py: print(f"{k} : {v}", flush=True). This showed that the model's performance is near random at every epoch during the validation step. However, when running the evaluation after training (i.e. running main with --evaluate), the accuracies are much better and consistent with expectations.
I therefore suspect there is an issue with how the current model's weights are loaded for the validation steps during training.
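For context, here is a minimal sketch of where the diagnostic sits. Only the print call itself is from the report above; the surrounding loop structure, function name, and variable names are assumptions about what a validation routine in train_val.py might roughly look like:

```python
import torch

def validate(model, val_loaders):
    """Hypothetical validation loop: val_loaders is assumed to be a
    dict mapping dataset name -> DataLoader for the 5 validation sets."""
    model.eval()
    accuracies = {}
    with torch.no_grad():
        for name, loader in val_loaders.items():
            correct, total = 0, 0
            for images, labels in loader:
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.size(0)
            accuracies[name] = correct / total
    for k, v in accuracies.items():
        # The diagnostic print from the report (train_val.py, line 175).
        # Near-0.5 values here at every epoch indicate random performance.
        print(f"{k} : {v}", flush=True)
    return accuracies
```

If the accuracies printed here are random while --evaluate on the saved checkpoint is good, a plausible cause is that validate receives a model whose weights differ from the ones being trained (e.g. a stale or separately instantiated copy), which is the suspicion stated above.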