Why is this happening? The results from test.py are much higher than those obtained from in-training validation at the same epoch, and this happens at most epochs. The test.py results are unrealistically good and do not match the model's actual performance. A portion of the output logs is below.

Evaluation Output of train.py on the Validation Set at Epoch 115

Evaluation Output of test.py on the Validation Set for the Model at Epoch 115
👋 Hello @3210448723, thank you for bringing this to our attention and for your interest in YOLOv5 🚀!
It seems you're encountering differences in evaluation metrics between train.py and test.py. This discrepancy might arise due to differences in how the evaluation is performed during training versus testing. To assist you better, could you please share a minimum reproducible example (MRE)? This should include:
- The exact commands you used for both train.py and test.py (an illustrative example follows this list).
- Relevant portions of your dataset or configuration files.
- Specific details about your training and testing pipelines (e.g., augmentations, hyperparameters, evaluation settings).
- Versions of YOLOv5, Python, and PyTorch being used.
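For instance, something along these lines (the dataset yaml, run directory, and hyperparameters here are purely illustrative placeholders, not your actual setup):

```bash
# Hypothetical training command -- substitute your real dataset yaml and settings
python train.py --img 640 --batch 16 --epochs 300 --data data/custom.yaml --weights yolov5s.pt

# Hypothetical evaluation command -- point --weights at the checkpoint in question
python test.py --weights runs/train/exp/weights/best.pt --data data/custom.yaml --img 640
```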
Additionally, ensure that your environment satisfies these minimum requirements:
- Python>=3.8.0
- All dependencies installed as per the requirements.txt included in the repository
- PyTorch>=1.8 with CUDA correctly set up (if using a GPU)
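A quick way to collect those version details for your report (plain Python/PyTorch introspection, nothing YOLOv5-specific):

```bash
# Print Python, PyTorch, and CUDA status to include in the issue
python - <<'EOF'
import sys
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
EOF
```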
If applicable, confirm whether you are running YOLOv5 in a local environment or in a cloud-based environment (such as Colab, Paperspace, etc.).
This is an automated response to help guide resolution, and an Ultralytics engineer will assist you further soon. Let us know if you need additional clarification! 😊
@3210448723 the significant differences in evaluation results between train.py and test.py likely stem from differences in evaluation configurations, such as augmentation settings, confidence thresholds, or IoU thresholds. During training, train.py typically uses validation with partial augmentations and real-time adjustments, while test.py evaluates the model in a purely inference-focused environment without training-specific nuances.
To investigate further:
- Ensure both scripts use consistent configurations for evaluation (e.g., --augment, imgsz, conf_thres, iou_thres); a sketch of a matched test.py invocation follows this list.
- Check that the dataset and preprocessing steps are identical for both scripts.
- Confirm that the test.py command is evaluating the same checkpoint as the one saved during training.
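As a minimal sketch (the run directory and dataset yaml below are placeholders for your own paths; --conf-thres 0.001 and --iou-thres 0.6 are the test.py defaults in v5.0, which in-training validation should also be using):

```bash
# Not a drop-in command: runs/train/exp/ and data/custom.yaml are hypothetical.
# The thresholds shown are the YOLOv5 v5.0 test.py defaults.
python test.py \
    --weights runs/train/exp/weights/best.pt \
    --data data/custom.yaml \
    --img 640 \
    --conf-thres 0.001 \
    --iou-thres 0.6 \
    --task val
```

Also note that best.pt is the checkpoint with the highest fitness over all epochs so far, not necessarily the weights from the epoch whose in-training validation line you are comparing against, so evaluating best.pt with test.py can legitimately score higher than a given epoch's validation output. Comparing against last.pt or the matching epoch checkpoint rules this out.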
For additional details on validation differences, consult the YOLOv5 validation documentation. Let me know if you need further clarification!
3210448723 changed the title to "Significant Differences in Evaluation Results on the Validation Set Between train.py During Training and test.py in YOLOv5 5.0" on Jan 10, 2025.