Dear community,
I currently face the problem that the performance of my training varies strongly.
Sometimes I have a usable caffemodel after training and sometimes not.
So I started a test: I picked 100 labeled images and ran three short trainings with them, without changing anything (same data set, same learning rate of 0.001).
Results:
First run, "Radar09" with 50,000 iterations resulted in a loss of 0.001080. After 1,000 iterations, the loss was under 1.0
Second run, "Radar10" with 2.500 iterations resulted in a loss of 8.561
Third run, "Radar11" with 2.500 iterations resulted in a loss of 0.5212
Fourth run "Radar12" with 2.500 iterations resulted in a loss of 8.909
Why is this so?
What could be the reason?
How could I avoid this?
For a noticeably larger dataset and more iterations (3,000 images and 100,000 iterations), this will be a problem.
Thank you very much! :)
System configuration
Operating system: Ubuntu 16.04
CUDA version (if applicable): 8.0
CUDNN version (if applicable): 5.1
Python version (if using pycaffe): 2.7
Ah, I see!
Thank you.
I will continue with random_seed: 20 from now on, but unfortunately I don't know which seed was used for the "vital" trainings that did not get stuck at 6.9.
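For anyone landing here later, here is a minimal sketch of how that seed can be pinned in the solver.prototxt. The value 20 is the one mentioned above; the net path, iteration count, and snapshot prefix are placeholder assumptions, not the actual files from these runs:

net: "train_val.prototxt"    # placeholder network definition
base_lr: 0.001               # learning rate used in the runs above
max_iter: 2500
snapshot: 500
snapshot_prefix: "radar"     # placeholder snapshot name
solver_mode: GPU
random_seed: 20              # fixes weight initialization and shuffling so repeated runs match

With a fixed seed and otherwise identical data and solver settings, repeated runs should reproduce each other, so a "stuck" run can at least be reproduced and investigated instead of appearing at random.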