Currently, the number of epochs to run gradient descent for is set in several different places in the code, one per experiment, which makes it hard to track how many epochs are being run or where to change the value.
We set it in:

- `databasePrediction/localizationTuningAx.py` line 34 - 1000 epochs, used for tuning.
- `databasePrediction/localizationPyTorchGeo.py` line 239 - 2000 epochs, used for full training after tuning.
- `caseStudy/trainCaseStudy.py` line 258 - 300 epochs, used for the case study (since those datasets are smaller).
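One way to fix this would be a single shared config module that every script imports from. The sketch below is only a suggestion; the module name `epochConfig.py` and the field names are hypothetical, and the epoch values are the ones currently hard-coded in the three files above.

```python
# epochConfig.py (hypothetical): one place to define every epoch count.
from dataclasses import dataclass


@dataclass(frozen=True)
class EpochConfig:
    # Values mirror the current hard-coded settings.
    tuning: int = 1000         # databasePrediction/localizationTuningAx.py
    full_training: int = 2000  # databasePrediction/localizationPyTorchGeo.py
    case_study: int = 300      # caseStudy/trainCaseStudy.py (smaller datasets)


EPOCHS = EpochConfig()
```

Each training script would then do `from epochConfig import EPOCHS` and use e.g. `EPOCHS.tuning` in its loop, so all epoch counts live (and can be audited) in one file. Making the dataclass frozen prevents one experiment from silently mutating the value for another.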