Number of epochs not centralized #2

Open
csmagnano opened this issue Dec 12, 2022 · 0 comments
Comments

@csmagnano (Collaborator) commented Dec 12, 2022

Currently we set the number of epochs to run gradient descent for in several different places in the code, one per experiment, which makes it difficult to keep track of how many epochs are being run or where the value can be changed.

We set it in:

- `databasePrediction/localizationTuningAx.py`, line 34: 1000 epochs, used for hyperparameter tuning.
- `databasePrediction/localizationPyTorchGeo.py`, line 239: 2000 epochs, used for full training after tuning.
- `caseStudy/trainCaseStudy.py`, line 258: 300 epochs, used for the case study (since those datasets are smaller).
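
One possible way to centralize this (a minimal sketch, not part of the current code; the module name and constant names below are illustrative) would be a single shared module, e.g. a hypothetical `epochConfig.py` at the repository root, that defines each epoch count once:

```python
# epochConfig.py (hypothetical shared module; name and constants are illustrative)
# Central place for the number of gradient descent epochs used by each experiment,
# so the values listed above are defined once instead of hard-coded per script.

TUNING_EPOCHS = 1000         # databasePrediction/localizationTuningAx.py (hyperparameter tuning)
FULL_TRAINING_EPOCHS = 2000  # databasePrediction/localizationPyTorchGeo.py (full training after tuning)
CASE_STUDY_EPOCHS = 300      # caseStudy/trainCaseStudy.py (smaller case study datasets)
```

Each script could then replace its hard-coded value with an import, for example:

```python
# Illustrative usage in a training script (function name is a placeholder
# for the existing training loop body, not an actual function in the repo)
from epochConfig import FULL_TRAINING_EPOCHS

for epoch in range(FULL_TRAINING_EPOCHS):
    train_one_epoch()
```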
