Multistep training with batch_size >=1 per GPU #139
Conversation
…rs with correct settings
Can you add some details? Is this going to replace all trainers/datasets or just the multistep ones?
I just added the single-step dataset. I am aiming for consolidation down to a single trainer with config options to specify what you need (single or multi-step). Working on that and should have it ready by the end of the week. So far the train_multistep.py in this PR has been tested with the new multi-step datasets. I will update the documentation once it's all working.
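A minimal sketch of what the consolidation described above could look like: a single entry point that reads the forecast mode from a config instead of requiring separate single-step and multi-step scripts. All names here (`build_dataset`, the `history_len`/`forecast_len` keys) are illustrative assumptions, not the PR's actual API.

```python
# Hypothetical sketch: one trainer entry point that picks single- vs
# multi-step behavior from config options rather than separate scripts.
# Config keys and function names are assumptions for illustration.

def build_dataset(conf: dict) -> dict:
    """Return a dataset spec matching the configured forecast mode."""
    history_len = conf.get("history_len", 1)
    forecast_len = conf.get("forecast_len", 1)
    # More than one forecast step means the model is rolled forward
    # autoregressively during training (the multi-step case).
    mode = "multi-step" if forecast_len > 1 else "single-step"
    return {"mode": mode, "history_len": history_len, "forecast_len": forecast_len}

conf = {"history_len": 2, "forecast_len": 6}
spec = build_dataset(conf)
print(spec["mode"])  # multi-step
```

The point is that the same trainer code path handles both cases, with the config deciding how many forecast steps each sample carries.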
…rsal.py deprecation of many scripts coming soon
@kanz76 I think the bug involving batch size and history length is now corrected.
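The PR does not describe the bug itself, but a common pitfall when batch size and history length interact is computing sample start indices without accounting for the full window each sample spans. A hedged sketch of the general idea, with all names and logic assumed for illustration:

```python
# Hypothetical sketch of a batch_size / history_len interaction: each sample
# spans history_len + forecast_len timesteps, so valid start indices must
# stop early enough that every window fits inside the time series.
# These helpers are illustrative, not the PR's actual implementation.

def valid_starts(num_timesteps: int, history_len: int, forecast_len: int) -> list:
    """Start indices whose window [start, start + span) fits in the series."""
    span = history_len + forecast_len
    return list(range(num_timesteps - span + 1))

def make_batches(starts: list, batch_size: int) -> list:
    """Group start indices into per-GPU batches of size batch_size."""
    return [starts[i:i + batch_size] for i in range(0, len(starts), batch_size)]

starts = valid_starts(num_timesteps=10, history_len=2, forecast_len=3)
print(starts)                       # [0, 1, 2, 3, 4, 5]
print(make_batches(starts, 2))      # [[0, 1], [2, 3], [4, 5]]
```

If the window span is ignored when enumerating starts, samples near the end of the series index past the data, which is the kind of off-by-window error that only surfaces for certain combinations of batch size and history length.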
I tested:
- era5_multistep_batcher.py (except MultiprocessingBatcherPrefetch)
- load_dataset_and_dataloader.py
- train_universal.py

They all look good!