This is a fix for nyu-mll/jiant#1087. I decided to make the change on the model-loading side because making the change on the model-saving side, as suggested in nyu-mll/jiant#1087, would fix multi- to single-GPU model loading but would break multi- to multi-GPU loading (the case where we reload a checkpoint that was trained on multi-GPU onto another multi-GPU machine).
Additionally, I did some light cleanup of the model-loading code in the trainer to remove redundancy, and deleted an unused parameter.
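For context, here is a minimal sketch of the kind of load-time remapping this change describes. The function name and structure are hypothetical illustrations, not jiant's actual API: a checkpoint saved from a `torch.nn.DataParallel` model prefixes every `state_dict` key with `module.`, so remapping keys when loading (rather than stripping them when saving) keeps both multi-to-single and multi-to-multi GPU loading working.

```python
# Hypothetical sketch (not jiant's actual code): remap state_dict keys at
# load time so a checkpoint fits the target model, whether or not either
# side uses torch.nn.DataParallel (which prefixes keys with "module.").

def remap_state_dict_keys(state_dict, model_is_parallel):
    """Return a state_dict whose keys match the target model's layout.

    state_dict: mapping of parameter name -> tensor (a plain dict here
    for illustration; PyTorch uses an OrderedDict).
    model_is_parallel: True if the model being loaded into is wrapped in
    torch.nn.DataParallel, i.e. expects keys starting with "module.".
    """
    ckpt_is_parallel = all(k.startswith("module.") for k in state_dict)
    if ckpt_is_parallel and not model_is_parallel:
        # multi-GPU checkpoint -> single-GPU model: strip the prefix
        return {k[len("module."):]: v for k, v in state_dict.items()}
    if not ckpt_is_parallel and model_is_parallel:
        # single-GPU checkpoint -> multi-GPU model: add the prefix
        return {"module." + k: v for k, v in state_dict.items()}
    # layouts already match (multi -> multi, or single -> single)
    return dict(state_dict)
```

In use, this would sit between `torch.load(path)` and `model.load_state_dict(...)`, with `model_is_parallel` set to `isinstance(model, torch.nn.DataParallel)`.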
Tests
Multi -> Single GPU: I tested this by training a roberta-large model on SST on multi-GPU, then loading that checkpoint on a single GPU for further training.
Multi -> Multi GPU: jiant already exercises this path implicitly, since we load the best checkpoint before running evaluation, so it was covered when I first trained the roberta-large SST model on multi-GPU.
You can repair most issues by installing black and running: black -l 100 ./*. If you contribute often, have a look at the 'Contributing' section of the README for instructions on doing this automatically.
Comment by pyeres Tuesday May 19, 2020 at 19:00 GMT
Hi @zphang & @HaokunLiu — is either of you available to provide a substantive review for this PR? The core concerns seem to be 1) whether this addresses issue #1087, and 2) whether these changes introduce new risks/regressions.
Issue by pruksmhc
Sunday May 17, 2020 at 17:32 GMT
Originally opened as nyu-mll/jiant#1091
pruksmhc included the following code: https://github.com/nyu-mll/jiant/pull/1091/commits