add tests for single scalar return from training #2587
Conversation
Codecov Report
@@           Coverage Diff           @@
##           master   #2587    +/-  ##
======================================
  Coverage      91%      91%
======================================
  Files          70       70
  Lines        5755     5763     +8
======================================
+ Hits         5243     5252     +9
+ Misses        512      511     -1
Hello @williamFalcon! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-07-11 21:26:56 UTC
self.training_step_end_called = True

# make sure loss has the grad
assert isinstance(output, torch.Tensor)
This looks like a test; why is it in DeterministicModel?
Or maybe the better question is: why are these tests not part of the template?
assert len(trainer.progress_bar_metrics) == 0

# make sure training outputs what is expected
for batch_idx, batch in enumerate(model.train_dataloader()):
Why don't you set these directly?
batch_idx = 0
batch = model.train_dataloader()[0]
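One caveat about the suggestion above: a PyTorch `DataLoader` is an iterable, not a sequence, so indexing it with `[0]` generally raises an error; fetching the first batch is usually done with `next(iter(...))`. A minimal sketch, with a plain list standing in for `model.train_dataloader()` (the `fake_dataloader` and `first_batch` names are illustrative, not from this PR):

```python
def first_batch(dataloader):
    # DataLoaders support iteration but not __getitem__, so use
    # iter()/next() rather than indexing to grab the first batch.
    return next(iter(dataloader))

# Stand-in for model.train_dataloader(): any iterable of batches works.
fake_dataloader = [[1.0, 2.0], [3.0, 4.0]]

batch_idx = 0
batch = first_batch(fake_dataloader)
```

This avoids the `enumerate` loop when the test only ever inspects the first batch.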
model.training_step_end = model.training_step_end_scalar
model.val_dataloader = None

trainer = Trainer(fast_dev_run=True, weights_summary=None)
Missing `tmpdir`: the Trainer should write into the test's temporary directory (e.g. `Trainer(default_root_dir=tmpdir, ...)`) so test runs don't leave artifacts in the working directory.
assert not model.training_step_end_called
assert not model.training_epoch_end_called

# make sure training outputs what is expected
All these asserts below seem to be the same in all three functions. Rather wrap them into a single assert block; otherwise it takes some time to check what the difference actually is.
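One way to follow this suggestion is to factor the shared assertions into a single helper that each of the three tests calls, passing only the flags it expects to differ. A sketch under assumed names: `assert_hook_state` is hypothetical, while the `training_step_end_called` / `training_epoch_end_called` attributes mirror the flags used on the test model in this PR:

```python
from types import SimpleNamespace

def assert_hook_state(model, expect_step_end, expect_epoch_end):
    """Shared assertion block for the three scalar-return tests.

    Keeping it in one place makes the per-test differences (which hooks
    were expected to fire) visible at the call site instead of buried in
    three near-identical assert runs.
    """
    assert model.training_step_end_called is expect_step_end
    assert model.training_epoch_end_called is expect_epoch_end

# Example: a run where neither *_end hook was defined on the model.
model = SimpleNamespace(
    training_step_end_called=False,
    training_epoch_end_called=False,
)
assert_hook_state(model, expect_step_end=False, expect_epoch_end=False)
```

Each test then reads as a one-line statement of what it expects, and a reader can diff the three call sites at a glance.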
Add tests to make sure a single scalar can be returned from the train loop.