Fixing tests #936
Conversation
@@ -177,14 +177,19 @@ def reset_train_dataloader(self, model):
         self.is_iterable_train_dataloader = (
             EXIST_ITER_DATASET and isinstance(self.train_dataloader.dataset, IterableDataset)
         )
-        if self.is_iterable_train_dataloader and not isinstance(self.val_check_interval, int):
+        if self.is_iterable_dataloader(self.train_dataloader) and not isinstance(self.val_check_interval, int):
This seems wrong. It needs to be self.is_iterable_train_dataloader as defined on line 177.
Or remove line 177.
And what if you train with another dataloader? We should check whether it is updated...
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L857
I don't understand what you mean.
At what point is/was it determined whether the train dataset is iterable?
OK, found it, and it would be fine as it was... shall I revert it?
@Borda the refactors are nice, so maybe this PR is about refactoring?
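[Editor's note] A short sketch of the behavior the guard above enforces may help; this is an illustrative stand-in, not the Trainer's actual code, and StreamingDataset/check_val_interval are hypothetical names. The point: an IterableDataset exposes no __len__, so a fractional val_check_interval cannot be resolved to a batch index, and only an integer number of batches makes sense.

```python
from torch.utils.data import IterableDataset, DataLoader

class StreamingDataset(IterableDataset):
    """Yields samples from a stream; its length is unknown up front."""
    def __iter__(self):
        i = 0
        while i < 1000:  # pretend this is an unbounded stream
            yield i
            i += 1

def check_val_interval(dataloader, val_check_interval):
    """Mimics the guard in reset_train_dataloader: reject fractional
    intervals when the training dataset is iterable."""
    is_iterable = isinstance(dataloader.dataset, IterableDataset)
    if is_iterable and not isinstance(val_check_interval, int):
        raise ValueError(
            "When using an IterableDataset, val_check_interval must be an "
            "int (number of batches), not a fraction of the epoch."
        )

loader = DataLoader(StreamingDataset(), batch_size=32)
check_val_interval(loader, 100)    # ok: validate every 100 batches
# check_val_interval(loader, 0.5)  # would raise: epoch length is unknown
```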
         # when testing make sure user defined a test step
-        if test and not (self.is_overriden('test_step') or self.is_overriden('test_end')):
+        if test_mode and not (self.is_overriden('test_step') or self.is_overriden('test_end')):
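[Editor's note] For context, here is a rough, hypothetical reconstruction of what the is_overriden check accomplishes; the real implementation lives in the Trainer, and the class and function names below are illustrative only. The idea is to compare a model's method against the base-class implementation to detect whether the user defined their own.

```python
class LightningModuleBase:
    """Stand-in for the library's base module."""
    def test_step(self, batch, batch_idx): ...
    def test_end(self, outputs): ...

def is_overriden(model, method_name):
    # user code overrides the method iff the attribute resolved on the
    # model's class is not the base class's function object
    base = getattr(LightningModuleBase, method_name)
    return getattr(type(model), method_name) is not base

class MyModel(LightningModuleBase):
    def test_step(self, batch, batch_idx):
        return {"loss": 0.0}

model = MyModel()
test_mode = True
if test_mode and not (is_overriden(model, "test_step") or is_overriden(model, "test_end")):
    raise RuntimeError("You called test but did not define test_step or test_end.")
```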
We should make sure, after we merge #938 into master, that we update the PR here so that auto-merge doesn't reset this line again.
@Borda unblocked. Rebase so we can merge?
There was something wrong; your tests were passing just by caching luck...
@Borda the tests passed on GPUs without any caching...
The tests use the same MNIST cache; once you download it, it is used for all tests...
prepare_data goes beyond tests... it's used for DDP and TPU.
It seems we are talking about slightly different things...
The case here was that... Really surprised that it was passing on your side lol :]
Sure.
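[Editor's note] To illustrate the point about prepare_data above: a minimal sketch assuming torchvision is installed, with a simplified stub in place of the real LightningModule. prepare_data is meant for one-time, single-process work such as downloading, so under DDP/TPU the download happens once and the on-disk MNIST cache is shared by every rank and every test run.

```python
from torchvision.datasets import MNIST

class MyModel:  # stand-in for pl.LightningModule
    def prepare_data(self):
        # downloads once; subsequent calls hit the on-disk cache
        MNIST(root="./data", train=True, download=True)
        MNIST(root="./data", train=False, download=True)
```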
* abs import
* rename test model
* update trainer
* revert test_step check
* move tags
* fix test_step
* clean tests
* fix template
* update dataset path
* fix parent order
What does this PR do?
Fixes #918; some changes were not properly propagated.
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃