🐛 Bug

Currently, `num_training_batches` is set to `inf` when the training dataset is iterable-style, which can lead to this error:
Traceback (most recent call last):
File "scripts/msmacro.py", line 119, in <module>
main()
File "scripts/msmacro.py", line 115, in main
trainer.fit(model)
File "/home/zhaohao/.anaconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 417, in fit
self.run_pretrain_routine(model)
File "/home/zhaohao/.anaconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 481, in run_pretrain_routine
self.get_dataloaders(ref_model)
File "/home/zhaohao/.anaconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 199, in get_dataloaders
self.init_train_dataloader(model)
File "/home/zhaohao/.anaconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 78, in init_train_dataloader
self.val_check_batch = int(self.num_training_batches * self.val_check_interval)
OverflowError: cannot convert float infinity to integer
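The failure reduces to converting an infinite float to an integer, which Python rejects. The snippet below is only a minimal sketch of the arithmetic in `data_loading.py`, not Lightning code itself (the default value of `val_check_interval` is assumed for illustration):

```python
# Minimal sketch: num_training_batches is float("inf") for an iterable-style
# dataset, and val_check_interval defaults to a fraction of an epoch.
num_training_batches = float("inf")
val_check_interval = 1.0  # fractional value assumed for illustration

val_check_batch = int(num_training_batches * val_check_interval)
# OverflowError: cannot convert float infinity to integer
```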
Workaround: set `val_check_interval` to an integer.
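A sketch of that workaround, assuming the `Trainer` signature of this release (the value 1000 is just a placeholder): an integer `val_check_interval` is interpreted as "validate every N training batches", so the infinite `num_training_batches` is never multiplied.

```python
from pytorch_lightning import Trainer

# Workaround sketch: pass an int so val_check_batch is taken directly from
# val_check_interval instead of being derived from num_training_batches.
trainer = Trainer(val_check_interval=1000)  # placeholder value
trainer.fit(model)  # model is the LightningModule from the script above
```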
However, if the validation dataset is also iterable-style, the following error is raised instead, because there is no dataset type check when loading the validation dataloaders:
Traceback (most recent call last):
File "scripts/msmacro.py", line 119, in <module>
main()
File "scripts/msmacro.py", line 115, in main
trainer.fit(model)
File "/home/zhaohao/Documents/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 417, in fit
self.run_pretrain_routine(model)
File "/home/zhaohao/Documents/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 481, in run_pretrain_routine
self.get_dataloaders(ref_model)
File "/home/zhaohao/Documents/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 201, in get_dataloaders
self.init_val_dataloader(model)
File "/home/zhaohao/Documents/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 117, in init_val_dataloader
self.num_val_batches = sum(len(dataloader) for dataloader in self.get_val_dataloaders())
File "/home/zhaohao/Documents/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 117, in <genexpr>
self.num_val_batches = sum(len(dataloader) for dataloader in self.get_val_dataloaders())
File "/home/zhaohao/.anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 297, in __len__
return len(self._index_sampler) # with iterable-style dataset, this will error
File "/home/zhaohao/.anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 212, in __len__
return (len(self.sampler) + self.batch_size - 1) // self.batch_size
File "/home/zhaohao/.anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 57, in __len__
raise TypeError('Cannot determine the DataLoader length of a IterableDataset')
TypeError: Cannot determine the DataLoader length of a IterableDataset
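One possible guard, shown only as a sketch (`count_val_batches` is a hypothetical helper, not the project's actual patch): check the dataset type before calling `len()` on the validation dataloaders, mirroring the check the training path already needs for iterable-style datasets.

```python
from torch.utils.data import IterableDataset

def count_val_batches(val_dataloaders):
    # Sketch of a dataset-type check before summing validation batch counts.
    if any(isinstance(dl.dataset, IterableDataset) for dl in val_dataloaders):
        # len() is undefined for iterable-style datasets; fall back to inf
        # (or a user-supplied limit) instead of raising TypeError.
        return float("inf")
    return sum(len(dl) for dl in val_dataloaders)
```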