
update pl to 1.4.4 #36

Merged
merged 3 commits into dev from dev_35_update_pl
Sep 1, 2021

Conversation

lvoegtlin
Contributor

@lvoegtlin lvoegtlin commented Aug 30, 2021

Description

Updated PL to 1.4.4 and made the needed adaptations to the current pipeline.

How to Test/Run?

First install:
pip install pytorch-lightning==1.4.4

then run:
python run.py
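
As a quick sanity check before running (just a sketch, not part of the pipeline), you can confirm the expected Lightning version is the one active in the environment:

import pytorch_lightning as pl

# pytorch_lightning exposes its version string; anything other than 1.4.4 means
# the install above did not take effect in the active environment.
assert pl.__version__ == "1.4.4", pl.__version__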

@lvoegtlin lvoegtlin added the "If time" (No rush at all), "Module" (related to a task), "Pipeline" (The general Hydra system), and "DataModule" (Related to a data module) labels Aug 30, 2021
@lvoegtlin lvoegtlin requested a review from powl7 August 30, 2021 14:41
@lvoegtlin lvoegtlin self-assigned this Aug 30, 2021
@lvoegtlin lvoegtlin linked an issue Aug 30, 2021 that may be closed by this pull request
@lvoegtlin
Contributor Author

lvoegtlin commented Aug 31, 2021

There are different kinds of errors. They look like a race condition in the data module:

Traceback (most recent call last):
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 986, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 2243714) is killed by signal: Aborted.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
    self.fit_loop.run()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
    epoch_output = self.epoch_loop.run(train_dataloader)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 112, in run
    self.on_advance_end()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 177, in on_advance_end
    self._run_validation()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 257, in _run_validation
    self.val_loop.run()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
    dl_outputs = self.epoch_loop.run(
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 93, in advance
    batch_idx, batch = next(dataloader_iter)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1182, in _next_data
    idx, data = self._get_data()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1138, in _get_data
    success, data = self._try_get_data()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 999, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 2243714) exited unexpectedly
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "run.py", line 35, in <module>
    main()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/hydra/main.py", line 49, in decorated_main
    _run_hydra(
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/hydra/_internal/utils.py", line 367, in _run_hydra
    run_and_report(
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/hydra/_internal/utils.py", line 214, in run_and_report
    raise ex
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
    return func()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/hydra/_internal/utils.py", line 368, in <lambda>
    lambda: hydra.run(
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 110, in run
    _ = ret.return_value
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/hydra/core/utils.py", line 233, in return_value
    raise self._return_value
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/hydra/core/utils.py", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "run.py", line 31, in main
    return train(config)
  File "/home/lars/unsuperwised_framwork/unsupervised_learning/src/train.py", line 119, in train
    trainer.fit(model=task, datamodule=datamodule)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
    self._run(model)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
    self._dispatch()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
    self.accelerator.start_training(self)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
    return self._run_train()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1058, in _run_train
    self.training_type_plugin.reconciliate_processes(traceback.format_exc())
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 459, in reconciliate_processes
    raise DeadlockDetectedException(f"DeadLock detected from rank: {self.global_rank} \n {trace}")
pytorch_lightning.utilities.exceptions.DeadlockDetectedException: DeadLock detected from rank: 0
 Traceback (most recent call last):
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 986, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 2243714) is killed by signal: Aborted.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
    self.fit_loop.run()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
    epoch_output = self.epoch_loop.run(train_dataloader)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 112, in run
    self.on_advance_end()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 177, in on_advance_end
    self._run_validation()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 257, in _run_validation
    self.val_loop.run()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
    dl_outputs = self.epoch_loop.run(
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 93, in advance
    batch_idx, batch = next(dataloader_iter)
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1182, in _next_data
    idx, data = self._get_data()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1138, in _get_data
    success, data = self._try_get_data()
  File "/home/lars/.conda/envs/unsupervised_learning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 999, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 2243714) exited unexpectedly
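
If the crashes really come from the data loading side, one way to isolate them (a sketch only; dataset stands for whatever the datamodule hands to its validation loader) is to iterate single-process, so a failing sample raises an ordinary traceback instead of killing a worker:

from torch.utils.data import DataLoader


def debug_iterate(dataset):
    # num_workers=0 keeps loading in the main process, so a bad sample raises
    # directly here instead of showing up as "worker killed by signal: Aborted".
    loader = DataLoader(dataset, batch_size=1, num_workers=0)
    for batch in loader:
        pass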

…e self.log() function which was unexpectedly using memory on the GPU
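For context on the logging fix referenced in the commit above, a minimal sketch (the module and metric name are made up, not the actual task code) of logging a detached scalar so self.log() does not keep GPU memory or the autograd graph alive:

import torch
import pytorch_lightning as pl


class ExampleTask(pl.LightningModule):
    # Hypothetical module, only to illustrate the logging pattern.
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        # Log a plain Python float instead of the live (possibly GPU) tensor,
        # so the logged value does not hold on to device memory.
        self.log("val/loss", loss.detach().item(), on_epoch=True)
        return loss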
@lvoegtlin lvoegtlin removed the "DataModule" (Related to a data module) label Aug 31, 2021
Contributor

@powl7 powl7 left a comment


Works!

@powl7 powl7 merged commit 478400d into dev Sep 1, 2021
@powl7 powl7 deleted the dev_35_update_pl branch September 1, 2021 08:04
Labels
"If time" (No rush at all), "Module" (related to a task), "Pipeline" (The general Hydra system)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Update PL to 1.4.2 or newer
2 participants