
Migrate from gpus flag (to be deprecated in PL 1.7) to accelerator and devices #78

Merged
1 commit merged into dev from feature/update-trainer-config on Sep 5, 2022

Conversation

@nathanpainchaud (Member) commented Aug 30, 2022

Link to the PR that deprecated these flags; the deprecation shipped in Lightning 1.7.0.

@lemairecarl and @ThierryJudge I'm tagging you as reviewers not because I expect a lot of feedback, but more as a warning about the change, since it's one of the few Trainer flags that I think you might use on your end.
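
For context, the generic mapping between the old and new Trainer flags looks roughly like this in a Hydra-style trainer config (a sketch of the Lightning migration in general, not necessarily the exact change made in this PR):

```yaml
# Before (deprecated in Lightning 1.7):
_target_: pytorch_lightning.Trainer
gpus: 2

# After (equivalent):
_target_: pytorch_lightning.Trainer
accelerator: gpu
devices: 2
```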

@nathanpainchaud added the enhancement label Aug 30, 2022
@nathanpainchaud self-assigned this Aug 30, 2022
@nathanpainchaud force-pushed the feature/update-trainer-config branch from 92a243b to 41a9ea5 on August 30, 2022 09:17
@@ -1,3 +1,4 @@
_target_: pytorch_lightning.Trainer
Collaborator commented:

What will be the default behaviour now?

Member Author replied:

With this config the default behavior should remain the same. accelerator='auto' will choose 'gpu' if one is available (or other types of accelerators like TPUs, etc.), but will fall back to CPU otherwise. And since devices is not set, by default it will try to use every available device on the machine. I did wonder whether we would want to set devices to 1 to use only a single GPU on multi-GPU setups (like Beluga). But this is also something we could override in experiment configs that target clusters (since I'm already overriding some otherwise generic parameters like num_workers, it wouldn't be unheard of).
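
Based on that explanation, the merged config presumably ends up along these lines (a sketch; only the `_target_` line of the diff is visible here):

```yaml
_target_: pytorch_lightning.Trainer
accelerator: auto  # picks GPU (or TPU, etc.) when available, falls back to CPU
# `devices` is deliberately left unset: per the discussion above, Lightning then
# uses every available device; cluster-specific experiment configs (e.g. for
# Beluga) could override this with `devices: 1`.
```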

@nathanpainchaud (Member Author) commented:

@ThierryJudge, since you haven't continued the discussion, I'll assume you agree to the slight change in default behavior, and I'll merge this PR later today unless I hear back from you in the meantime.

@nathanpainchaud nathanpainchaud merged commit 9797673 into dev Sep 5, 2022
@nathanpainchaud nathanpainchaud deleted the feature/update-trainer-config branch September 5, 2022 23:17