
[Fix] Fix bug in non-distributed multi-gpu training/testing #1247

Merged
MeowZheng merged 4 commits into open-mmlab:master on Jan 28, 2022

Conversation

MengzhangLI
Contributor

Related PRs:
open-mmlab/mmdetection#7019
open-mmlab/mmaction2#1406
open-mmlab/mmflow#85

Since MMDP does not support non-distributed multi-GPU training, the --gpus option in train.py no longer has any effect and is removed. --gpu-ids is changed to --gpu-id, because only one GPU can be specified for non-distributed training and testing: if more than one GPU is passed, MMDP raises an assertion error, since it does not support running on multiple GPUs. A sketch of the resulting argument handling follows.
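For reference, here is a minimal sketch of the new argument handling, assuming the usual tools/train.py layout in OpenMMLab repos; the surrounding code and help text are illustrative, not the exact PR diff.

```python
# Sketch only: shows the --gpu-id argument that replaces --gpus/--gpu-ids.
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='Train a segmentor')
    # '--gpus' is removed and '--gpu-ids' becomes '--gpu-id':
    # MMDataParallel (MMDP) only supports a single device in
    # non-distributed mode, so accepting multiple GPU ids would
    # trip MMDP's assertion at runtime.
    parser.add_argument(
        '--gpu-id',
        type=int,
        default=0,
        help='id of gpu to use '
        '(only applicable to non-distributed training)')
    return parser.parse_args()


if __name__ == '__main__':
    args = parse_args()
    # Downstream, the config would store a single-element list, e.g.
    #   cfg.gpu_ids = [args.gpu_id]
    # so MMDataParallel(model, device_ids=cfg.gpu_ids) sees exactly one GPU.
    print(f'training will use GPU {args.gpu_id}')
```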

@MeowZheng MeowZheng merged commit 622f28e into open-mmlab:master Jan 28, 2022
@MengzhangLI MengzhangLI deleted the fix_non_dist branch February 16, 2022 11:11
bowenroom pushed a commit to bowenroom/mmsegmentation that referenced this pull request Feb 25, 2022
…ab#1247)

* Fix bug in non-distributed training

* Fix bug in non-distributed testing

* delete uncomment lines

* add args.gpus
wjkim81 pushed a commit to wjkim81/mmsegmentation that referenced this pull request Dec 3, 2023
…-mmlab#1247)

* update installation of mmcv and pytorch in colab tutorial

* update cell output