Imbalanced GPU usage #293
Commits referencing this issue:
- aravind-h-v/mmsegmentation (Mar 27, 2023): initial DDPM pipeline and docs for issue open-mmlab#293, co-authored by Patrick von Platen
- aravind-h-v/mmsegmentation (Mar 27, 2023)
- aravind-h-v/mmsegmentation (Mar 27, 2023): DDIM docs for issue open-mmlab#293
- aravind-h-v/mmsegmentation (Mar 27, 2023): Karras-VE docs for issue open-mmlab#293
- wjkim81/mmsegmentation (Dec 3, 2023)
In version 1.1.6, when training DeepLabV3+ R-101-D8 (769x769, 80000 iterations) on Cityscapes with 2 GPUs using MMDataParallel (non-distributed training), I modified line 68 of mmseg/apis/train.py:
to
and the GPU usage is as follows:
It seems that GPU 1 has not loaded any data. What is the reason? Is it related to the train_step() function?
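For context on the symptom: DataParallel-style wrappers such as MMDataParallel scatter each batch along dimension 0 across the configured device_ids, so a second GPU only receives work if the wrapper was built with both device ids and each iteration's batch actually contains samples for it. A minimal pure-Python sketch of that scatter step (illustrative only, not mmcv's actual implementation; `chunk_batch` is a hypothetical helper):

```python
def chunk_batch(batch, device_ids):
    """Split a batch (list of samples) across devices, DataParallel-style.

    Each device gets a contiguous chunk along dim 0. If the batch has
    fewer samples than there are devices, the trailing devices receive
    nothing -- which shows up as an idle GPU during training.
    """
    n = len(device_ids)
    # ceil-divide so earlier devices absorb any remainder
    per_device = (len(batch) + n - 1) // n
    chunks = {}
    for i, dev in enumerate(device_ids):
        chunk = batch[i * per_device:(i + 1) * per_device]
        if chunk:  # devices past the end of the data get no chunk at all
            chunks[dev] = chunk
    return chunks

# 4 samples across 2 GPUs: each device gets 2 samples
print(chunk_batch(list(range(4)), device_ids=[0, 1]))  # {0: [0, 1], 1: [2, 3]}

# a batch of 1 sample across 2 GPUs leaves GPU 1 idle
print(chunk_batch([42], device_ids=[0, 1]))  # {0: [42]}
```

So, under this scatter model, GPU 1 sitting idle is consistent with either the wrapper being constructed with `device_ids=[0]` only, or the effective per-iteration batch being too small to split across both devices.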