Multi-gpu training problem #6
Comments
I tried to use the official MMEngine distributed training command, but it reported an error.
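For context, this is a minimal sketch of the kind of distributed launch being referred to, assuming this repo follows the usual OpenMMLab layout with a tools/dist_train.sh wrapper; the config path and GPU count are placeholders, not taken from this repo:

```bash
# Assumption: tools/dist_train.sh exists as in other OpenMMLab-style repos;
# configs/your_config.py is a placeholder for the actual config file.
bash tools/dist_train.sh configs/your_config.py 4

# Roughly equivalent launch with torchrun (PyTorch >= 1.10), assuming
# tools/train.py accepts the usual --launcher argument:
torchrun --nproc_per_node=4 tools/train.py configs/your_config.py --launcher pytorch
```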
I found the solution by adding find_unused_parameters=True.
Yes, you are right! I will update this soon.
With TORCH_DISTRIBUTED_DEBUG=DETAIL you can find these unmatched gradients and debug them. If find_unused_parameters = True is used, does it increase the training time?
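As a rough illustration, the environment variable can be set for a single run like this (the launch script and config path are placeholders, not confirmed for this repo):

```bash
# TORCH_DISTRIBUTED_DEBUG=DETAIL makes DDP report which parameters did not
# receive gradients during backward, which helps locate the unused ones.
# dist_train.sh and the config path are placeholders for this repo's actual files.
TORCH_DISTRIBUTED_DEBUG=DETAIL bash tools/dist_train.sh configs/your_config.py 4
```

On the runtime question: find_unused_parameters=True makes DDP traverse the autograd graph after each backward pass to mark parameters that did not receive gradients, so it typically adds some per-iteration overhead.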
Could you tell me where to add the find_unused_parameters=True parameter? I tried adding it in train.py but encountered many issues. How did you modify it? I look forward to your reply. @w1oves @humian321
You can add it in the config file, just like the author added it at the end of the config file, as sketched below.
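A minimal sketch of that pattern, assuming an MMEngine-style Python config; everything except the last line is a placeholder:

```python
# your_config.py -- hypothetical config file; only the last line is the relevant addition.
_base_ = ['./base_config.py']  # placeholder for whatever base config the repo uses

# MMEngine's default runner reads this top-level flag and passes it to
# MMDistributedDataParallel, i.e. find_unused_parameters=True for multi-GPU training.
find_unused_parameters = True
```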
Thanks, it works.
This is a good paper and a very interesting idea! There is a training command for a single GPU in the README. For multi-GPU training, could you provide the corresponding command?