[Feature] Support mmseg with NPU backend. #2768
Conversation
Hi @luomaoling,
Codecov Report
Patch coverage:
Additional details and impacted files
@@ Coverage Diff @@
## master #2768 +/- ##
==========================================
- Coverage 88.36% 88.20% -0.17%
==========================================
Files 149 149
Lines 9109 9128 +19
Branches 1523 1527 +4
==========================================
+ Hits 8049 8051 +2
- Misses 810 826 +16
- Partials 250 251 +1
Flags with carried forward coverage won't be shown.
☔ View full report in Codecov by Sentry.
mmseg/utils/util_distribution.py
Outdated
@@ -33,6 +33,14 @@ def build_dp(model, device='cuda', dim=0, *args, **kwargs):
         dp_factory['mlu'] = MLUDataParallel
         model = model.mlu()

+    if device == 'npu':
Might add the `elif` statement to the code above.
I've revised it, thanks
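For context, a minimal sketch of what the revised branch might look like, assuming MMCV's `mmcv.device.npu.NPUDataParallel` wrapper (present in recent MMCV releases); this is an illustration, not the exact merged code:

```python
# Sketch only: fold the npu case into the existing if/elif device chain.
if device == 'cuda':
    model = model.cuda()
elif device == 'mlu':
    from mmcv.device.mlu import MLUDataParallel
    dp_factory['mlu'] = MLUDataParallel
    model = model.mlu()
elif device == 'npu':
    from mmcv.device.npu import NPUDataParallel  # assumed import path
    dp_factory['npu'] = NPUDataParallel
    model = model.npu()
```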
mmseg/utils/util_distribution.py
Outdated
@@ -53,7 +61,8 @@ def build_ddp(model, device='cuda', *args, **kwargs):
     .. [1] https://pytorch.org/docs/stable/generated/torch.nn.parallel.
            DistributedDataParallel.html
     """
-    assert device in ['cuda', 'mlu'], 'Only available for cuda or mlu devices.'
+    assert device in ['cuda', 'mlu', 'npu'], 'Only available for cuda '\
+        'or mlu devices.'
Might add `npu` to the error msg.
I've revised it, thanks
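The corrected assertion would plausibly read (a sketch matching the reviewer's suggestion):

```python
# Sketch: include 'npu' in both the device check and the error message.
assert device in ['cuda', 'mlu', 'npu'], \
    'Only available for cuda, mlu or npu devices.'
```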
mmseg/utils/util_distribution.py
Outdated
@@ -63,6 +72,14 @@ def build_ddp(model, device='cuda', *args, **kwargs):
         ddp_factory['mlu'] = MLUDistributedDataParallel
         model = model.mlu()

+    if device == 'npu':
Might use an `elif` statement.
I've revised it, thanks
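Similarly for `build_ddp`, a sketch of the `elif` form, assuming `mmcv.device.npu.NPUDistributedDataParallel` (an assumed import path, mirroring the MLU wrapper):

```python
# Sketch: same elif pattern for the distributed wrapper.
if device == 'cuda':
    model = model.cuda()
elif device == 'mlu':
    from mmcv.device.mlu import MLUDistributedDataParallel
    ddp_factory['mlu'] = MLUDistributedDataParallel
    model = model.mlu()
elif device == 'npu':
    from mmcv.device.npu import NPUDistributedDataParallel  # assumed path
    ddp_factory['npu'] = NPUDistributedDataParallel
    model = model.npu()
```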
tools/test.py
Outdated
optimizer_config = dict(type='Fp16OptimizerHook', loss_scale='dynamic')
cfg.optimizer_config = optimizer_config if \
    not cfg.optimizer_config else cfg.optimizer_config
When testing a model there is no runner and therefore no hooks, so this Fp16OptimizerHook setting would never take effect.
I've revised it, thanks!
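For reference, fp16 at test time is handled by casting the model directly rather than through an optimizer hook. A sketch of that pattern using MMCV's `wrap_fp16_model` (the helper mmseg's `tools/test.py` relies on; shown as an illustration, not as the exact change in this PR):

```python
# Sketch: at test time there is no runner, so optimizer hooks never fire.
# fp16 inference instead wraps the model weights/ops directly.
from mmcv.runner import wrap_fp16_model

fp16_cfg = cfg.get('fp16', None)
if fp16_cfg is not None:
    wrap_fp16_model(model)
```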
This reverts commit ae78cb9.
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and more likely to receive prompt feedback. If you do not understand some items, don't worry: just open the pull request and seek help from the maintainers.
Motivation
Added Ascend NPU device support in mmseg.
Modification
The main modification points are as follows:
We added NPU device support for both the DP and DDP scenarios, i.e. `build_dp` and `build_ddp` now accept `device='npu'`; a usage sketch follows below.
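A hedged usage sketch (not from the PR itself; `local_rank` and the device IDs are placeholders, and `torch_npu` plus the MMCV NPU wrappers are assumed to be installed):

```python
# Usage sketch for the new device option in mmseg.utils.
from mmseg.utils import build_dp, build_ddp

# DP scenario: single-process data parallel on NPU.
model = build_dp(model, device='npu', device_ids=[0])

# DDP scenario: after torch.distributed.init_process_group(...).
local_rank = 0  # placeholder; normally taken from the launcher
model = build_ddp(model, device='npu', device_ids=[local_rank],
                  broadcast_buffers=False)
```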
BC-breaking (Optional)
Does the modification introduce changes that break the backward-compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
None
Use cases (Optional)
We tested fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py.
Checklist