[Fix] Change `self.loss_decode` back to dict in Single Loss situation. #1002
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #1002      +/-   ##
==========================================
- Coverage   89.85%   89.75%   -0.11%
==========================================
  Files         118      118
  Lines        6558     6578      +20
  Branches     1019     1024       +5
==========================================
+ Hits         5893     5904      +11
- Misses        464      473       +9
  Partials      201      201

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
LGTM
[Fix] Change `self.loss_decode` back to dict in Single Loss situation. (open-mmlab#1002)

* fix single loss type
* fix error in ohem & point_head
* fix coverage miss
* fix uncoverage error of PointHead loss
* fix coverage miss
* fix uncoverage error of PointHead loss
* nn.modules.container.ModuleList to nn.ModuleList
* more simple format
* merge unittest def
In previous versions of MMSegmentation, `loss_decode` was a dict. When the multiple-losses pipeline was introduced, it was changed to an `nn.ModuleList()` to which each loss is appended, which raised a BC-breaking problem. Thus, we change it back: in the single-loss situation, `loss_decode` is again built from a dict as a single loss module, and `nn.ModuleList` is only used when multiple losses are configured.
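The sketch below illustrates the idea described above, but it is not the actual MMSegmentation implementation: `build_loss`, `DecodeHeadSketch`, and the simplified `losses` signature are stand-ins for the real registry-based builder and decode head.

```python
import torch.nn as nn


def build_loss(cfg):
    # Hypothetical stand-in for MMSegmentation's registry-based build_loss.
    loss_map = {'CrossEntropyLoss': nn.CrossEntropyLoss}
    return loss_map[cfg['type']]()


class DecodeHeadSketch(nn.Module):
    def __init__(self, loss_decode=dict(type='CrossEntropyLoss')):
        super().__init__()
        if isinstance(loss_decode, dict):
            # Single-loss case: keep a plain loss module, as in older versions.
            self.loss_decode = build_loss(loss_decode)
        elif isinstance(loss_decode, (list, tuple)):
            # Multi-loss case: collect every configured loss in an nn.ModuleList.
            self.loss_decode = nn.ModuleList(build_loss(cfg) for cfg in loss_decode)
        else:
            raise TypeError(
                'loss_decode must be a dict or a sequence of dicts, '
                f'but got {type(loss_decode)}')

    def losses(self, seg_logit, seg_label):
        # Wrap the single-loss case in a list so both cases iterate the same way.
        losses_decode = (self.loss_decode
                         if isinstance(self.loss_decode, nn.ModuleList)
                         else [self.loss_decode])
        return sum(loss(seg_logit, seg_label) for loss in losses_decode)
```

Under this sketch, passing `loss_decode=dict(type='CrossEntropyLoss')` leaves `self.loss_decode` as a plain loss module, so older code that accesses it directly keeps working, while passing a list of loss dicts yields an `nn.ModuleList` for the multi-loss pipeline.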