
Modify dynunet forward function #1596

Merged
merged 2 commits into Project-MONAI:master on Feb 19, 2021

Conversation

yiheng-wang-nv
Contributor

@yiheng-wang-nv yiheng-wang-nv commented Feb 19, 2021

Signed-off-by: Yiheng Wang vennw@nvidia.com

Fixes #1564.

Description

Hi @rijobro @wyli @Nic-Ma @ericspod, after the discussion I made some changes to the forward function of DynUNet. If we kept the list-based return format, the default sliding window inferer could not work, so I decided to return a single tensor in both train and eval modes. This change solves the DDP issue and meets the restrictions of TorchScript.

The change is that in deep supervision mode, all feature maps are interpolated to the same size as the last feature map and then stacked together into a single tensor (see the sketch below).
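A minimal sketch of the idea (not the exact implementation; the helper name and the nearest interpolation mode here are assumptions):

```python
import torch
import torch.nn.functional as F

def stack_supervision_heads(head_outputs):
    # head_outputs: list of prediction tensors from the supervision heads,
    # with the last element being the final (highest-resolution) feature map
    final = head_outputs[-1]
    # resize every earlier head to the spatial size of the final feature map
    resized = [
        F.interpolate(x, size=final.shape[2:], mode="nearest")
        for x in head_outputs[:-1]
    ]
    # stack along a new dimension -> shape (B, num_heads, C, *spatial),
    # so forward() can return one tensor in both train and eval modes
    return torch.stack([final] + resized, dim=1)
```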

Consequently, the loss calculation step changes: in the original implementation, the ground truth was interpolated to each feature map's size before computing the loss, whereas in this PR the loss is computed between the full-size ground truth and each interpolated feature map (see the sketch after this paragraph). The two approaches differ slightly, but according to my simple test the performance on task 04 is not reduced.
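And a minimal sketch of how the loss step could consume the stacked output (the function and argument names are illustrative, not the actual code):

```python
import torch

def deep_supervision_loss(loss_fn, stacked_preds, label, weights=None):
    # stacked_preds: (B, num_heads, C, *spatial) tensor returned by forward()
    # label: ground truth at the full (final feature map) resolution
    heads = torch.unbind(stacked_preds, dim=1)
    if weights is None:
        weights = [1.0] * len(heads)
    # every head is already at full resolution, so the same label is reused
    return sum(w * loss_fn(p, label) for w, p in zip(weights, heads))
```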

Do you think we can make the change in this way?

Status

Ready

Types of changes

  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh --codeformat --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

@yiheng-wang-nv yiheng-wang-nv marked this pull request as ready for review February 19, 2021 15:43
Contributor

@wyli wyli left a comment


Thanks, this solution looks nice to me. Just wanted to note that interpolate takes some additional arguments (https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.interpolate) which are not tunable in this PR; perhaps we could expand the API to support them if we get further feature requests.
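For example, a couple of the interpolate options that are currently fixed (the shapes here are arbitrary, just an illustration):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 8, 8, 8)  # (B, C, D, H, W)
# mode and align_corners are examples of options not exposed by this PR
y = F.interpolate(x, size=(16, 16, 16), mode="trilinear", align_corners=False)
```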

Contributor

@Nic-Ma Nic-Ma left a comment


Looks good to me.
Please also update the DynUNet tutorial notebook.

Thanks.

@wyli wyli merged commit 3f16c21 into Project-MONAI:master Feb 19, 2021
@yiheng-wang-nv yiheng-wang-nv deleted the 1564-dynunet-change-featmap branch February 22, 2021 05:16
@yiheng-wang-nv
Contributor Author


Thanks @Nic-Ma, @rijobro, and @wyli for the review. I have submitted a PR to update the tutorial, as you can see here.

Successfully merging this pull request may close these issues.

DynUnet get_feature_maps() issue when using distributed training