Signed-off-by: Yiheng Wang vennw@nvidia.com
Fixes #1564.
Description
Hi @rijobro @wyli @Nic-Ma @ericspod, after the discussion, I made some changes to the forward function of DynUNet. If we use the list-based return format, the default sliding window inferer cannot work, so I decided to return a single tensor in both train and eval modes. This change solves the DDP issue and meets the restrictions of TorchScript.
The change is that in deep supervision mode, all feature maps are interpolated to the same spatial size as the last feature map and then stacked together into a single tensor.
The loss calculation changes accordingly: in the original implementation, the ground truth was interpolated to each feature map's size before computing the loss; in this PR's implementation, the loss is computed between the ground truth and each interpolated feature map. The two approaches differ slightly, but according to my simple test, the performance on task 04 is not reduced.
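To make the idea concrete, here is a minimal sketch of the approach described above. The function and parameter names (`stack_supervision_heads`, `deep_supervision_loss`, `weights`) are hypothetical illustrations, not the actual DynUNet API; the key points are that all heads are resized to the last map's spatial size and stacked into one tensor, and that the ground truth is compared with each interpolated head directly instead of being resized itself:

```python
import torch
import torch.nn.functional as F


def stack_supervision_heads(feature_maps):
    """Interpolate all deep-supervision feature maps to the spatial size of
    the last (full-resolution) map and stack them into a single tensor.

    feature_maps: list of tensors shaped (N, C, H, W), coarsest first.
    Returns a tensor shaped (N, num_heads, C, H, W).
    """
    target_size = feature_maps[-1].shape[2:]
    resized = [
        F.interpolate(fm, size=target_size, mode="bilinear", align_corners=False)
        for fm in feature_maps
    ]
    # Stacking along a new dim keeps the output a single tensor, so it is
    # compatible with sliding window inference and TorchScript.
    return torch.stack(resized, dim=1)


def deep_supervision_loss(stacked, ground_truth, loss_fn, weights=None):
    """Weighted sum of loss_fn over each interpolated head; the ground truth
    is compared with each head as-is, without being resized."""
    num_heads = stacked.shape[1]
    if weights is None:
        weights = [1.0 / num_heads] * num_heads
    total = 0.0
    for i in range(num_heads):
        total = total + weights[i] * loss_fn(stacked[:, i], ground_truth)
    return total


# Example: three heads at increasing resolutions are stacked to (N, 3, C, H, W).
heads = [torch.randn(1, 2, s, s) for s in (8, 16, 32)]
stacked = stack_supervision_heads(heads)
gt = torch.zeros(1, 2, 32, 32)
loss = deep_supervision_loss(stacked, gt, F.mse_loss)
```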
Do you think we can change it in this way?
Status
Ready
Types of changes
./runtests.sh --codeformat --coverage
./runtests.sh --quick
make html command in the docs/ folder.