
merge master #372

Merged
428 commits merged on Feb 18, 2022

Changes from 1 commit
5c8f3bb
[DLMED] fix fc error (#3324)
Nic-Ma Nov 12, 2021
6f657a4
Add affine param to Affine transform (#3313) (#3318)
Spenhouet Nov 12, 2021
d35759a
3296 adds a flag for invertible on/off, and decouples transform stack…
wyli Nov 12, 2021
9ddc9e6
3313 Add `affine` arg to the dict transform (#3326)
Nic-Ma Nov 12, 2021
e148cfd
3284 torch version check (#3285)
wyli Nov 12, 2021
af21f11
3335 update clang-format download (#3340)
wyli Nov 15, 2021
8a454bd
fixes #3331 (#3334)
wyli Nov 15, 2021
4d83fc0
[DLMED] enhance metrics in workflow (#3341)
Nic-Ma Nov 15, 2021
9ec1d14
MIL component to extract patches (#3237)
myron Nov 15, 2021
45d4a61
MIL Component - MILModel (#3236)
myron Nov 16, 2021
4c2c1dc
3329 compatibility with pathlike obj (#3332)
wyli Nov 16, 2021
b7daf80
3292 Add support for no keys in all the transforms when allow_missing…
Nic-Ma Nov 16, 2021
0a7a4f1
[DLMED] refine doc (#3349)
Nic-Ma Nov 17, 2021
b86d751
fixes mednist dataset (#3348)
wyli Nov 17, 2021
33b6d61
close file (#3342)
wyli Nov 17, 2021
2e83cd2
3350 drops pytorch 1.5.x support (#3353)
wyli Nov 18, 2021
bf55246
3322 Optimize performance of the astype usage in arrays (#3338)
Nic-Ma Nov 18, 2021
6a20052
3346 Simplify AsDiscrete transform (#3352)
Nic-Ma Nov 18, 2021
c3fd5a9
Fix casting for tile step (#3355)
bhashemian Nov 18, 2021
f7e4cc3
remove dynunetv1 blocks (#3360)
wyli Nov 18, 2021
79bca25
3357 Add wrap_sequence to several transforms (#3358)
Nic-Ma Nov 19, 2021
d89807f
[DLMED] enhance doc-string (#3363)
Nic-Ma Nov 19, 2021
a4e7ba0
Update setup.cfg with tifffile and imagecodes (#3362)
bhashemian Nov 19, 2021
9048166
unit tests for step (#3361)
myron Nov 19, 2021
e36fbd3
backward compatible as discrete (#3367)
wyli Nov 19, 2021
b01c63e
Smooth field (#3252)
ericspod Nov 19, 2021
73d9afa
adds dints blocks (#3372)
dongyang0122 Nov 19, 2021
5e54d3e
Fixes typos in documentation (#3364)
wyli Nov 20, 2021
d0e72fb
[DLMED] add frame_dim (#3369)
Nic-Ma Nov 20, 2021
37930a9
3370 option of zip_longest to decollate (#3373)
wyli Nov 22, 2021
5adf30b
Numpy Implementation of SplitOnGrid (#3378)
bhashemian Nov 22, 2021
5d87b23
`TileOnGrid` support for `Tensor` input (#3384)
bhashemian Nov 23, 2021
0cfecbb
Fix SplitOnGrid issue (#3386)
bhashemian Nov 23, 2021
9cc339d
3387 fix threshold value issue in AsDiscrete (#3388)
Nic-Ma Nov 23, 2021
5f68a22
Adding repeats option to ThreadDataLoader (#3389)
ericspod Nov 23, 2021
839f7ef
fixes readme links (#3380)
wyli Nov 23, 2021
721f6f7
3387 Fix 0.0 or None value in AsDiscrete (#3393)
Nic-Ma Nov 24, 2021
8c90c59
add dints model (#3344)
dongyang0122 Nov 24, 2021
a640dde
3368 add `frame_dim` to TensorBoard plot utility (#3385)
Nic-Ma Nov 25, 2021
058bb40
3366 - whats new in 0.8 (#3383)
wyli Nov 25, 2021
d4c1bbe
update changelog for v0.8.0 (#3382)
wyli Nov 25, 2021
3915492
3365 update highlight 0.8 (#3381)
Nic-Ma Nov 25, 2021
dca8b1c
[DLMED] postpone remove version (#3399)
Nic-Ma Nov 25, 2021
d625b61
fixes typos/style in dints.py (#3395)
wyli Nov 25, 2021
714d00d
3404 fixes crop types (#3403)
wyli Nov 25, 2021
1763381
update weekly build prefix (#3406)
wyli Nov 26, 2021
4a3db69
3405 Enhance doc-string of MIL model for data shape (#3407)
Nic-Ma Nov 26, 2021
7d4386f
fixes docstring ref. (#3408)
wyli Nov 26, 2021
8a1454a
3398 Support 3D RGB image in matshow3d (#3400)
Nic-Ma Nov 26, 2021
12f74d3
3402 Add support to set other pickle related args (#3412)
Nic-Ma Nov 29, 2021
071264d
fixes pickle mod (#3416)
wyli Nov 30, 2021
ff9bbfa
[DLMED] add args and update default (#3418)
Nic-Ma Nov 30, 2021
fc9abb9
3392 enhance reduction doc-string for Nan values (#3424)
Nic-Ma Dec 1, 2021
0b077da
Passing the activation function of the SegResNet to the Residual Bloc…
patricio-astudillo Dec 1, 2021
f94e768
3194 update vitautoenc for 2d (#3420)
wyli Dec 1, 2021
ad0f866
3415 Update WSIReader (#3417)
bhashemian Dec 2, 2021
035f91c
update create_file_basename (#3436)
wyli Dec 2, 2021
f6a0c87
Update TiffFile backend in WSIReader (#3438)
bhashemian Dec 2, 2021
0935d5a
3429 Enhance the scalar write logic of TensorBoardStatsHandler (#3431)
Nic-Ma Dec 3, 2021
f3e7cc0
3430 support dataframes and streams in CSVDataset (#3440)
Nic-Ma Dec 6, 2021
8a47227
Add base class for workflows (#3445)
Nic-Ma Dec 7, 2021
a17813b
Enhance deprecated arg for kwargs in CSV datasets (#3446)
Nic-Ma Dec 7, 2021
98c1c43
3293 Remove extra deep supervision modules of DynUNet (#3427)
yiheng-wang-nv Dec 7, 2021
29e9ab3
471- fixes deprecated args (#3447)
wyli Dec 7, 2021
a7bc4a3
improve error message if reader not available (#3457)
rijobro Dec 8, 2021
d2543ad
adds the missing imports (#3462)
wyli Dec 9, 2021
82576e7
revise MILModel docstring (#3459)
wyli Dec 9, 2021
4b5ad0b
deprecate reduction (#3464)
wyli Dec 9, 2021
5b36d02
3444 Add DatasetFunc (#3456)
Nic-Ma Dec 10, 2021
89f0bd4
Add missing components to API doc (#3468)
Nic-Ma Dec 10, 2021
d386213
3466 3467 Add `channel_wise` and correct doc-string (#3469)
Nic-Ma Dec 12, 2021
a4c1300
[DLMED] remove cls (#3475)
Nic-Ma Dec 13, 2021
360c52f
Add Iteration base class (#3472)
Nic-Ma Dec 13, 2021
cb70e13
fix link error (#3488)
yiheng-wang-nv Dec 14, 2021
9ef0070
3465 Support string as dtype (#3478)
Nic-Ma Dec 14, 2021
2cbb54c
Add Iteration base class (#3472)
Nic-Ma Dec 13, 2021
45af170
fix link error (#3488)
yiheng-wang-nv Dec 14, 2021
9ccc5ee
3465 Support string as dtype (#3478)
Nic-Ma Dec 14, 2021
7a66f18
[DLMED] update to 0.4.7 (#3483)
Nic-Ma Dec 14, 2021
f6648a3
Improve NVTX Range Naming (#3484)
bhashemian Dec 14, 2021
9927691
3471 3491 Add example images for intensity transforms (#3494)
Nic-Ma Dec 15, 2021
88d91d6
Make bending energy loss invariant to resolution (#3493)
ebrahimebrahim Dec 15, 2021
3761ee0
Removes redundant casting -- ensure tuple (#3495)
wyli Dec 16, 2021
a255648
3498 Correct `kwargs` arg for `convert_to_torchscript` (#3499)
Nic-Ma Dec 16, 2021
531c276
update tests with 1.10.1 (#3500)
wyli Dec 17, 2021
d663d78
3501 Add dict version SavitzkyGolaySmoothd (#3502)
Nic-Ma Dec 17, 2021
d2861a2
remove file (#3507)
wyli Dec 20, 2021
29779d1
avoid 60.0.0 (#3514)
wyli Dec 20, 2021
c9d38f0
[DLMED] add 6 new transform images (#3512)
Nic-Ma Dec 20, 2021
1d3a9d7
support of reversed indexing (#3508)
wyli Dec 20, 2021
9734478
Adding Torchscript utility functions (#3138)
ericspod Dec 21, 2021
8e2ade5
3517 Refine AddCoordinateChannels transform (#3524)
Nic-Ma Dec 21, 2021
1516ca7
3521 Copyright header update (#3522)
wyli Dec 21, 2021
299a0d1
3521 - adds a util to check the licence info (#3523)
wyli Dec 21, 2021
58ded72
3350 Remove PyTorch 1.5.x related logic and mark versions for all new…
Nic-Ma Dec 21, 2021
08f9bac
3533 Update PyTorch docker to 21.12 (#3534)
Nic-Ma Dec 22, 2021
9caa1d0
3531 Add args to subclass of CacheDataset (#3532)
Nic-Ma Dec 22, 2021
6767959
3525 Fix invertible issue in OneOf compose (#3530)
Nic-Ma Dec 22, 2021
e655b4e
3535 - drop python 36 support (#3536)
wyli Dec 23, 2021
21c5f6d
3541 has cupy check (#3544)
wyli Dec 24, 2021
7f23f38
3053 release *_dist.py tests memory to avoid OOM (#3537)
wyli Dec 24, 2021
29bc93b
3539 Remove decollate warning (#3545)
Nic-Ma Dec 24, 2021
df75199
[DLMED] enhance set_determinism (#3547)
Nic-Ma Dec 24, 2021
2e492a9
Smooth Deform (#3551)
ericspod Dec 30, 2021
05f8219
3552 - runtest.sh defaults to no build/install (#3555)
wyli Dec 30, 2021
c8e7fe1
Remove apply_same_field (#3556)
ericspod Dec 30, 2021
885d5b9
skipping pretraining network loading when downloading is unsuccessful…
wyli Dec 31, 2021
ba23afd
[DLMED] fix mypy errors (#3562)
Nic-Ma Jan 1, 2022
adca1bb
3559 Enhance `DatasetSummary` for several points (#3560)
Nic-Ma Jan 1, 2022
9417ff2
[pre-commit.ci] pre-commit suggestions (#3568)
pre-commit-ci[bot] Jan 4, 2022
bf133e6
Improve NVTX Range Naming (#3484)
bhashemian Dec 14, 2021
64c69ca
Removes redundant casting -- ensure tuple (#3495)
wyli Dec 16, 2021
6ac68ce
avoid 60.0.0 (#3514)
wyli Dec 20, 2021
bf344c6
3525 Fix invertible issue in OneOf compose (#3530)
Nic-Ma Dec 22, 2021
1825acd
3539 Remove decollate warning (#3545)
Nic-Ma Dec 24, 2021
cb4dd49
[DLMED] enhance set_determinism (#3547)
Nic-Ma Dec 24, 2021
5bce120
skipping pretraining network loading when downloading is unsuccessful…
wyli Dec 31, 2021
8f317dd
[DLMED] fix mypy errors (#3562)
Nic-Ma Jan 1, 2022
a7835ab
3565 - adds metadata when loading dicom series (#3566)
wyli Jan 4, 2022
bb4ad5f
3580 - create codeql-analysis.yml (#3579)
wyli Jan 5, 2022
4bd13fe
498 Add logger_handler to LrScheduleHandler (#3570)
Nic-Ma Jan 5, 2022
cdcd524
update issue template for questions (#3597)
wyli Jan 6, 2022
91777d0
3581 Enhance `slice_channels` utility function (#3585)
Nic-Ma Jan 7, 2022
1913086
3600 Apply unused `kwargs` for 3rd party APIs in WSIReader (#3601)
Nic-Ma Jan 7, 2022
a93b8cc
conda environment yaml file (#3584)
rijobro Jan 7, 2022
3269851
3610 Add ascontiguous utility and check in CacheDataset (#3614)
Nic-Ma Jan 9, 2022
e163992
3609 - conda environment-dev.yml test (#3612)
wyli Jan 9, 2022
a0d2558
3616 test downloading issues (#3617)
wyli Jan 9, 2022
24404f4
[DLMED] fix dtype issue (#3625)
Nic-Ma Jan 10, 2022
3c6ee69
3628 0-d array to_contiguous (#3629)
wyli Jan 10, 2022
82d2f16
3559 Enhance `DatasetSummary` for several points (#3560)
Nic-Ma Jan 1, 2022
4687a0c
3565 - adds metadata when loading dicom series (#3566)
wyli Jan 4, 2022
c550810
498 Add logger_handler to LrScheduleHandler (#3570)
Nic-Ma Jan 5, 2022
fe1a259
3581 Enhance `slice_channels` utility function (#3585)
Nic-Ma Jan 7, 2022
a30f170
3600 Apply unused `kwargs` for 3rd party APIs in WSIReader (#3601)
Nic-Ma Jan 7, 2022
941f7d3
3610 Add ascontiguous utility and check in CacheDataset (#3614)
Nic-Ma Jan 9, 2022
5353d39
[DLMED] fix dtype issue (#3625)
Nic-Ma Jan 10, 2022
5807f5d
3628 0-d array to_contiguous (#3629)
wyli Jan 10, 2022
2fef7ff
3624 Add kwargs to torch APIs of utilities (#3631)
Nic-Ma Jan 10, 2022
83f8b06
capture windows permission error (#3633)
wyli Jan 11, 2022
be1c362
3632 Enhance GridPatchDataset for iterable API and data source (#3636)
Nic-Ma Jan 11, 2022
523a047
[DLMED] add missing doc-string (#3646)
Nic-Ma Jan 12, 2022
12955c5
3621 update get_rank calls (#3641)
wyli Jan 12, 2022
d58e234
update meshgrid (#3644)
wyli Jan 12, 2022
35c2b37
`EnsureChannelFirst`: avoid re-creation of `AddChannel` (#3649)
rijobro Jan 12, 2022
cd177c0
3578 Support single channel with OneHot format in KeepLargestConnecte…
Nic-Ma Jan 13, 2022
459b081
moveaxis and typing enhancements (#3648)
wyli Jan 13, 2022
f089659
3648 Enhance correct centers for crop (#3652)
Nic-Ma Jan 13, 2022
3a26702
option to occlude all channels simultaneously (#3543)
rijobro Jan 13, 2022
ceb71ab
removes duplicated docstring rendering (#3651)
wyli Jan 13, 2022
b9ef673
black with pip (#3653)
rijobro Jan 13, 2022
4ffe588
lint tests (#3656)
rijobro Jan 13, 2022
6092b4d
pytype working for macOS (#3657)
rijobro Jan 13, 2022
d646120
Delete testdata.nrrd (#3658)
wyli Jan 13, 2022
a8ff27c
3654 remove some flake8 errors ignore (#3659)
wyli Jan 14, 2022
a644b40
3661 Add default values to CopyItems transform (#3662)
Nic-Ma Jan 14, 2022
40257f5
[DLMED] add label transform (#3666)
Nic-Ma Jan 14, 2022
e0db5a5
install from/with conda (#3667)
rijobro Jan 14, 2022
b2cc166
DictPostFixes (#3671)
rijobro Jan 18, 2022
c327cfa
[DLMED] change to warning (#3675)
Nic-Ma Jan 18, 2022
c2b6459
3595 - adds a folder layout class (#3655)
wyli Jan 18, 2022
b4f8ff1
3672 Add more messages for AUC warning (#3676)
Nic-Ma Jan 19, 2022
2500edd
3670 Fix channel_dim in ITKReader and add in it NibabelReader (#3678)
Nic-Ma Jan 19, 2022
ab14b8b
Update make_nifti (#3682)
rijobro Jan 19, 2022
50c201c
support stack of images with channels (#3680)
wyli Jan 20, 2022
e96dcca
3686 Skip workflow run if data is empty or the specified epoch_length…
Nic-Ma Jan 21, 2022
a151ca8
capture windows permission error (#3633)
wyli Jan 11, 2022
c77c0b1
3632 Enhance GridPatchDataset for iterable API and data source (#3636)
Nic-Ma Jan 11, 2022
a9d6dde
[DLMED] add missing doc-string (#3646)
Nic-Ma Jan 12, 2022
fab01ae
3578 Support single channel with OneHot format in KeepLargestConnecte…
Nic-Ma Jan 13, 2022
ecc8489
3648 Enhance correct centers for crop (#3652)
Nic-Ma Jan 13, 2022
8888458
Delete testdata.nrrd (#3658)
wyli Jan 13, 2022
eb8d3b2
3661 Add default values to CopyItems transform (#3662)
Nic-Ma Jan 14, 2022
be10900
[DLMED] add label transform (#3666)
Nic-Ma Jan 14, 2022
8a25812
[DLMED] change to warning (#3675)
Nic-Ma Jan 18, 2022
967c643
3672 Add more messages for AUC warning (#3676)
Nic-Ma Jan 19, 2022
fe757be
3670 Fix channel_dim in ITKReader and add in it NibabelReader (#3678)
Nic-Ma Jan 19, 2022
821a7c5
support stack of images with channels (#3680)
wyli Jan 20, 2022
8d362c1
3686 Skip workflow run if data is empty or the specified epoch_length…
Nic-Ma Jan 21, 2022
25ecc28
remove tensor-array conversion in `Orientation` (#3687)
wyli Jan 21, 2022
24b1a8f
3695 Update CI to ignite 0.4.8 (#3696)
Nic-Ma Jan 21, 2022
32a045d
adds a dev mode collate for diagnostic info (#3684)
wyli Jan 21, 2022
3907cb4
add benchmarking warp against python itk (#3692)
kate-sann5100 Jan 24, 2022
6bb97bf
Fix differentiability of generalized Dice loss (#3619)
josafatburmeister Jan 24, 2022
c2018cf
compiler warnings (#3698)
wyli Jan 24, 2022
3180c27
try to fix #3621 (#3673)
wyli Jan 25, 2022
d36b835
3710 Add support to set delimiter for CSV files and change default to…
Nic-Ma Jan 25, 2022
927483b
[DLMED] fix mode issue (#3715)
Nic-Ma Jan 26, 2022
0996aab
fixes `grid_sample`, `interpolate` URLs (#3712)
wyli Jan 26, 2022
8227cae
3716 enhance doc-strings of DenseNet (#3717)
yiheng-wang-nv Jan 27, 2022
4aa3596
pip mypy (#3722)
rijobro Jan 27, 2022
b1a96c5
update bbox docstring (#3724)
wyli Jan 27, 2022
db61a08
3725 fixes download test (#3728)
wyli Jan 28, 2022
f08f1d3
Remove assert statement from non-test files (#3745)
deepsource-autofix[bot] Jan 30, 2022
ba9cd45
3750 skip setuptool 60.6 (#3751)
wyli Jan 31, 2022
19d5d8d
Update installation.md (#3758)
rijobro Feb 1, 2022
e0643ab
3753 Update PyTorch base docker to 22.01 (#3754)
Nic-Ma Feb 1, 2022
ec5c9d5
3620 intensity range percentiles (#3685)
wyli Feb 1, 2022
6f16823
3734 Enhance `CacheDataset` to avoid duplicated cache (#3739)
Nic-Ma Feb 1, 2022
397d511
3732 Update TTA module based on latest features (#3733)
Nic-Ma Feb 1, 2022
6c75550
3747 Enhance errors and docs according to Youtube feedback (#3759)
Nic-Ma Feb 3, 2022
5cce0af
3697 add spatial resample (#3701)
wyli Feb 3, 2022
c12a54a
Fix the TTA issue related to PadListCollate logic (#3762)
Nic-Ma Feb 4, 2022
d050ea5
Update offset assign (#3764)
wyli Feb 4, 2022
816f413
2823 enhance convert_data/dst_type typing (#3749)
wyli Feb 4, 2022
62b3f6e
3765 Enhance `create_multigpu_supervised_XXX` for distributed (#3768)
Nic-Ma Feb 4, 2022
b61db79
3763 Enhance the doc of `ThreadDataLoader` for `num_workers` (#3770)
Nic-Ma Feb 6, 2022
fa066fb
update blossom config (#3771)
wyli Feb 7, 2022
afcf593
3595 3766 adds a base writer and an itk writer (#3674)
wyli Feb 7, 2022
3c96cb8
revert workaround (#3778)
wyli Feb 8, 2022
a4a6c95
3769 Enhance logger logic of StatsHandler and DataStats (#3774)
Nic-Ma Feb 8, 2022
4d0baa0
3595 Adds nibabel/pil writers (#3772)
wyli Feb 8, 2022
70f8f66
remove tensor-array conversion in `Orientation` (#3687)
wyli Jan 21, 2022
95db697
3695 Update CI to ignite 0.4.8 (#3696)
Nic-Ma Jan 21, 2022
8d7944f
Fix differentiability of generalized Dice loss (#3619)
josafatburmeister Jan 24, 2022
84f25ff
compiler warnings (#3698)
wyli Jan 24, 2022
c3aea07
3710 Add support to set delimiter for CSV files and change default to…
Nic-Ma Jan 25, 2022
533d049
[DLMED] fix mode issue (#3715)
Nic-Ma Jan 26, 2022
c2dbd08
fixes `grid_sample`, `interpolate` URLs (#3712)
wyli Jan 26, 2022
7c748ae
3716 enhance doc-strings of DenseNet (#3717)
yiheng-wang-nv Jan 27, 2022
904a937
update bbox docstring (#3724)
wyli Jan 27, 2022
d41d45b
3750 skip setuptool 60.6 (#3751)
wyli Jan 31, 2022
df25a23
3734 Enhance `CacheDataset` to avoid duplicated cache (#3739)
Nic-Ma Feb 1, 2022
1d693a7
3747 Enhance errors and docs according to Youtube feedback (#3759)
Nic-Ma Feb 3, 2022
227bad0
3765 Enhance `create_multigpu_supervised_XXX` for distributed (#3768)
Nic-Ma Feb 4, 2022
0dbf06c
3763 Enhance the doc of `ThreadDataLoader` for `num_workers` (#3770)
Nic-Ma Feb 6, 2022
8db6307
3769 Enhance logger logic of StatsHandler and DataStats (#3774)
Nic-Ma Feb 8, 2022
f9e68d2
autoformatting
wyli Feb 9, 2022
4826824
tests workaround
wyli Feb 9, 2022
d391b16
3616 test downloading issues (#3617)
wyli Jan 9, 2022
d27ab6c
3725 fixes download test (#3728)
wyli Jan 28, 2022
7c9fefb
3744 Add `save_state` utility to handle saving logic (#3780)
Nic-Ma Feb 9, 2022
3785171
3744 Add `save_state` utility to handle saving logic (#3780)
Nic-Ma Feb 9, 2022
0be3341
3432 make vit support torchscript (#3782)
yiheng-wang-nv Feb 10, 2022
127e823
2620 3595 Writer backend selector, deprecating nifti_saver/writer, pn…
wyli Feb 10, 2022
e3a9730
[DLMED] add default value (#3785)
Nic-Ma Feb 11, 2022
e2fcb97
Add GPU-enabled function "get_largest_connected_component_mask" (#3677)
dongyang0122 Feb 11, 2022
2d35301
3795 Add `ensure_channel_first` to `LoadImage` (#3796)
Nic-Ma Feb 12, 2022
eb947e8
adds alias check (#3794)
wyli Feb 12, 2022
42603b3
3791 orientation xform warning spatial dims (#3792)
wyli Feb 13, 2022
817c1cf
Update blossom-ci allow list (#3800)
pxLi Feb 14, 2022
b39e7fa
change raise error to warnings for some metrics (#3804)
yiheng-wang-nv Feb 14, 2022
e21e6ee
following https://github.com/Project-MONAI/tutorials/issues/545 (#3802)
wyli Feb 14, 2022
b2bd500
3791 orientation xform warning spatial dims (#3792)
wyli Feb 13, 2022
4e09627
change raise error to warnings for some metrics (#3804)
yiheng-wang-nv Feb 14, 2022
3a08450
following https://github.com/Project-MONAI/tutorials/issues/545 (#3802)
wyli Feb 14, 2022
9ae183b
update changelog for v0.8.1 and prepare apps.mmar module for the upda…
wyli Feb 16, 2022
71ff399
update changelog for v0.8.1 and prepare apps.mmar module for the upda…
wyli Feb 16, 2022
046e625
Merge branch 'dev' into releasing/0.8.1
wyli Feb 16, 2022
894e989
3482 Add `ConfigComponent` for config parsing (#3720)
Nic-Ma Feb 18, 2022
3732 Update TTA module based on latest features (Project-MONAI#3733)
* [DLMED] refine TTA

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] enhance TTA

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] fix flake8

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] refine example in doc-string

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update according to comments

Signed-off-by: Nic Ma <nma@nvidia.com>

* add to optimize tta

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

* update

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

* .item for np.ndarray too

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

* [DLMED] fix flake8

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update according to comments

Signed-off-by: Nic Ma <nma@nvidia.com>

Co-authored-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Co-authored-by: Wenqi Li <wenqil@nvidia.com>
3 people authored Feb 1, 2022
commit 397d511f53c1256cea7c5f085c0ae1d12620793d
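The summary statistics this refactor computes (per-voxel mode, mean, and std across realisations, plus the volume variation coefficient VVC = std/mean over the whole stack) can be sketched in plain NumPy. `tta_summary` below is a hypothetical helper for illustration only, not part of the MONAI API:

```python
import numpy as np

def tta_summary(outputs):
    """Summarise a list of TTA realisations (all the same shape) the way the
    refactored TestTimeAugmentation does: per-voxel mode/mean/std across the
    realisation axis, plus VVC = std/mean over the whole stack."""
    output = np.stack(outputs, 0)  # shape [N, ...]
    # per-voxel mode over the realisation axis, cast to int as `to_long=True` does
    as_long = output.astype(np.int64)
    mode = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, as_long)
    mean = output.mean(0)
    std = output.std(0)
    vvc = float(output.std() / output.mean())
    return mode, mean, std, vvc

preds = [np.array([[0.0, 1.0], [1.0, 1.0]]),
         np.array([[0.0, 1.0], [0.0, 1.0]]),
         np.array([[0.0, 0.0], [1.0, 1.0]])]
mode, mean, std, vvc = tta_summary(preds)
print(mode.tolist())  # [[0, 1], [1, 1]]
```

The per-voxel mode here breaks ties toward the smaller value (via `np.bincount(...).argmax()`); the torch-backed implementation in the diff delegates that choice to `torch.mode`.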
118 changes: 50 additions & 68 deletions monai/data/test_time_augmentation.py
Original file line number Diff line number Diff line change
@@ -16,17 +16,16 @@
import numpy as np
import torch

from monai.config.type_definitions import NdarrayOrTensor
from monai.data.dataloader import DataLoader
from monai.data.dataset import Dataset
from monai.data.utils import list_data_collate, pad_list_data_collate
from monai.data.utils import decollate_batch, pad_list_data_collate
from monai.transforms.compose import Compose
from monai.transforms.inverse import InvertibleTransform
from monai.transforms.inverse_batch_transform import BatchInverseTransform
from monai.transforms.post.dictionary import Invertd
from monai.transforms.transform import Randomizable
from monai.transforms.utils import allow_missing_keys_mode, convert_inverse_interp_mode
from monai.utils.enums import CommonKeys, PostFix, TraceKeys
from monai.utils.module import optional_import
from monai.utils.type_conversion import convert_data_type
from monai.transforms.utils_pytorch_numpy_unification import mode, stack
from monai.utils import CommonKeys, PostFix, optional_import

if TYPE_CHECKING:
from tqdm import tqdm
@@ -80,19 +79,24 @@ class TestTimeAugmentation:
For example, to handle key `image`, read/write affine matrices from the
metadata `image_meta_dict` dictionary's `affine` field.
this arg only works when `meta_keys=None`.
return_full_data: normally, metrics are returned (mode, mean, std, vvc). Setting this flag to `True` will return the
full data. Dimensions will be same size as when passing a single image through `inferrer_fn`, with a dimension appended
equal in size to `num_examples` (N), i.e., `[N,C,H,W,[D]]`.
to_tensor: whether to convert the inverted data into PyTorch Tensor first, default to `True`.
output_device: if converted the inverted data to Tensor, move the inverted results to target device
before `post_func`, default to "cpu".
post_func: post processing for the inverted data, should be a callable function.
return_full_data: normally, metrics are returned (mode, mean, std, vvc). Setting this flag to `True`
will return the full data. Dimensions will be same size as when passing a single image through
`inferrer_fn`, with a dimension appended equal in size to `num_examples` (N), i.e., `[N,C,H,W,[D]]`.
progress: whether to display a progress bar.

Example:
.. code-block:: python

transform = RandAffined(keys, ...)
post_trans = Compose([Activations(sigmoid=True), AsDiscrete(threshold=0.5)])
model = UNet(...).to(device)
transform = Compose([RandAffined(keys, ...), ...])
transform.set_random_state(seed=123) # ensure deterministic evaluation

tt_aug = TestTimeAugmentation(
transform, batch_size=5, num_workers=0, inferrer_fn=lambda x: post_trans(model(x)), device=device
transform, batch_size=5, num_workers=0, inferrer_fn=model, device=device
)
mode, mean, std, vvc = tt_aug(test_data)
"""
@@ -109,6 +113,9 @@ def __init__(
nearest_interp: bool = True,
orig_meta_keys: Optional[str] = None,
meta_key_postfix=DEFAULT_POST_FIX,
to_tensor: bool = True,
output_device: Union[str, torch.device] = "cpu",
post_func: Callable = _identity,
return_full_data: bool = False,
progress: bool = True,
) -> None:
@@ -118,12 +125,20 @@ def __init__(
self.inferrer_fn = inferrer_fn
self.device = device
self.image_key = image_key
self.orig_key = orig_key
self.nearest_interp = nearest_interp
self.orig_meta_keys = orig_meta_keys
self.meta_key_postfix = meta_key_postfix
self.return_full_data = return_full_data
self.progress = progress
self._pred_key = CommonKeys.PRED
self.inverter = Invertd(
keys=self._pred_key,
transform=transform,
orig_keys=orig_key,
orig_meta_keys=orig_meta_keys,
meta_key_postfix=meta_key_postfix,
nearest_interp=nearest_interp,
to_tensor=to_tensor,
device=output_device,
post_func=post_func,
)

# check that the transform has at least one random component, and that all random transforms are invertible
self._check_transforms()
@@ -135,8 +150,8 @@ def _check_transforms(self):
invertibles = np.array([isinstance(t, InvertibleTransform) for t in ts])
# check at least 1 random
if sum(randoms) == 0:
raise RuntimeError(
"Requires a `Randomizable` transform or a `Compose` containing at least one `Randomizable` transform."
warnings.warn(
"TTA usually has at least a `Randomizable` transform or `Compose` contains `Randomizable` transforms."
)
# check that whenever randoms is True, invertibles is also true
for r, i in zip(randoms, invertibles):
@@ -147,18 +162,19 @@ def _check_transforms(self):

def __call__(
self, data: Dict[str, Any], num_examples: int = 10
) -> Union[Tuple[np.ndarray, np.ndarray, np.ndarray, float], np.ndarray]:
) -> Union[Tuple[NdarrayOrTensor, NdarrayOrTensor, NdarrayOrTensor, float], NdarrayOrTensor]:
"""
Args:
data: dictionary data to be processed.
num_examples: number of realisations to be processed and results combined.

Returns:
- if `return_full_data==False`: mode, mean, std, vvc. The mode, mean and standard deviation are calculated across
`num_examples` outputs at each voxel. The volume variation coefficient (VVC) is `std/mean` across the whole output,
including `num_examples`. See original paper for clarification.
- if `return_full_data==False`: data is returned as-is after applying the `inferrer_fn` and then concatenating across
the first dimension containing `num_examples`. This allows the user to perform their own analysis if desired.
- if `return_full_data==False`: mode, mean, std, vvc. The mode, mean and standard deviation are
calculated across `num_examples` outputs at each voxel. The volume variation coefficient (VVC)
is `std/mean` across the whole output, including `num_examples`. See original paper for clarification.
- if `return_full_data==False`: data is returned as-is after applying the `inferrer_fn` and then
concatenating across the first dimension containing `num_examples`. This allows the user to perform
their own analysis if desired.
"""
d = dict(data)

@@ -171,56 +187,22 @@ def __call__(
ds = Dataset(data_in, self.transform)
dl = DataLoader(ds, num_workers=self.num_workers, batch_size=self.batch_size, collate_fn=pad_list_data_collate)

transform_key = InvertibleTransform.trace_key(self.orig_key)

# create inverter
inverter = BatchInverseTransform(self.transform, dl, collate_fn=list_data_collate)

outputs: List[np.ndarray] = []
outs: List = []

for batch_data in tqdm(dl) if has_tqdm and self.progress else dl:

batch_images = batch_data[self.image_key].to(self.device)

# do model forward pass
batch_output = self.inferrer_fn(batch_images)
if isinstance(batch_output, torch.Tensor):
batch_output = batch_output.detach().cpu()
if isinstance(batch_output, np.ndarray):
batch_output = torch.Tensor(batch_output)
transform_info = batch_data.get(transform_key, None)
if transform_info is None:
# no invertible transforms, adding dummy info for identity invertible
transform_info = [[TraceKeys.NONE] for _ in range(self.batch_size)]
if self.nearest_interp:
transform_info = convert_inverse_interp_mode(
trans_info=deepcopy(transform_info), mode="nearest", align_corners=None
)
batch_data[self._pred_key] = self.inferrer_fn(batch_data[self.image_key].to(self.device))
outs.extend([self.inverter(i)[self._pred_key] for i in decollate_batch(batch_data)])

# create a dictionary containing the inferred batch and their transforms
inferred_dict = {self.orig_key: batch_output, transform_key: transform_info}
# if meta dict is present, add that too (required for some inverse transforms)
meta_dict_key = self.orig_meta_keys or f"{self.orig_key}_{self.meta_key_postfix}"
if meta_dict_key in batch_data:
inferred_dict[meta_dict_key] = batch_data[meta_dict_key]

# do inverse transformation (allow missing keys as only inverting the orig_key)
with allow_missing_keys_mode(self.transform): # type: ignore
inv_batch = inverter(inferred_dict)

# append
outputs.append(inv_batch[self.orig_key])

# output
output: np.ndarray = np.concatenate(outputs)
output: NdarrayOrTensor = stack(outs, 0)

if self.return_full_data:
return output

# calculate metrics
output_t, *_ = convert_data_type(output, output_type=torch.Tensor, dtype=np.int64)
mode: np.ndarray = np.asarray(torch.mode(output_t, dim=0).values) # type: ignore
mean: np.ndarray = np.mean(output, axis=0) # type: ignore
std: np.ndarray = np.std(output, axis=0) # type: ignore
vvc: float = (np.std(output) / np.mean(output)).item()
return mode, mean, std, vvc
_mode = mode(output, dim=0)
mean = output.mean(0)
std = output.std(0)
vvc = (output.std() / output.mean()).item()

return _mode, mean, std, vvc
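The new per-sample inversion loop above (batch forward pass, then `decollate_batch` into per-sample dicts, then `Invertd` on each prediction) can be outlined in plain Python. `decollate` below is a toy stand-in for `monai.data.utils.decollate_batch`, and the per-element `inferrer_fn` call is a simplification of the real batched forward pass:

```python
def decollate(batch):
    # toy stand-in for monai.data.utils.decollate_batch:
    # a dict of equal-length sequences -> a list of per-sample dicts
    keys = list(batch)
    n = len(batch[keys[0]])
    return [{k: batch[k][i] for k in keys} for i in range(n)]

def run_tta_batches(loader, inferrer_fn, inverter, image_key="image", pred_key="pred"):
    # mirror of the refactored __call__ loop: predict, then invert each
    # decollated sample's prediction individually
    outs = []
    for batch in loader:
        batch[pred_key] = [inferrer_fn(x) for x in batch[image_key]]
        outs.extend(inverter(sample)[pred_key] for sample in decollate(batch))
    return outs

loader = [{"image": [1, 2]}, {"image": [3]}]
outs = run_tta_batches(loader,
                       inferrer_fn=lambda x: x * 10,
                       inverter=lambda d: {**d, "pred": d["pred"] // 10})
print(outs)  # [1, 2, 3]
```

Decollating before inversion is what lets `Invertd` handle each sample's own transform trace, replacing the hand-rolled `BatchInverseTransform` bookkeeping removed in this diff.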
2 changes: 2 additions & 0 deletions monai/transforms/__init__.py
@@ -562,11 +562,13 @@
isfinite,
isnan,
maximum,
mode,
moveaxis,
nonzero,
percentile,
ravel,
repeat,
stack,
unravel_index,
where,
)
33 changes: 32 additions & 1 deletion monai/transforms/utils_pytorch_numpy_unification.py
@@ -16,6 +16,7 @@

from monai.config.type_definitions import NdarrayOrTensor
from monai.utils.misc import ensure_tuple, is_module_ver_at_least
from monai.utils.type_conversion import convert_data_type, convert_to_dst_type

__all__ = [
"moveaxis",
@@ -37,6 +38,8 @@
"repeat",
"isnan",
"ascontiguousarray",
"stack",
"mode",
]


@@ -355,7 +358,7 @@ def isnan(x: NdarrayOrTensor) -> NdarrayOrTensor:

Args:
x: array/tensor

dim: dimension along which to stack
"""
if isinstance(x, np.ndarray):
return np.isnan(x)
@@ -378,3 +381,31 @@ def ascontiguousarray(x: NdarrayOrTensor, **kwargs) -> NdarrayOrTensor:
if isinstance(x, torch.Tensor):
return x.contiguous(**kwargs)
return x


def stack(x: Sequence[NdarrayOrTensor], dim: int) -> NdarrayOrTensor:
"""`np.stack` with equivalent implementation for torch.

Args:
x: array/tensor
dim: dimension along which to perform the stack (referred to as `axis` by numpy)
"""
if isinstance(x[0], np.ndarray):
return np.stack(x, dim) # type: ignore
return torch.stack(x, dim) # type: ignore


def mode(x: NdarrayOrTensor, dim: int = -1, to_long: bool = True) -> NdarrayOrTensor:
"""`torch.mode` with equivalent implementation for numpy.

Args:
x: array/tensor
dim: dimension along which to perform `mode` (referred to as `axis` by numpy)
to_long: convert input to long before performing mode.
"""
x_t: torch.Tensor
dtype = torch.int64 if to_long else None
x_t, *_ = convert_data_type(x, torch.Tensor, dtype=dtype) # type: ignore
o_t = torch.mode(x_t, dim).values
o, *_ = convert_to_dst_type(o_t, x)
return o
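A numpy-only analogue of the new `mode` helper (restricted to 1-D input, a simplification of the torch-backed version above) behaves as the TEST_MODE cases in the test file expect:

```python
import numpy as np

def mode_1d(x, to_long=True):
    # numpy-only analogue of the unified `mode` helper, 1-D input only:
    # optionally cast to int64 first (mirroring `to_long=True`), then return
    # the most frequent value; ties go to the smallest value, since np.unique
    # returns values sorted and argmax keeps the first maximum
    if to_long:
        x = x.astype(np.int64)
    vals, counts = np.unique(x, return_counts=True)
    return vals[np.argmax(counts)]

print(mode_1d(np.array([1, 2, 3, 4, 4, 5])))                  # 4
print(mode_1d(np.array([3.1, 4.1, 4.1, 5.1]), to_long=False)) # 4.1
print(mode_1d(np.array([3.1, 4.1, 4.1, 5.1])))                # 4 (cast to long first)
```

The `to_long` cast explains the third TEST_MODE case: float inputs are truncated to integers before the mode is taken, so `4.1` becomes `4`.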
24 changes: 16 additions & 8 deletions tests/test_testtimeaugmentation.py
@@ -75,11 +75,12 @@ def tearDown(self) -> None:

def test_test_time_augmentation(self):
input_size = (20, 20)
device = "cuda" if torch.cuda.is_available() else "cpu"
keys = ["image", "label"]
num_training_ims = 10

train_data = self.get_data(num_training_ims, input_size)
test_data = self.get_data(1, input_size)
device = "cuda" if torch.cuda.is_available() else "cpu"

transforms = Compose(
[
@@ -125,21 +126,28 @@ def test_test_time_augmentation(self):

post_trans = Compose([Activations(sigmoid=True), AsDiscrete(threshold=0.5)])

def inferrer_fn(x):
return post_trans(model(x))

tt_aug = TestTimeAugmentation(transforms, batch_size=5, num_workers=0, inferrer_fn=inferrer_fn, device=device)
tt_aug = TestTimeAugmentation(
transform=transforms,
batch_size=5,
num_workers=0,
inferrer_fn=model,
device=device,
to_tensor=True,
output_device="cpu",
post_func=post_trans,
)
mode, mean, std, vvc = tt_aug(test_data)
self.assertEqual(mode.shape, (1,) + input_size)
self.assertEqual(mean.shape, (1,) + input_size)
self.assertTrue(all(np.unique(mode) == (0, 1)))
self.assertEqual((mean.min(), mean.max()), (0.0, 1.0))
self.assertGreaterEqual(mean.min(), 0.0)
self.assertLessEqual(mean.max(), 1.0)
self.assertEqual(std.shape, (1,) + input_size)
self.assertIsInstance(vvc, float)

def test_fail_non_random(self):
def test_warn_non_random(self):
transforms = Compose([AddChanneld("im"), SpatialPadd("im", 1)])
with self.assertRaises(RuntimeError):
with self.assertWarns(UserWarning):
TestTimeAugmentation(transforms, None, None, None)

def test_warn_random_but_has_no_invertible(self):
14 changes: 13 additions & 1 deletion tests/test_utils_pytorch_numpy_unification.py
@@ -13,11 +13,18 @@

import numpy as np
import torch
from parameterized import parameterized

from monai.transforms.utils_pytorch_numpy_unification import percentile
from monai.transforms.utils_pytorch_numpy_unification import mode, percentile
from monai.utils import set_determinism
from tests.utils import TEST_NDARRAYS, SkipIfBeforePyTorchVersion, assert_allclose

TEST_MODE = []
for p in TEST_NDARRAYS:
TEST_MODE.append([p(np.array([1, 2, 3, 4, 4, 5])), p(4), False])
TEST_MODE.append([p(np.array([3.1, 4.1, 4.1, 5.1])), p(4.1), False])
TEST_MODE.append([p(np.array([3.1, 4.1, 4.1, 5.1])), p(4), True])


class TestPytorchNumpyUnification(unittest.TestCase):
def setUp(self) -> None:
@@ -54,6 +61,11 @@ def test_dim(self):
atol = 0.5 if not hasattr(torch, "quantile") else 1e-4
assert_allclose(results[0], results[-1], type_test=False, atol=atol)

@parameterized.expand(TEST_MODE)
def test_mode(self, array, expected, to_long):
res = mode(array, to_long=to_long)
assert_allclose(res, expected)


if __name__ == "__main__":
unittest.main()