merge master #466

Merged
merged 190 commits into from
Apr 19, 2023
Changes from 1 commit
69463a6
update meta files; releasing 1.1
wyli Dec 19, 2022
97918e4
5776 Fixes integration (#5778)
wyli Dec 20, 2022
e3d1cc8
5793 Avoid using sphinx 6.0.0 (#5794)
yiheng-wang-nv Jan 4, 2023
4381e4b
'dimensions' -> 'spatial_dims' (#5792)
ShadowTwin41 Jan 4, 2023
c6081e5
5775 Fix `_get_latest_bundle_version` issue on Windows (#5787)
yiheng-wang-nv Jan 4, 2023
7512d07
[pre-commit.ci] pre-commit suggestions and fixes 5805 (#5797)
pre-commit-ci[bot] Jan 4, 2023
c5ea694
Add generic kernel transform with support for multiple kernels (#5317)
kbressem Jan 4, 2023
315d2d2
Utility function to warn changes in default arguments (#5738)
Shadow-Devil Jan 4, 2023
df6bc9c
5782 Flexible interp modes in regunet (#5807)
wyli Jan 5, 2023
eb6bbe8
auto updates (#5812)
monai-bot Jan 5, 2023
c7bc1bf
5624 5816 resume tests (#5814)
wyli Jan 6, 2023
2c2de9e
update the docstring for `HoVerNet` and `UpSample` (#5818)
KumoLiu Jan 6, 2023
9efd54d
5804 Simplify the configparser usage to get parsed attributes easily …
Shadow-Devil Jan 6, 2023
fe2ba78
Test pre-merge and integration tests (#5819)
YanxuanLiu Jan 6, 2023
8f136db
MyPy disallow untyped decorators (#5824)
Shadow-Devil Jan 9, 2023
d990ff5
auto updates (#5828)
monai-bot Jan 9, 2023
6060a47
5833 fixes Nifti1Image test cases (#5834)
wyli Jan 10, 2023
fdd07f3
Consistent usage of annotation syntax (#5836)
Shadow-Devil Jan 10, 2023
681f081
Upgrade pytorch image to 22.12 (#5820)
YanxuanLiu Jan 10, 2023
7eddd2d
Exclude cuCIM wrappers from get_transform_backends (#5838)
bhashemian Jan 11, 2023
4b464e7
5740 metrics reloaded support (#5741)
brudfors Jan 12, 2023
f14d50a
5844 improve metatensor slicing (#5845)
wyli Jan 13, 2023
33d41d7
Test premerge (#5829)
YanxuanLiu Jan 14, 2023
6cb4ced
Fix constructors for DenseNet derived classes (#5846)
pzaffino Jan 14, 2023
734d4ff
5852 fixes partial callable config parser (#5854)
wyli Jan 17, 2023
373e47d
Add ITK to the list of optional dependencies (#5858)
dzenanz Jan 17, 2023
6803061
1840 add tbf layer (#5757)
faebstn96 Jan 17, 2023
79b8b0c
use untyped_storage if present (#5863)
rijobro Jan 17, 2023
bdf5e1e
Fix instantiate for object instantiation with attribute `path` (#5866)
surajpaib Jan 18, 2023
2c5c89f
Add warning in `RandHistogramShift` (#5877)
KumoLiu Jan 19, 2023
5c778ad
5853: GEGLU activation function for the MLP Block. (#5856)
Shadow-Devil Jan 19, 2023
2d0e021
5867 add tjbf layer (#5876)
faebstn96 Jan 20, 2023
f266ea5
Disallow incomplete defs in visualize module (#5885)
Shadow-Devil Jan 23, 2023
86a06f3
workaround 5882 reenable weekly preview release (#5887)
wyli Jan 23, 2023
cce2063
5883 review requirements-dev.txt, docs/requirements.txt (#5888)
wyli Jan 23, 2023
e279463
add rand identity (#5890)
rijobro Jan 23, 2023
af0779c
5869 adds support of `_mode_` keyword in `instantiate` (#5880)
wyli Jan 23, 2023
a5a3e4e
Complete partially typed signatures (monai.utils) (#5891)
Shadow-Devil Jan 23, 2023
5a6cffe
fix dints bug (#5879)
dongyang0122 Jan 24, 2023
86c7ecd
make path user expansion optional (#5892)
colobas Jan 24, 2023
59593bb
removes APIs that are deprecated since 0.9 (#5901)
wyli Jan 26, 2023
8a5a550
warning about the upcoming change of default `image_only=True` in `L…
wyli Jan 27, 2023
ee38882
Fix SSIMLoss when batchsize bigger than 2 (#5908)
jak0bw Jan 27, 2023
50c3f32
auto updates (#5907)
monai-bot Jan 27, 2023
1a018a7
5849 Add metrics reloaded handler (#5920)
yiheng-wang-nv Jan 31, 2023
968878a
auto updates (#5927)
monai-bot Feb 1, 2023
b592164
4855 5860 update the pending transform utilities (#5916)
wyli Feb 1, 2023
e90bb84
5919 unify output tensor device for multiple metrics (#5924)
yiheng-wang-nv Feb 1, 2023
0d38e4b
Disallow incomplete defs in optimizers module (#5928)
Shadow-Devil Feb 1, 2023
3c8f6c6
5919 fix generalized dice issue (#5929)
yiheng-wang-nv Feb 2, 2023
a77de4a
5931 Fix config parsing issue for substring reference (#5932)
Nic-Ma Feb 3, 2023
f9842c0
Fully type annotate partially typed functions (#5940)
Shadow-Devil Feb 5, 2023
25518f2
auto updates (#5941)
monai-bot Feb 6, 2023
385ac0b
Fully type annotate partially typed functions in bundle, metrics and …
Shadow-Devil Feb 6, 2023
58c256e
Revert: New type hint in Inferer implies additional restriction (#5949)
Shadow-Devil Feb 8, 2023
3a3f6c4
5935 fixes spacing pixdim inplace change (#5950)
wyli Feb 8, 2023
cdac033
Set mypy warn_unused_ignores to False (#5956)
Shadow-Devil Feb 8, 2023
82cd668
5959 uses logging config at module level (#5960)
wyli Feb 8, 2023
94feae5
5998 improves error messages in monai/transforms (#5961)
wyli Feb 9, 2023
5657b8f
5762 pprint head and tail bundle script (#5969)
wyli Feb 10, 2023
de2b48c
Added callable options for iteration_log and epoch_log in StatsHandle…
vfdev-5 Feb 10, 2023
71abf1b
5902 allow for missing filename_or_obj key (#5980)
wyli Feb 13, 2023
3122e1a
5983 update to use np.linalg for the small affine inverse (#5967)
wyli Feb 13, 2023
68dba06
5971-fix the pixelshuffle upsample shape mismatch problem. (#5982)
binliunls Feb 13, 2023
79e272d
add pytorch 22.11 to test matrix (#5953)
YanxuanLiu Feb 14, 2023
16b8538
5957 update doc urls (#5992)
wyli Feb 14, 2023
f6c6413
5652 `random_size=False` will be the new default for the rand croppin…
wyli Feb 14, 2023
e089946
5978 test time configurable algo template path or url (#5979)
wyli Feb 14, 2023
9ddb14c
CI: Add dependabot workflow to update github actions (#5995)
jamesobutler Feb 15, 2023
d41c002
Bump actions/checkout from 2 to 3 (#6000)
dependabot[bot] Feb 15, 2023
54f4cfc
Bump actions/upload-artifact from 2 to 3 (#6004)
dependabot[bot] Feb 15, 2023
f4902b2
transforms to have multi-sample trait (#6003)
wyli Feb 15, 2023
94e9e17
6007 reverse_indexing for PILReader (#6008)
wyli Feb 16, 2023
11745a6
Improve sliding-window inference (#6009)
dongyang0122 Feb 16, 2023
d44c7cb
5991 add utility enhancements for lazy resampling (#6017)
yiheng-wang-nv Feb 17, 2023
bf55f22
6018 fixes dev docker build (#6019)
wyli Feb 17, 2023
f5708ea
Fix `CheckpointSaver` log error (#6026)
KumoLiu Feb 18, 2023
2a8c8cd
Expose ITK Image to MONAI MetaTensor conversion (#5897)
Shadow-Devil Feb 20, 2023
88fb0b1
Added callable options for iteration_log and epoch_log in TensorBoard…
vfdev-5 Feb 20, 2023
66422f8
6033 keep the gradient in sliding window utility when possible (#6034)
wyli Feb 21, 2023
f1bc378
Patch inference (#5914)
bhashemian Feb 22, 2023
aaab2ca
Improve GPU utilization rate of DiNTS network (#6040)
dongyang0122 Feb 22, 2023
3eef61e
Fix unused arg in `SlidingPatchWSIDataset` (#6047)
bhashemian Feb 22, 2023
b9e17e8
Improve GPU utilization of dints network (#6050)
dongyang0122 Feb 23, 2023
69d807a
add pad transforms with unit tests for lazy resampling (#6031)
KumoLiu Feb 24, 2023
d7a8774
5991 Enable lazy resampling for SpatialResample (#6060)
yiheng-wang-nv Feb 24, 2023
c731aaa
include citation (#6063)
rijobro Feb 24, 2023
f519699
update statshandler logging message (#6051)
wyli Feb 25, 2023
26f100a
auto updates (#6070)
monai-bot Feb 27, 2023
6d19992
Add affine_lps_to_ras argument to NrrdReader (#6074)
johannesu Feb 27, 2023
b30ea42
6028 6041 Upgrade ignite dependency to 0.4.11 (#6067)
Nic-Ma Feb 27, 2023
00c5c73
6069 Add default metadata and logging values for bundle run (#6072)
yiheng-wang-nv Feb 27, 2023
c2fc083
add crop transforms with unit tests for lazy resampling (#6068)
KumoLiu Feb 28, 2023
ab800d8
6066 pad mode (#6076)
wyli Feb 28, 2023
68074f0
auto updates (#6085)
monai-bot Mar 1, 2023
579fe65
GridPatch with both count and threshold filtering (#6055)
bhashemian Mar 1, 2023
7baf282
add remaining crop transforms with unit tests for lazy resampling (#6…
KumoLiu Mar 1, 2023
c8284d6
6048 deprecate resample=True in SaveImage (#6091)
wyli Mar 2, 2023
934ba09
Remove redundant array copy for WSITiffFileReader (#6089)
bhashemian Mar 2, 2023
958aac7
Remove del patches (#6095)
bhashemian Mar 2, 2023
10faf46
6086 6087 nan to indicate no_channel, split dim singleton (#6090)
wyli Mar 2, 2023
e375f2a
auto updates (#6102)
monai-bot Mar 6, 2023
fa884a2
6104 remove deprecated tensor.storage usage (#6105)
wyli Mar 6, 2023
b85e2c6
6020 avoid creating cufile.log when `import monai` (#6106)
wyli Mar 7, 2023
354c9c2
#6094 - add `warn` flag to RandCropByLabelClasses (#6121)
kbressem Mar 9, 2023
d118aa7
Add single if statement for warn (#6130)
kbressem Mar 11, 2023
f754928
Add ClearMLHandler to track all MONAI Experiments (#6013)
skinan Mar 12, 2023
a8302ec
6109 no mutate ratio /user inputs croppad (#6127)
wyli Mar 13, 2023
0a904fb
6124-add-training-attribute-check (#6132)
binliunls Mar 14, 2023
589c711
Improve GPU memory efficiency on sliding-window inferer (#6140)
dongyang0122 Mar 14, 2023
01b6d70
WSIReader defaults and tensor conversion (#6058)
bhashemian Mar 14, 2023
5e3b133
5991 Support more spatial transforms to use lazy resampling (#6080)
yiheng-wang-nv Mar 14, 2023
93a77b7
Use lower wsi level for testing (#6145)
bhashemian Mar 14, 2023
9fd6d4c
improve SpacingD output shape compute stability (#6126)
wyli Mar 15, 2023
c696773
5821 Add interface for bundle workflows (#5822)
Nic-Ma Mar 15, 2023
778d0aa
Fix "fast_training_tutorial.ipynb" (#6150)
KumoLiu Mar 15, 2023
af46d7b
Update Auto3DSeg ALGO_HASH (#5973)
mingxin-zheng Mar 15, 2023
6a113e6
auto updates (#6153)
monai-bot Mar 15, 2023
678b512
DataAnalyzer enhancements (#6131)
myron Mar 16, 2023
66d0478
6136 6146 update the default writer flag (#6147)
wyli Mar 16, 2023
5ce8a10
more efficient Dice metrics for large num_class (#6163)
wyli Mar 17, 2023
baa17a8
update release tests
wyli Mar 18, 2023
81b25ae
auto updates (#6176)
monai-bot Mar 20, 2023
eed660c
add nnunetv2 runner class (#5987)
dongyang0122 Mar 20, 2023
f183a5e
Update ALGO_HASH to make Auto3DSeg dints support older version of PyT…
mingxin-zheng Mar 21, 2023
843da82
auto updates/misc integration fixes (#6211)
monai-bot Mar 21, 2023
d57bff6
[Fix] Solve for ClearML Test issues. (#6212)
skinan Mar 21, 2023
0a45777
5915-add-export-trt-api (#5986)
binliunls Mar 21, 2023
554b172
#6213 Prepend `"meta"` to `MetaTensor.__repr__` and `MetaTensor.__str…
MathijsdeBoer Mar 21, 2023
47f8110
6216 fixes tensorrt test cases (#6217)
wyli Mar 22, 2023
76ade87
DiceLoss small optimization (#6218)
myron Mar 22, 2023
b8f158b
Auto3Dseg datalist folds enhancements (#6204)
myron Mar 22, 2023
4f8bc59
Enble swinunetr-v2 (#6203)
heyufan1995 Mar 22, 2023
104a360
6166 drop py37 and torch 1.8 (#6227)
wyli Mar 23, 2023
1cd0d7b
upgrade pytorch version (#6228)
YanxuanLiu Mar 23, 2023
b87375f
4855 lazy resampling impl -- Compose (#5860)
wyli Mar 23, 2023
c9889ce
6219 fixes type annotations for tensorrt (#6229)
wyli Mar 24, 2023
4ed5532
Add SomeOf transform composer (#6143)
tuanchien Mar 24, 2023
52e1edd
auto updates and integration test fixes (#6232)
monai-bot Mar 24, 2023
c885460
#6188 Allow user to define own `FolderLayout` in `SaveImage` and `Sav…
MathijsdeBoer Mar 24, 2023
c0aace5
Add transform to handle empty box as training data (#6170)
Can-Zhao Mar 25, 2023
600621d
Ignore none json/yaml config for bundleAgo (#6235)
heyufan1995 Mar 25, 2023
795bf61
auto updates (#6236)
monai-bot Mar 25, 2023
8eceabf
support keep_size=True in lazy Zoom (#6240)
wyli Mar 27, 2023
be3d138
Add RankFilter to skip logging when the rank is not meeting criteria …
mingxin-zheng Mar 28, 2023
e2f9d51
support GPU tensor for `GridPatch` and `GridPatchDataset` (#6246)
qingpeng9802 Mar 29, 2023
45a398a
warn or raise ValueError on duplicated key in json/yaml config (#6252)
kretes Mar 29, 2023
e4d48f0
Improve Compose encapsulation (#6224)
atbenmurray Mar 30, 2023
1b0808d
upgrade pytorch to 23.03 (#6256)
YanxuanLiu Mar 30, 2023
8470454
pipeline release upload with py39
wyli Mar 31, 2023
1005eac
InstanceNorm3dNVFuser fixes (#6266)
myron Apr 2, 2023
05533ab
Autorunner clear gpu mem allocatoins of dataanalyzer (#6270)
myron Apr 2, 2023
9b4c235
auto updates (#6273)
monai-bot Apr 3, 2023
6aa4f90
5821 Enhance bundle CLI entry for different bundle workflows (#6181)
Nic-Ma Apr 3, 2023
129c097
Remove deprecated WSIReader (#6262)
bhashemian Apr 3, 2023
bb4df37
6268 enhance hovernet load pretrained function (#6269)
yiheng-wang-nv Apr 3, 2023
9e33ff2
feat(SABlock): access to the attn matrix (#6271)
a-parida12 Apr 3, 2023
aa326bd
[pre-commit.ci] pre-commit suggestions (#6286)
pre-commit-ci[bot] Apr 4, 2023
9b09ed6
Add onnx export with verification (#6237)
liqunfu Apr 5, 2023
fa05609
Auto3DSeg skip trained algos (#6290)
myron Apr 5, 2023
629a758
Revert "Auto3DSeg skip trained algos" (#6295)
wyli Apr 5, 2023
d5258cc
Enable algorithm skip in bundle_gen (#6239)
heyufan1995 Apr 5, 2023
f98f0fd
6193 add a buffer_step option to the sliding windows util/speed up (#…
wyli Apr 5, 2023
332bfc0
ClassesToIndicesd limit the cache memory (#6284)
myron Apr 6, 2023
1832f95
Bug-fix in AlgoEnsembleBestN and AlgoEnsembleBestbyFold (#6300)
ValentinaVisani Apr 6, 2023
06defb7
onnx export to support older pytorch with example_outputs argument (#…
liqunfu Apr 6, 2023
e4b313d
Auto3DSeg continue training (skip trained algos) (#6310)
myron Apr 6, 2023
9ef42ff
auto updates (#6324)
monai-bot Apr 11, 2023
0a29bc1
WSIReader read by power and mpp (#6244)
bhashemian Apr 11, 2023
d14e8f0
feat(SABlock): access atten matrix jit compliant (#6308)
a-parida12 Apr 11, 2023
fa7411a
add `device` in `HoVerNetNuclearTypePostProcessing` and `HoVerNetInst…
KumoLiu Apr 11, 2023
5beeda5
6258-add-onnx-option-to-trt-export (#6274)
binliunls Apr 11, 2023
6a7f35b
Adding support for Multi-GPU data analyzer #6182 (#6202)
heyufan1995 Apr 11, 2023
6b7b1e7
auto updates and merging 6339/6341/6346 (#6344)
monai-bot Apr 12, 2023
ef1285a
added spacing to surface distances calculations (#6144)
gasperpodobnik Apr 12, 2023
21cef45
Generate fake/simulated images in Auto3DSeg tests based on mGPU test …
mingxin-zheng Apr 12, 2023
1a55ba5
Merge branch 'releasing/1.2.0' into dev
wyli Apr 12, 2023
d8d887f
fixes integration test and 6354 (#6353)
wyli Apr 13, 2023
57c618c
Add text to vision embedding (#6282)
tangy5 Apr 13, 2023
3633b1c
SlidingWindowInfererAdapt (#6251)
myron Apr 14, 2023
825b8db
Initial version for multinode auto_runner and ensembler (#6272)
heyufan1995 Apr 14, 2023
7bb74c6
`int` -> `float` type annotation for LRFinder (#6364)
joshestein Apr 14, 2023
c5b1127
Update ALGO_HASH/skip slow tests 6359 (#6367)
mingxin-zheng Apr 14, 2023
888ad2f
Ensure ensemble preds are on the same device (#6368)
myron Apr 15, 2023
b356fec
Fix cuda defined in train_params bug (#6370)
heyufan1995 Apr 15, 2023
e18097d
auto updates (#6374)
monai-bot Apr 17, 2023
30aa410
Add track_meta option for Lambda and derived transforms (#6385)
surajpaib Apr 18, 2023
3881e45
Update ALGO_HASH (#6384)
mingxin-zheng Apr 18, 2023
c30a5b9
6371 enhance warning messages when modifying applied operations (#6372)
wyli Apr 18, 2023
d8eb68a
5821 6303 Optimize MonaiAlgo FL based on BundleWorkflow (#6158)
Nic-Ma Apr 19, 2023
added spacing to surface distances calculations (Project-MONAI#6144)
Fixes Project-MONAI#6137.

### Description

A user can now pass a `spacing` parameter to surface distance metrics
(Hausdorff distance, surface distance, surface dice), so that correct
results can be obtained for images with non-isotropic spacing (e.g.
(0.5, 0.5, 2) mm).
If `spacing` is a sequence, its length must equal the number of image
dimensions; if it is a single number, that spacing is used for all axes. If
``None``, unit spacing is used. Defaults to ``None`` so that the current
behaviour is preserved.
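For intuition, the anisotropy effect can be sketched with a toy max–min surface distance in plain NumPy (a hypothetical stand-in for illustration only; the real metric extracts boundary voxels with `get_mask_edges` and computes distances via `get_surface_distance`):

```python
import numpy as np

# Boundary points (voxel indices) of a toy prediction and ground truth,
# two rows apart on a 2D grid.
pred_pts = np.array([[3, 2]])
gt_pts = np.array([[1, 2]])

def hausdorff(a, b, spacing=(1.0, 1.0)):
    """Directed max-min distance between point sets, with per-axis voxel
    spacing (a toy stand-in for the new `spacing` parameter)."""
    diffs = (a[:, None, :] - b[None, :, :]) * np.asarray(spacing)
    d = np.sqrt((diffs ** 2).sum(-1))  # pairwise euclidean distances
    return d.min(1).max()

print(hausdorff(pred_pts, gt_pts))                      # 2.0 with unit spacing
print(hausdorff(pred_pts, gt_pts, spacing=(2.0, 0.5)))  # 4.0 with 2 mm rows
```

With unit spacing the points are 2 voxels = 2.0 apart; declaring 2 mm row spacing makes the same voxel offset 4.0 mm, which is exactly the correction this PR enables for non-isotropic images.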

### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [x] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [x] In-line docstrings updated.
- [x] Documentation updated, tested `make html` command in the `docs/`
folder.

---------

Signed-off-by: gasperp <gasper.podobnik@gmail.com>
gasperpodobnik authored Apr 12, 2023
commit ef1285a2a382aa1dbc4faee32d772abb4b048765
52 changes: 46 additions & 6 deletions monai/metrics/hausdorff_distance.py
Original file line number Diff line number Diff line change
@@ -12,11 +12,19 @@
from __future__ import annotations

import warnings
from collections.abc import Sequence
from typing import Any

import numpy as np
import torch

from monai.metrics.utils import do_metric_reduction, get_mask_edges, get_surface_distance, ignore_background
from monai.metrics.utils import (
do_metric_reduction,
get_mask_edges,
get_surface_distance,
ignore_background,
prepare_spacing,
)
from monai.utils import MetricReduction, convert_data_type

from .metric import CumulativeIterationMetric
@@ -70,21 +78,32 @@ def __init__(
self.reduction = reduction
self.get_not_nans = get_not_nans

def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor: # type: ignore[override]
def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor, **kwargs: Any) -> torch.Tensor: # type: ignore[override]
"""
Args:
y_pred: input data to compute, typical segmentation model output.
It must be one-hot format and first dim is batch, example shape: [16, 3, 32, 32]. The values
should be binarized.
y: ground truth to compute the distance. It must be one-hot format and first dim is batch.
The values should be binarized.
kwargs: additional parameters, e.g. ``spacing`` should be passed to correctly compute the metric.
``spacing``: spacing of pixel (or voxel). This parameter is relevant only
if ``distance_metric`` is set to ``"euclidean"``.
If a single number, isotropic spacing with that value is used for all images in the batch. If a sequence of numbers,
the length of the sequence must be equal to the image dimensions.
This spacing will be used for all images in the batch.
If a sequence of sequences, the length of the outer sequence must be equal to the batch size.
If inner sequence has length 1, isotropic spacing with that value is used for all images in the batch,
else the inner sequence length must be equal to the image dimensions. If ``None``, spacing of unity is used
for all images in batch. Defaults to ``None``.

Raises:
ValueError: when `y_pred` has less than three dimensions.
"""
dims = y_pred.ndimension()
if dims < 3:
raise ValueError("y_pred should have at least three dimensions.")

# compute (BxC) for each channel for each batch
return compute_hausdorff_distance(
y_pred=y_pred,
@@ -93,6 +112,7 @@ def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor
distance_metric=self.distance_metric,
percentile=self.percentile,
directed=self.directed,
spacing=kwargs.get("spacing"),
)

def aggregate(
@@ -123,6 +143,7 @@ def compute_hausdorff_distance(
distance_metric: str = "euclidean",
percentile: float | None = None,
directed: bool = False,
spacing: int | float | np.ndarray | Sequence[int | float | np.ndarray | Sequence[int | float]] | None = None,
) -> torch.Tensor:
"""
Compute the Hausdorff distance.
@@ -141,6 +162,13 @@ def compute_hausdorff_distance(
percentile of the Hausdorff Distance rather than the maximum result will be achieved.
Defaults to ``None``.
directed: whether to calculate directed Hausdorff distance. Defaults to ``False``.
spacing: spacing of pixel (or voxel). This parameter is relevant only if ``distance_metric`` is set to ``"euclidean"``.
If a single number, isotropic spacing with that value is used for all images in the batch. If a sequence of numbers,
the length of the sequence must be equal to the image dimensions. This spacing will be used for all images in the batch.
If a sequence of sequences, the length of the outer sequence must be equal to the batch size.
If inner sequence has length 1, isotropic spacing with that value is used for all images in the batch,
else the inner sequence length must be equal to the image dimensions. If ``None``, spacing of unity is used
for all images in batch. Defaults to ``None``.
"""

if not include_background:
@@ -153,30 +181,42 @@ def compute_hausdorff_distance(

batch_size, n_class = y_pred.shape[:2]
hd = np.empty((batch_size, n_class))

img_dim = y_pred.ndim - 2
spacing_list = prepare_spacing(spacing=spacing, batch_size=batch_size, img_dim=img_dim)

for b, c in np.ndindex(batch_size, n_class):
(edges_pred, edges_gt) = get_mask_edges(y_pred[b, c], y[b, c])
if not np.any(edges_gt):
warnings.warn(f"the ground truth of class {c} is all 0, this may result in nan/inf distance.")
if not np.any(edges_pred):
warnings.warn(f"the prediction of class {c} is all 0, this may result in nan/inf distance.")

distance_1 = compute_percent_hausdorff_distance(edges_pred, edges_gt, distance_metric, percentile)
distance_1 = compute_percent_hausdorff_distance(
edges_pred, edges_gt, distance_metric, percentile, spacing_list[b]
)
if directed:
hd[b, c] = distance_1
else:
distance_2 = compute_percent_hausdorff_distance(edges_gt, edges_pred, distance_metric, percentile)
distance_2 = compute_percent_hausdorff_distance(
edges_gt, edges_pred, distance_metric, percentile, spacing_list[b]
)
hd[b, c] = max(distance_1, distance_2)
return convert_data_type(hd, output_type=torch.Tensor, device=y_pred.device, dtype=torch.float)[0]


def compute_percent_hausdorff_distance(
edges_pred: np.ndarray, edges_gt: np.ndarray, distance_metric: str = "euclidean", percentile: float | None = None
edges_pred: np.ndarray,
edges_gt: np.ndarray,
distance_metric: str = "euclidean",
percentile: float | None = None,
spacing: int | float | np.ndarray | Sequence[int | float] | None = None,
) -> float:
"""
This function is used to compute the directed Hausdorff distance.
"""

surface_distance = get_surface_distance(edges_pred, edges_gt, distance_metric=distance_metric)
surface_distance = get_surface_distance(edges_pred, edges_gt, distance_metric=distance_metric, spacing=spacing)

# for both pred and gt do not have foreground
if surface_distance.shape == (0,):
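The `prepare_spacing` normalization relied on above can be sketched as follows (a simplified, hypothetical re-implementation of the documented semantics, not MONAI's actual helper; `normalize_spacing` is an assumed name):

```python
def normalize_spacing(spacing, batch_size, img_dim):
    """Return one spacing entry per batch element, following the docstring:
    scalar -> isotropic for all images; sequence of numbers -> per-axis,
    shared by the batch; sequence of sequences -> one entry per image."""
    if spacing is None:
        return [None] * batch_size  # unit spacing for every image
    if isinstance(spacing, (int, float)):
        return [(float(spacing),) * img_dim] * batch_size
    spacing = list(spacing)
    if all(isinstance(s, (int, float)) for s in spacing):
        if len(spacing) != img_dim:
            raise ValueError("spacing length must equal the image dimensions")
        return [tuple(float(s) for s in spacing)] * batch_size
    # sequence of sequences: outer length must match the batch size
    if len(spacing) != batch_size:
        raise ValueError("outer sequence length must equal the batch size")
    out = []
    for s in spacing:
        s = list(s)
        if len(s) == 1:  # length-1 inner sequence means isotropic
            out.append((float(s[0]),) * img_dim)
        elif len(s) == img_dim:
            out.append(tuple(float(v) for v in s))
        else:
            raise ValueError("inner length must be 1 or the image dimensions")
    return out

print(normalize_spacing(0.5, batch_size=2, img_dim=3))
```

Indexing the result with `spacing_list[b]` inside the batch loop, as the diff does, then yields the right per-image spacing regardless of which input form the caller used.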
4 changes: 3 additions & 1 deletion monai/metrics/loss_metric.py
@@ -11,6 +11,8 @@

from __future__ import annotations

from typing import Any

import torch
from torch.nn.modules.loss import _Loss

@@ -92,7 +94,7 @@ def aggregate(
f, not_nans = do_metric_reduction(data, reduction or self.reduction)
return (f, not_nans) if self.get_not_nans else f

def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor | None = None) -> TensorOrList:
def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor | None = None, **kwargs: Any) -> TensorOrList:
"""
Input `y_pred` is compared with ground truth `y`.
Both `y_pred` and `y` are expected to be a batch-first Tensor (BC[HWD]).
23 changes: 14 additions & 9 deletions monai/metrics/metric.py
@@ -49,7 +49,7 @@ class IterationMetric(Metric):
"""

def __call__(
self, y_pred: TensorOrList, y: TensorOrList | None = None
self, y_pred: TensorOrList, y: TensorOrList | None = None, **kwargs: Any
) -> torch.Tensor | Sequence[torch.Tensor | Sequence[torch.Tensor]]:
"""
Execute basic computation for model prediction `y_pred` and ground truth `y` (optional).
@@ -60,6 +60,7 @@ def __call__(
or a `batch-first` Tensor.
y: the ground truth to compute, must be a list of `channel-first` Tensor
or a `batch-first` Tensor.
kwargs: additional parameters for specific metric computation logic (e.g. ``spacing`` for SurfaceDistanceMetric, etc.).

Returns:
The computed metric values at the iteration level.
@@ -69,15 +70,15 @@ def __call__(
"""
# handling a list of channel-first data
if isinstance(y_pred, (list, tuple)) or isinstance(y, (list, tuple)):
return self._compute_list(y_pred, y)
return self._compute_list(y_pred, y, **kwargs)
# handling a single batch-first data
if isinstance(y_pred, torch.Tensor):
y_ = y.detach() if isinstance(y, torch.Tensor) else None
return self._compute_tensor(y_pred.detach(), y_)
return self._compute_tensor(y_pred.detach(), y_, **kwargs)
raise ValueError("y_pred or y must be a list/tuple of `channel-first` Tensors or a `batch-first` Tensor.")

def _compute_list(
self, y_pred: TensorOrList, y: TensorOrList | None = None
self, y_pred: TensorOrList, y: TensorOrList | None = None, **kwargs: Any
) -> torch.Tensor | list[torch.Tensor | Sequence[torch.Tensor]]:
"""
Execute the metric computation for `y_pred` and `y` in a list of "channel-first" tensors.
@@ -93,9 +94,12 @@ def _compute_list(
Note: subclass may enhance the operation to have multi-thread support.
"""
if y is not None:
ret = [self._compute_tensor(p.detach().unsqueeze(0), y_.detach().unsqueeze(0)) for p, y_ in zip(y_pred, y)]
ret = [
self._compute_tensor(p.detach().unsqueeze(0), y_.detach().unsqueeze(0), **kwargs)
for p, y_ in zip(y_pred, y)
]
else:
ret = [self._compute_tensor(p_.detach().unsqueeze(0), None) for p_ in y_pred]
ret = [self._compute_tensor(p_.detach().unsqueeze(0), None, **kwargs) for p_ in y_pred]

# concat the list of results (e.g. a batch of evaluation scores)
if isinstance(ret[0], torch.Tensor):
@@ -106,7 +110,7 @@ def _compute_list(
return ret

@abstractmethod
def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor | None = None) -> TensorOrList:
def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor | None = None, **kwargs: Any) -> TensorOrList:
"""
Computation logic for `y_pred` and `y` of an iteration, the data should be "batch-first" Tensors.
A subclass should implement its own computation logic.
@@ -318,7 +322,7 @@ class CumulativeIterationMetric(Cumulative, IterationMetric):
"""

def __call__(
self, y_pred: TensorOrList, y: TensorOrList | None = None
self, y_pred: TensorOrList, y: TensorOrList | None = None, **kwargs: Any
) -> torch.Tensor | Sequence[torch.Tensor | Sequence[torch.Tensor]]:
"""
Execute basic computation for model prediction and ground truth.
@@ -331,12 +335,13 @@ def __call__(
or a `batch-first` Tensor.
y: the ground truth to compute, must be a list of `channel-first` Tensor
or a `batch-first` Tensor.
kwargs: additional parameters for specific metric computation logic (e.g. ``spacing`` for SurfaceDistanceMetric, etc.).

Returns:
The computed metric values at the iteration level. The output shape should be
a `batch-first` tensor (BC[HWD]) or a list of `batch-first` tensors.
"""
ret = super().__call__(y_pred=y_pred, y=y)
ret = super().__call__(y_pred=y_pred, y=y, **kwargs)
if isinstance(ret, (tuple, list)):
self.extend(*ret)
else:
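The `**kwargs` plumbing introduced in `metric.py` can be illustrated with a minimal pure-Python sketch (hypothetical class names; the real classes also handle lists of channel-first tensors and cumulative buffers):

```python
from abc import ABC, abstractmethod
from typing import Any

class IterationMetricSketch(ABC):
    """Minimal sketch of the kwargs-forwarding pattern added in this commit:
    ``__call__`` passes extra keyword arguments straight through to the
    subclass hook ``_compute_tensor``."""

    def __call__(self, y_pred: Any, y: Any = None, **kwargs: Any) -> Any:
        # per-call options such as ``spacing`` reach the subclass unchanged
        return self._compute_tensor(y_pred, y, **kwargs)

    @abstractmethod
    def _compute_tensor(self, y_pred: Any, y: Any = None, **kwargs: Any) -> Any:
        raise NotImplementedError

class SpacingEcho(IterationMetricSketch):
    def _compute_tensor(self, y_pred: Any, y: Any = None, **kwargs: Any) -> Any:
        # a real metric forwards this on, e.g.
        # compute_hausdorff_distance(..., spacing=kwargs.get("spacing"))
        return kwargs.get("spacing")

print(SpacingEcho()([[0.0]], [[0.0]], spacing=(0.5, 0.5, 2.0)))
```

Because the base class forwards rather than interprets `kwargs`, metrics that do not need `spacing` (such as `LossMetric` above) only have to accept `**kwargs` in their signature, which is exactly what the diff adds.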
42 changes: 38 additions & 4 deletions monai/metrics/surface_dice.py
@@ -12,11 +12,19 @@
from __future__ import annotations

import warnings
from collections.abc import Sequence
from typing import Any

import numpy as np
import torch

from monai.metrics.utils import do_metric_reduction, get_mask_edges, get_surface_distance, ignore_background
from monai.metrics.utils import (
do_metric_reduction,
get_mask_edges,
get_surface_distance,
ignore_background,
prepare_spacing,
)
from monai.utils import MetricReduction, convert_data_type

from .metric import CumulativeIterationMetric
@@ -67,13 +75,23 @@ def __init__(
self.reduction = reduction
self.get_not_nans = get_not_nans

def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor: # type: ignore[override]
def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor, **kwargs: Any) -> torch.Tensor: # type: ignore[override]
r"""
Args:
y_pred: Predicted segmentation, typically segmentation model output.
It must be a one-hot encoded, batch-first tensor [B,C,H,W].
y: Reference segmentation.
It must be a one-hot encoded, batch-first tensor [B,C,H,W].
kwargs: additional parameters, e.g. ``spacing`` should be passed to correctly compute the metric.
``spacing``: spacing of pixel (or voxel). This parameter is relevant only
if ``distance_metric`` is set to ``"euclidean"``.
If a single number, isotropic spacing with that value is used for all images in the batch. If a sequence of numbers,
the length of the sequence must be equal to the image dimensions.
This spacing will be used for all images in the batch.
If a sequence of sequences, the length of the outer sequence must be equal to the batch size.
If inner sequence has length 1, isotropic spacing with that value is used for all images in the batch,
else the inner sequence length must be equal to the image dimensions. If ``None``, spacing of unity is used
for all images in batch. Defaults to ``None``.

Returns:
Pytorch Tensor of shape [B,C], containing the NSD values :math:`\operatorname {NSD}_{b,c}` for each batch
@@ -85,6 +103,7 @@ def _compute_tensor(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor
class_thresholds=self.class_thresholds,
include_background=self.include_background,
distance_metric=self.distance_metric,
spacing=kwargs.get("spacing"),
)

def aggregate(
@@ -117,6 +136,7 @@ def compute_surface_dice(
class_thresholds: list[float],
include_background: bool = False,
distance_metric: str = "euclidean",
spacing: int | float | np.ndarray | Sequence[int | float | np.ndarray | Sequence[int | float]] | None = None,
) -> torch.Tensor:
r"""
This function computes the (Normalized) Surface Dice (NSD) between the two tensors `y_pred` (referred to as
@@ -167,6 +187,13 @@ def compute_surface_dice(
distance_metric: The metric used to compute surface distances.
One of [``"euclidean"``, ``"chessboard"``, ``"taxicab"``].
Defaults to ``"euclidean"``.
spacing: spacing of pixel (or voxel). This parameter is relevant only if ``distance_metric`` is set to ``"euclidean"``.
If a single number, isotropic spacing with that value is used for all images in the batch. If a sequence of numbers,
the length of the sequence must be equal to the image dimensions. This spacing will be used for all images in the batch.
If a sequence of sequences, the length of the outer sequence must be equal to the batch size.
If inner sequence has length 1, isotropic spacing with that value is used for all images in the batch,
else the inner sequence length must be equal to the image dimensions. If ``None``, spacing of unity is used
for all images in batch. Defaults to ``None``.

Raises:
ValueError: If `y_pred` and/or `y` are not PyTorch tensors.
@@ -219,15 +246,22 @@ def compute_surface_dice(

nsd = np.empty((batch_size, n_class))

img_dim = y_pred.ndim - 2
spacing_list = prepare_spacing(spacing=spacing, batch_size=batch_size, img_dim=img_dim)

for b, c in np.ndindex(batch_size, n_class):
(edges_pred, edges_gt) = get_mask_edges(y_pred[b, c], y[b, c], crop=False)
if not np.any(edges_gt):
warnings.warn(f"the ground truth of class {c} is all 0, this may result in nan/inf distance.")
if not np.any(edges_pred):
warnings.warn(f"the prediction of class {c} is all 0, this may result in nan/inf distance.")

distances_pred_gt = get_surface_distance(edges_pred, edges_gt, distance_metric=distance_metric)
distances_gt_pred = get_surface_distance(edges_gt, edges_pred, distance_metric=distance_metric)
distances_pred_gt = get_surface_distance(
edges_pred, edges_gt, distance_metric=distance_metric, spacing=spacing_list[b]
)
distances_gt_pred = get_surface_distance(
edges_gt, edges_pred, distance_metric=distance_metric, spacing=spacing_list[b]
)

boundary_complete = len(distances_pred_gt) + len(distances_gt_pred)
boundary_correct = np.sum(distances_pred_gt <= class_thresholds[c]) + np.sum(