# Results

Our full training details (i.e., checkpoints and training logs) are available at this Google Drive link.

## (update) ACDC dataset

- The training set consists of 3 labelled and 67 unlabelled volumes; the testing set includes 40 volumes.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | PS-MT | 86.94 | 77.90 | 2.18 | 4.65 |
  | BCP | 87.59 | 78.67 | 0.67 | 1.90 |
  | UniMatch | 87.61 | 78.68 | 1.97 | 4.13 |
  | Ours (details) | 89.46 | 81.51 | 0.71 | 2.43 |
- The training set consists of 7 labelled and 63 unlabelled volumes; the testing set includes 40 volumes.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | PS-MT | 88.91 | 80.79 | 1.83 | 4.96 |
  | BCP | 88.84 | 80.62 | 1.17 | 3.98 |
  | BCPCaussl | 89.66 | 81.79 | 0.93 | 3.67 |
  | UniMatch | 89.92 | 81.97 | 0.98 | 3.75 |
  | Ours (details) | 90.44 | 83.01 | 0.42 | 1.41 |
- The training set consists of 14 labelled and 56 unlabelled volumes; the testing set includes 40 volumes.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | PS-MT | 89.94 | 81.90 | 1.15 | 4.01 |
  | BCP | 89.52 | 81.62 | 1.03 | 3.69 |
  | BCPCaussl | 89.99 | 82.34 | 0.88 | 3.60 |
  | UniMatch | 90.47 | 82.96 | 0.99 | 2.04 |
  | Ours (details) | 91.24 | 84.32 | 0.36 | 1.29 |
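For reference, the DICE and Jaccard columns above are the standard overlap metrics between a predicted binary mask and the ground truth. A minimal NumPy sketch (an illustration only, not the evaluation script used to produce these tables) for binary masks:

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice and Jaccard overlap scores for two binary masks, in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / union
    return 100.0 * dice, 100.0 * jaccard

# toy 2D example: a 2x2 prediction vs. a 2x3 ground-truth region
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:4] = 1
d, j = dice_jaccard(pred, gt)  # Dice = 2*4/(4+6) = 80.0, Jaccard = 4/6 ≈ 66.67
```

In the multi-class ACDC setting, these scores are typically computed per foreground class and averaged.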

## Left Atrium dataset

\* denotes results obtained with the best-checkpoint evaluation protocol.

- The training set consists of 16 labelled and 64 unlabelled scans; the testing set includes 20 scans.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | UAMT | 88.88 | 80.21 | 2.26 | 7.32 |
  | SASSNet | 89.54 | 81.24 | 2.20 | 8.24 |
  | LG-ER-MT | 89.62 | 81.31 | 2.06 | 7.16 |
  | DUWM | 89.65 | 81.35 | 2.03 | 7.04 |
  | DTC | 89.42 | 80.98 | 2.10 | 7.32 |
  | MC-Net | 90.34 | 82.48 | 1.77 | 6.00 |
  | Ours (training log) | 90.94 | 83.47 | 1.79 | 5.49 |
  | Ours\* (training log) | 91.51 | 84.40 | 1.79 | 5.63 |
- The training set consists of 8 labelled and 72 unlabelled scans; the testing set includes 20 scans.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | UAMT | 84.25 | 73.48 | 3.36 | 13.84 |
  | SASSNet | 87.32 | 77.72 | 2.55 | 9.62 |
  | LG-ER-MT | 85.54 | 75.12 | 3.77 | 13.29 |
  | DUWM | 85.91 | 75.75 | 3.31 | 12.67 |
  | DTC | 87.51 | 78.17 | 2.36 | 8.23 |
  | MC-Net | 87.71 | 78.31 | 2.18 | 9.36 |
  | Ours (training log) | 89.29 | 80.82 | 2.28 | 6.92 |
  | Ours\* (training log) | 89.86 | 81.70 | 2.01 | 6.81 |
- Parts of the tables are borrowed from here, and our train/val sample index follows UA-MT.
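The ASD and 95HD columns are surface-distance metrics, reported in voxels. A minimal scipy-based sketch of average surface distance and the 95th-percentile Hausdorff distance, assuming non-empty binary masks (an illustration only; it may differ in edge-case handling from the evaluation code used for these tables):

```python
import numpy as np
from scipy import ndimage

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from each surface voxel of mask `a` to the nearest
    surface voxel of mask `b`, in physical units given by `spacing`."""
    a, b = a.astype(bool), b.astype(bool)
    # surface = foreground voxels removed by a one-voxel erosion
    a_surf = a ^ ndimage.binary_erosion(a)
    b_surf = b ^ ndimage.binary_erosion(b)
    # EDT of the complement gives, per voxel, the distance to b's surface
    dt_b = ndimage.distance_transform_edt(~b_surf, sampling=spacing)
    return dt_b[a_surf]

def asd_hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Average Surface Distance and 95th-percentile Hausdorff Distance."""
    d_pg = surface_distances(pred, gt, spacing)
    d_gp = surface_distances(gt, pred, spacing)
    asd = np.concatenate([d_pg, d_gp]).mean()
    hd95 = max(np.percentile(d_pg, 95), np.percentile(d_gp, 95))
    return asd, hd95

# sanity check: identical masks give zero distance on both metrics
vol = np.zeros((10, 10, 10), dtype=int)
vol[2:8, 2:8, 2:8] = 1
asd, hd95 = asd_hd95(vol, vol)  # both 0.0
```

Reporting the metrics "in voxels" corresponds to `spacing=(1.0, 1.0, 1.0)`; pass the scan's physical voxel spacing to obtain distances in millimetres instead.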

## Pancreas dataset

\* denotes results obtained with the best-checkpoint evaluation protocol.

- (update) Following BCP, we evaluate our method under the best-checkpoint evaluation protocol; the results are shown below:

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | CoraNet | 79.67 | 66.69 | 1.89 | 7.59 |
  | BCP | 82.91 | 70.97 | 2.25 | 6.43 |
  | Ours\* (training log) | 83.36 | 71.70 | 1.74 | 7.34 |
- The training set consists of 12 labelled and 50 unlabelled scans; the testing set includes 20 scans.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | UAMT | 76.10 | 62.62 | 2.43 | 10.84 |
  | SASSNet | 76.39 | 63.17 | 1.42 | 11.06 |
  | URPC | 80.02 | 67.30 | 1.98 | 8.51 |
  | MC-Net+ | 80.59 | 68.08 | 1.74 | 6.47 |
  | Ours (training log) | 81.80 | 69.56 | 1.49 | 5.70 |
- The training set consists of 6 labelled and 56 unlabelled scans; the testing set includes 20 scans.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | UAMT | 66.44 | 52.02 | 3.03 | 17.04 |
  | SASSNet | 68.97 | 54.29 | 1.96 | 18.83 |
  | URPC | 73.53 | 59.44 | 7.85 | 22.57 |
  | MC-Net+ | 74.01 | 60.02 | 3.34 | 12.59 |
  | Ours (training log) | 79.22 | 66.04 | 2.57 | 8.46 |
- Our train/val sample index follows MC-Net+.

## BRaTS19 dataset

- The training set consists of 50 labelled and 200 unlabelled scans; the testing set includes 20 scans.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | UAMT | 85.32 | 75.93 | 1.98 | 8.68 |
  | SASSNet | 85.64 | 76.33 | 2.04 | 9.17 |
  | URPC | 85.38 | 76.14 | 1.87 | 8.36 |
  | MC-Net+ | 86.02 | 76.98 | 1.98 | 8.74 |
  | Ours (training log) | 86.69 | 77.69 | 1.93 | 8.04 |
- The training set consists of 25 labelled and 225 unlabelled scans; the testing set includes 20 scans.

  | Methods | DICE (%) | Jaccard (%) | ASD (voxel) | 95HD (voxel) |
  | --- | --- | --- | --- | --- |
  | UAMT | 84.64 | 74.76 | 2.36 | 10.47 |
  | SASSNet | 84.73 | 74.89 | 2.44 | 9.88 |
  | URPC | 84.53 | 74.60 | 2.55 | 9.79 |
  | MC-Net+ | 84.96 | 75.14 | 2.36 | 9.45 |
  | Ours (training log) | 85.71 | 76.39 | 2.27 | 9.20 |