![image]({{ IMGURL }}/images/projects/cardiac-oct.png)

## Solution
Our group has already developed a prototype algorithm for the segmentation of intracoronary OCT images (Figure 1). Within the current project, we aim to improve this prototype in terms of the segmentation of frequently occurring structures (e.g. lumen, vascular layers, lipid, calcium), to improve the detection of rarer structures, and to identify markers of plaque vulnerability. The main scientific challenges lie in the development of efficient annotation strategies for individual frames (per-frame analysis) as well as for full pullbacks (multi-frame analysis). Further AI-related challenges include the development of methods to reject unreliable results (outlier classes) and to perform well in a low-data regime.
A semantic segmentation algorithm based on the no-new U-Net (nnU-Net) architecture was developed to segment 12 different targets (lumen, guidewire, intima, lipid, calcium, media, catheter, sidebranch, red and white thrombus, dissection and plaque rupture) in intracoronary OCT scans. A total of four models were trained: one on 2D OCT frames, and the other three on pseudo 3D input, in which the k frames before and after the frame with the ground-truth annotation are also included as input; these three models were trained with k = 1, k = 2 and k = 3. As preprocessing steps, a circular mask and a resizing interpolation are applied to create a curated dataset suitable for training. Figure 1 shows the preprocessing and training framework.
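
As a rough illustration, the pseudo 3D input can be assembled by stacking the neighbouring frames of an annotated frame as extra channels. The sketch below is a simplified assumption of how this could look; the helper name and array shapes are hypothetical and not taken from the project code:

```python
import numpy as np

def build_pseudo_3d_input(pullback: np.ndarray, frame_idx: int, k: int) -> np.ndarray:
    """Stack the k frames before and after an annotated frame.

    pullback: one OCT pullback with shape (n_frames, H, W).
    Returns an array of shape (2 * k + 1, H, W); frame indices are clipped at
    the pullback boundaries, so edge frames are repeated when necessary.
    """
    n_frames = pullback.shape[0]
    indices = np.clip(np.arange(frame_idx - k, frame_idx + k + 1), 0, n_frames - 1)
    return pullback[indices]

# A k = 3 model therefore sees 7 consecutive frames centred on the annotated one.
dummy_pullback = np.zeros((540, 128, 128), dtype=np.float32)  # dummy data, not real OCT
x = build_pseudo_3d_input(dummy_pullback, frame_idx=120, k=3)
print(x.shape)  # (7, 128, 128)
```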

To extend the current ability of OCT-integrated AI systems to detect the luminal border, calcium and the external elastic membrane, we aim to develop a deep learning semantic segmentation algorithm for automated image annotation, targeting automated segmentation of the lumen, intima, calcium, lipid, media and measures of plaque vulnerability and related complications. The segmentation algorithm will be based on a combination of generative models and semi-supervised learning to address performance in low-data and low-annotation regimes. To identify unreliable segmentations, uncertainty-aware predictions will be achieved by model ensembling at different granularity levels and compared to a recently proposed efficient approach based on deep deterministic uncertainty.
<p>
<img src="/images/projects/nnunet_framework_cardiac_oct.png" alt>
<span style="font-style: normal;">
<strong>Figure 1.</strong> Preprocessing and training frameworks, for the 2D case and the k = 3 case of the pseudo 3D model.
</span>
</p>


After model training, a post-processing framework based on automatic lipid and calcium measurements is applied to the predicted segmentations. This automated analysis measures the Fibrous Cap Thickness (FCT) and lipid arc for lipid, assessing for Thin-Cap Fibroatheroma (TCFA) detection, and the calcium arc, thickness and depth for calcium. A size threshold for lipid and calcium is estimated by computing ROC curves and finding the minimum number of pixels that a lipid or calcium region must contain in order to be considered as such. A final analysis addresses the "black-box" problem that many DL models suffer from, by retrieving the feature maps after each convolutional layer for Explainable AI (XAI). For uncertainty estimation, the reliability curves and the total Expected Calibration Error (ECE), including the ECE for lipid and calcium, are obtained for the test set. Further uncertainty analysis on the final probability maps is performed in order to validate these maps as a measure of uncertainty, focusing on lipid and calcium regions. The entropy per pixel is also obtained in order to compute the correlation between the entropy in lipid and calcium regions and the DICE score. Figure 2 below shows this post-processing framework.
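
As an illustration of the kind of measurement involved, a simplified sketch of a lipid-arc computation from a predicted segmentation is given below. The label value, the image-centre assumption and the helper name are assumptions for illustration and do not reflect the project's actual post-processing code:

```python
import numpy as np

LIPID_LABEL = 4  # assumed label id; the real mapping depends on the dataset

def lipid_arc_degrees(seg: np.ndarray, n_bins: int = 360) -> float:
    """Approximate the lipid arc as the angular extent of lipid pixels
    around the image centre (taken here as the catheter position)."""
    h, w = seg.shape
    ys, xs = np.nonzero(seg == LIPID_LABEL)
    if ys.size == 0:
        return 0.0
    angles = np.degrees(np.arctan2(ys - h / 2, xs - w / 2)) % 360
    hist, _ = np.histogram(angles, bins=n_bins, range=(0, 360))
    return float((hist > 0).sum()) * (360.0 / n_bins)

seg = np.zeros((704, 704), dtype=np.uint8)
seg[352:, 352:] = LIPID_LABEL      # dummy quarter-circle of "lipid"
print(lipid_arc_degrees(seg))      # ≈ 90 degrees
```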

<p>
<img src="/images/projects/oct_post_proc_framework.png" alt>
<span style="font-style: normal;">
<strong>Figure 2.</strong> Post-processing framework, from the automated measurements on lipid and calcium to XAI and uncertainty estimation analysis.
</span>
</p>

## Data
We will use OCT data from the PECTUS-obs study (https://pubmed.ncbi.nlm.nih.gov/34233996/) to develop and internally validate the algorithm. In total, this database includes 498 intracoronary OCT pullbacks obtained from patients with an acute myocardial infarction and multi-vessel disease. Each pullback consists of 540 frames, adding up to a total of 268,920 individual frames. Manual annotation is performed on every 40th frame and supplemented with frames on which “schoolbook examples” of specific structures are visible; 108 of the 498 pullbacks are currently annotated. In total, 2028 OCT frames from this dataset were manually segmented by an expert annotator. A random 9:1 train/test split was performed, yielding 1810 frames for training and 218 for testing. External validation will be performed on several other prospective databases, including high-risk and low-risk populations.

## Results
Every trained nnU-Net model achieved similar DICE, sensitivity, specificity, PPV, NPV and Cohen's Kappa values; the k = 3 model was selected as the final model. The model performs outstandingly well on healthy frames, with a mean DICE close to 1. It achieved a DICE of 0.586 for lipid segmentation, with a Kappa of 0.773, and a DICE of 0.492 for calcium, with a Kappa of 0.749, so there is still room for improvement. As for the rarer structures, sidebranch and red thrombus are moderately well segmented, with a DICE of 0.501 for sidebranch and 0.609 for red thrombus. For white thrombus and plaque rupture the performance is worse, with these regions detected at a level roughly equivalent to random chance. Finally, the performance for dissection is unknown, since there are no dissections in the test set.
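
For reference, the per-class DICE values reported above can be computed with a straightforward overlap measure; the sketch below is a generic illustration rather than the project's evaluation code:

```python
import numpy as np

def dice_per_class(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice coefficient for one label; returns NaN when the label is absent
    from both masks (e.g. dissections, which do not occur in the test set)."""
    p = pred == label
    g = gt == label
    denom = p.sum() + g.sum()
    if denom == 0:
        return float("nan")
    return float(2.0 * np.logical_and(p, g).sum() / denom)
```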

As for the automated assessment of lipidic and calcified regions, an Intra-Class Correlation (ICC(2,1)) of 0.736 and 0.768 was obtained for the lipid arc and FCT, respectively, and an ICC(2,1) of 0.791, 0.849 and 0.633 for calcium depth, arc and thickness. In addition, a DICE coefficient based on the overlap of the lipid and calcium arcs with the ground truth was estimated: 0.705 for the lipid arc and 0.592 for the calcium arc. Finally, an optimal size threshold was obtained at 1700 pixels for lipid and 100 pixels for calcium; lipid or calcium regions smaller than these respective thresholds are simply not considered. This improved the specificity for both lipid and calcium, going from 0.785 to 0.845 for lipid and from 0.921 to 0.933 for calcium. The sensitivity remained unchanged at 1 for lipid, while it decreased for calcium from 0.852 to 0.833.
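
The size thresholds are derived from ROC curves; a minimal sketch of how such a threshold could be picked is shown below, assuming Youden's J statistic as the operating-point criterion (the original text does not specify the criterion, and the variable names are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_size_threshold(region_sizes: np.ndarray, has_region: np.ndarray) -> float:
    """Pick the pixel-count threshold that maximises Youden's J statistic.

    region_sizes: predicted lipid (or calcium) area per frame, in pixels.
    has_region:   1 if the ground truth contains that tissue in the frame, else 0.
    """
    fpr, tpr, thresholds = roc_curve(has_region, region_sizes)
    return float(thresholds[np.argmax(tpr - fpr)])

sizes = np.array([0, 150, 400, 2200, 90, 3000, 1200])   # toy predicted areas (px)
labels = np.array([0, 0, 0, 1, 0, 1, 1])                 # toy ground-truth presence
print(optimal_size_threshold(sizes, labels))              # 1200.0 for this toy data
```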

As for the uncertainty estimation results, the model was found to be excellently calibrated, with a total ECE of 0.0197. However, the ECE increases slightly to 0.109 for calcium, and more substantially for lipid, with an ECE of 0.3035. Analysis of the reliability diagrams shows that the model is biased towards over-predicting lipid. Regarding the average confidence values, the model is less confident on falsely segmented lipid and calcium (false positives), with a mean confidence of 0.684 and 0.578 for lipid and calcium, respectively. However, the model was more confident when missing calcium (false negatives, mean confidence of 0.869), while for lipid there is no result since no lipid is missed. For true positives, the confidence was higher overall: 0.903 for lipid and 0.837 for calcium. Finally, the obtained entropy was very close to 0 for both lipid and calcium, although its correlation with the DICE score was stronger for calcium (-0.803) than for lipid (-0.483).
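
The ECE values quoted here summarise how well pixel confidences match actual accuracy. A generic sketch of the computation is given below, assuming the per-pixel confidence is the maximum softmax probability; the project's exact binning and aggregation may differ:

```python
import numpy as np

def expected_calibration_error(conf: np.ndarray, correct: np.ndarray, n_bins: int = 10) -> float:
    """ECE over pixel-level predictions.

    conf:    per-pixel maximum softmax probability, flattened.
    correct: per-pixel boolean, True where the predicted label matches the ground truth.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # weight each bin by its fraction of pixels, times |confidence - accuracy|
            ece += in_bin.mean() * abs(conf[in_bin].mean() - correct[in_bin].mean())
    return ece
```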

## Conclusion
In this project, a semantic segmentation model based on nnU-Net was developed to segment intracoronary OCT scans. By leveraging the frames contiguous to the ground truth, a pseudo 3D model was also developed, using either 1, 2 or 3 frames before and after the annotated frame. While the 2D and pseudo 3D approaches perform very similarly, the pseudo 3D model with k = 3 frames before and after provided the best overall results. The post-processing algorithm for the automated assessment of lipid and calcium can be used to perform fast and accurate measurements, although further improvements are needed to detect TCFAs reliably. The model's transparency could be further analysed and correlated with its output, for example using additional techniques such as Grad-CAM. Finally, the model showed great reliability in its confidence values, meaning that the confidence maps could potentially be used in clinical practice as uncertainty maps, showing in which regions of the OCT frame the model is less confident. The model could be improved by using a 3D network, which would require more annotations, and by including more cases of rarer structures (white thrombus, dissection and plaque rupture) to improve its ability to detect these regions. Other improvements would be to refine the training set annotations and include other common regions such as layered plaque, to include OCT scans from different scanners or annotators, or to train the algorithm on the original grayscale version of the OCT scans.

You can try the pseudo 3D (k = 3) algorithm on Grand Challenge: <a href="https://grand-challenge.org/algorithms/cardiac-oct/" class="btn btn-primary btn-lg my-3">Try out the algorithm</a>

The code for this project can be found in the following [GitHub repository](https://github.com/Gonzalo2408/CardiacOCT-project).

