Christoph Kolbitsch edited this page Sep 10, 2024 · 30 revisions

PETRIC: PET Rapid Image reconstruction Challenge

Main organisers: Charalampos Tsoumpas (RU Groningen), Christoph Kolbitsch (PTB), Matthias Ehrhardt (U Bath), Kris Thielemans (UCL)

Technical support (CoSeC, UKRI STFC): Casper da Costa-Luis, Edoardo Pasca

Overall description

We are organising the PET Rapid Image Reconstruction Challenge (PETRIC), which will run over the summer of 2024 (mid June to 30 September). Its primary aim is to stimulate research into fast PET image reconstruction algorithms applicable to real-world data. Motivated by the successful clinical translation of regularised image reconstruction in PET and other modalities, the challenge focuses on a smoothed version of the relative difference prior [1]. Participants will have access to a sizeable set of phantom data acquired on a range of clinical scanners. The main task is to reach a solution that is close to the converged image (e.g. in terms of mean VOI SUV) as quickly as possible, measured in computation time. This task therefore requires a balance between algorithm design and implementation optimisation. An example solution that reaches the target converged image, but takes a long time to do so, will be provided at the beginning of the challenge. The PET raw data will be pre-processed so that researchers can take part even with little experience in handling real-world data. Open-source software (SIRF [2], STIR [3], CIL [4]) will be provided to develop and test the algorithms. Implementations must use a given SIRF projector (together with provided multiplicative and additive projection data) such that reconstructed image quality and timing performance depend only on the reconstruction algorithm.

In the spirit of open science, all competitors who want to win cash prizes must make their GitHub repositories publicly available after the challenge, under an open-source license fulfilling the OSI definition. However, to foster inclusivity, we also welcome participants who do not make their code open access (see below for more details). Teams will be required to submit an abstract of at most 1000 words describing their algorithm.

Awards

The 3 highest-ranked teams will present their contributions at a workshop on Advanced Image Reconstruction Algorithms to be held in conjunction with the IEEE MIC 2024. Travel and subsistence will be covered for up to 2 participants per winning team. Online participation in the workshop will be possible. More information can be found on the PETRIC Workshop and Award Ceremony page.

In addition, the 3 highest-ranked teams that provide an open-source solution will receive a monetary award for the whole team:

  1. £500
  2. £300
  3. £150

Data

At the start of the challenge, we will provide example data for a small set of phantoms for participants to use when developing their methods (we aim for 3 different types of phantoms from 2-3 scanners). The data used for the actual competition and scoring of the different algorithms will be acquired after the end of the challenge at different sites, including both Siemens and GE clinical scanners. This minimises bias towards a certain vendor or scanner model. It also ensures that groups involved in organising the challenge can participate, because during the challenge nobody has access to the final ground-truth data. We welcome sites providing acquired raw data for the final testing (together with Regions of Interest) in order to enlarge the database; see the dedicated page for more information. All phantom data will be made publicly available after the challenge. Participants are free to test their algorithms on additional datasets, for instance those available in the Zenodo SyneRBI Community. Due to the difficulty of organising data-sharing agreements, we will not include patient data in the current challenge.

Timeline

Spirit of the competition

The spirit of the competition is that submitted algorithms should be general-purpose, capable of reconstructing clinical PET data. The organising committee reserves the right to disqualify any algorithm that violates this spirit.

Steering Committee

Steering Panel of CCP SyneRBI

Detail on example script, submission procedure and metrics

Example repository

We will provide an example repository on GitHub with an implementation of a modified version of the BSREM algorithm of [5]. This example can be used as a template for your own modifications and gives some indication of how fast your own algorithm is. A Docker container with all the required software installed will also be made available.

Submission via GitHub

The code must be written in Python 3.11, using SIRF. We will provide a private repository for each team, which will be used to share the code with us and can also store data used by the algorithm.

Teams can submit a maximum of 3 reconstruction algorithms to the challenge; however, all algorithms must be in a single repository (e.g. on separate branches).

Your repository must contain a README.md file with at least the following sections:

  • Authors, institution, location.
  • Brief description of your algorithm and a link to the webpage of the competition.

The repository must contain a main.py file with a class Submission which inherits from the CIL Algorithm class (as provided in the example script), such that we can apply your algorithm automatically to the given datasets.
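To illustrate that structure, here is a minimal sketch of a `Submission` class. Since CIL may not be installed everywhere, a stand-in `Algorithm` base class replaces `cil.optimisation.algorithms.Algorithm` here; the `objective` callable, the constructor arguments and the plain projected-gradient-ascent update are all illustrative assumptions, not the required interface (the example script defines the actual interface).

```python
# Minimal sketch of the required structure, NOT a working submission.
# Stand-in base class: a real submission inherits from
# cil.optimisation.algorithms.Algorithm instead.
class Algorithm:
    def __init__(self, **kwargs):
        self.iteration = 0

    def update(self):  # one iteration; subclasses override this
        raise NotImplementedError

    def run(self, iterations):
        for _ in range(iterations):
            self.update()
            self.iteration += 1


class Submission(Algorithm):
    """Illustrative projected gradient ascent on the MAP objective.

    `objective` is an assumed callable returning (value, gradient);
    see the example script for the real interface.
    """

    def __init__(self, data, objective, initial_image, step_size=0.1, **kwargs):
        super().__init__(**kwargs)
        self.x = list(initial_image)   # current image estimate
        self.objective = objective
        self.step_size = step_size

    def update(self):
        _, grad = self.objective(self.x)
        # ascent step followed by projection onto the nonnegative set
        self.x = [max(xi + self.step_size * gi, 0.0)
                  for xi, gi in zip(self.x, grad)]
```

Here `run` simply drives the iterations; in CIL, `Algorithm.run` additionally handles callbacks and stopping criteria, which is what allows us to time and evaluate your algorithm automatically.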

During the challenge, your code will be used via GitHub Actions to evaluate its performance according to our metrics. These results will be posted on the public leaderboard. This also allows you to troubleshoot your code. If you discover problems with our set-up, please create an "Issue" on https://github.com/SyneRBI/PETRIC/issues. For the final evaluation, we will consider only the latest commit on each branch of your repository.

Computational Resources

We will run all reconstruction algorithms on the STFC cloud computing services platform. Each server will have an AMD EPYC 7452 32-Core CPU and NVIDIA A100-40GB GPU running under Ubuntu 22.04 with CUDA 11.8. Data (e.g. weights of a pre-trained network) can be downloaded before running the reconstruction algorithm but will be limited to 1 GB.

Evaluation

Among all entries we will determine the fastest algorithm to reach the target image quality. To this end, all algorithms will be run until their solution reaches a specified relative error on all selected metrics, or until their runtime exceeds 1 hour. For every metric, results from all teams will be ranked according to the wall-clock time it took them to reach the threshold on our standard platform (ranking is from worst (1) to best (N), where best means “fastest to reach the threshold”). The overall rank for each algorithm is the sum of the ranks for the individual metrics on each dataset. The algorithm with the highest overall rank wins the challenge. Note that, due to the difficulties of wall-clock timing as well as the use of stochastic algorithms, each reconstruction will be run 10 times and the median wall-clock time will be used.
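The ranking scheme described above can be sketched as follows (an illustrative sketch only; the data structure and function name are assumptions, not the official evaluation code):

```python
from statistics import median

def overall_ranks(times):
    """times: {team: {metric: [wall-clock seconds, one per repeat]}}.

    Per metric, teams are ranked on the median time over repeats,
    from 1 (slowest) to N (fastest); the overall rank is the sum of
    the per-metric ranks, and the highest total wins.
    """
    metrics = next(iter(times.values())).keys()
    totals = {team: 0 for team in times}
    for m in metrics:
        med = {team: median(runs[m]) for team, runs in times.items()}
        # sort by median time, slowest first, so slowest gets rank 1
        for rank, team in enumerate(sorted(med, key=med.get, reverse=True),
                                    start=1):
            totals[team] += rank
    return totals
```

For example, a team that is fastest on both of two metrics among two teams would score 2 + 2 = 4, beating the other team's 1 + 1 = 2.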

Optimisation Problem Details

The optimisation problem is a maximum a-posteriori (MAP) estimate using the smoothed relative difference prior (RDP), i.e.

$$\widehat{x} = \underset{x \in C}{\arg\max}{\ L(x) - R(x)}$$

where the constraint set $C$ is defined by $x \in C$ iff $x_i \geq 0$ for $i \in M$ and $x_i = 0$ otherwise, for a provided mask $M$. Note that SIRF defines the objective function as above, but CIL algorithms minimise a function. The log-likelihood (up to terms independent of the image) is

$$L( \mathbf{y}; \widehat{\mathbf{y}} ) = \sum_{k}{y_{k}\log{\widehat{y}_k}-\widehat{y}_k} $$

with $\mathbf{y}$ a vector with the acquired data (histogrammed), and $\widehat{\mathbf{y}}$ the estimated data for a given image $\mathbf{x}$

$$\widehat{\mathbf{y}} = D\left( \mathbf{m} \right)\left( A\mathbf{x + a} \right)$$

with $\mathbf{m}$ multiplicative factors (corresponding to detection efficiencies and attenuation), $\mathbf{a}$ an “additive” background term (corresponding to an estimate of randoms and scatter, precorrected with $\mathbf{m}$), $A$ an approximation of the line integral operator [6], and $D(.)$ an operator converting a vector to a diagonal matrix.

Due to PET conventions, for some scanners, some data bins will always be zero (corresponding to “virtual crystals”), in which case corresponding elements in $\mathbf{m}$ will also be zero. The corresponding term in the log-likelihood is defined as zero. Other elements in $\mathbf{a}$ are guaranteed to be (strictly) positive ($a_{i} > 0$).

The smoothed Relative Difference Prior is given by:

$$ R\left( \mathbf{x} \right) = \frac{1}{2}\sum_{i = 1}^{N}{\sum_{j \in N_{i}}^{}{w_{ij}\kappa_{i}\kappa_{j}\frac{\left( x_{i} - x_{j} \right)^{2}}{x_{i} + x_{j} + \gamma\left| x_{i} - x_{j} \right| + \epsilon}}} $$

with

  • $N$ the number of voxels,

  • $N_{i}$ the neighbourhood of voxel $i$ (here taken as the 8 nearest neighbours in the 3 directions),

  • $w_{ij}$ weight factors (here taken as “horizontal” voxel-size divided by Euclidean distance between the $i$ and $j$ voxels),

  • $\mathbf{\kappa}$ an image to give voxel-dependent weights (here predetermined as the row-sum of the Hessian of the log-likelihood at an initial OSEM reconstruction, see eq. 25 in [7])

  • $\gamma$ an edge-preservation parameter (here taken as 2),

  • $\epsilon$ a small number to ensure smoothness (here predetermined from an initial OSEM reconstruction)
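For intuition, the smoothed RDP can be sketched in 1-D, where the neighbourhood of a voxel is just its left and right neighbours. This is a toy sketch with a scalar weight $w$; the challenge uses the 3-D neighbourhoods, distance-based $w_{ij}$ and precomputed $\kappa$ described above.

```python
def rdp_1d(x, kappa, gamma=2.0, eps=1e-9, w=1.0):
    """Smoothed relative difference prior in 1-D.

    Each (i, j) pair is visited twice (once from each side) and the
    sum is halved, matching the 1/2 factor in R(x).
    """
    r = 0.0
    n = len(x)
    for i in range(n):
        for j in (i - 1, i + 1):           # 1-D neighbourhood of voxel i
            if 0 <= j < n:
                d = x[i] - x[j]
                r += (w * kappa[i] * kappa[j] * d * d
                      / (x[i] + x[j] + gamma * abs(d) + eps))
    return 0.5 * r
```

For $x = (1, 3)$, unit $\kappa$, $\gamma = 2$ and $\epsilon = 0$, each ordered pair contributes $4 / (1 + 3 + 4) = 0.5$, so $R(x) = 0.5$.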

Metrics and thresholds

Each dataset contains:

  • $r$: (converged BSREM) reference image
  • $W$: (marginally eroded) whole object VOI (volume of interest)
  • $B$: background VOI
  • $R_i$: one or more VOIs (“tumours”, “spheres”, “white/grey matter”, etc.)

metric calculations (thresholds updated 25 August):

| leaderboard metric name | calculation & threshold |
| --- | --- |
| whole object RMSE | $\frac{RMSE(\theta; W)}{MEAN(r; B)} < 0.01$ |
| background RMSE | $\frac{RMSE(\theta; B)}{MEAN(r; B)} < 0.01$ |
| VOI AEM (absolute error of the mean) | $\frac{\left\|MEAN(\theta; R_i) - MEAN(r; R_i)\right\|}{MEAN(r; B)} < 0.005$ |

where:

  • $\theta$: your candidate reconstructed image
  • $RMSE(\cdot; W)$: voxel-wise root mean squared error computed in region $W$ with respect to the reference $r$
  • $MEAN(\cdot; R_i)$: mean for region $R_i$

Reference algorithm

As our reference algorithm, we use a modified version of BSREM (block-sequential regularised expectation maximisation). This converges to the solution of the MAP reconstruction problem but can unfortunately require a large number of iterations. An example demonstrating PET image reconstruction with BSREM using SIRF can be found in this notebook.

Example submission

To help you get started, we have already created an example submission. It will most likely not win the challenge, but it should give you an idea of how to implement your own algorithm within the PETRIC framework. Check our page with more information on the available software.

Support

Summary

  • Prizes are available for the 3 highest-ranked teams who make their code publicly available.
  • Submitted algorithms may use up to 1 GB of data included in the repository.
  • Submissions must be based on SIRF and written in Python.
  • Submissions are made via a private GitHub repository.
  • The evaluation will be performed as described above.

References

[1] Nuyts, J., Bequé, D., Dupont, P., & Mortelmans, L. (2002). A Concave Prior Penalizing Relative Differences for Maximum-a-Posteriori Reconstruction in Emission Tomography. IEEE Transactions on Nuclear Science, 49(1), 56–60.

[2] Evgueni Ovtchinnikov, Richard Brown, Christoph Kolbitsch, Edoardo Pasca, Casper da Costa-Luis, Ashley G. Gillman, Benjamin A. Thomas, Nikos Efthymiou, Johannes Mayer, Palak Wadhwa, Matthias J. Ehrhardt, Sam Ellis, Jakob S. Jørgensen, Julian Matthews, Claudia Prieto, Andrew J. Reader, Charalampos Tsoumpas, Martin Turner, David Atkinson, Kris Thielemans (2020) SIRF: Synergistic Image Reconstruction Framework, Computer Physics Communications 249, doi: https://doi.org/10.1016/j.cpc.2019.107087. https://github.com/SyneRBI/SIRF/

[3] Thielemans, K., Tsoumpas, C., Mustafovic, S., Beisel, T., Aguiar, P., Dikaios, N., Jacobson, M.W., 2012. STIR: software for tomographic image reconstruction release 2. Physics in Medicine and Biology 57, 867--883. https://doi.org/10.1088/0031-9155/57/4/867 https://github.com/UCL/STIR/

[4] Jørgensen, J.S., Ametova, E., Burca, G., Fardell, G., Papoutsellis, E., Pasca, E., Thielemans, K., Turner, M., Warr, R., Lionheart, W.R.B., Withers, P.J., 2021. Core Imaging Library - Part I: a versatile Python framework for tomographic imaging. Phil Trans Roy Soc A 379, 20200192. https://doi.org/10.1098/rsta.2020.0192 https://github.com/TomographicImaging/CIL

[5] S. Ahn and J. A. Fessler, ‘Globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms’, IEEE Transactions on Medical Imaging, vol. 22, no. 5, pp. 613–626, May 2003, doi: 10.1109/tmi.2003.812251.

[6] Schramm, G., Thielemans, K., 2024. PARALLELPROJ—an open-source framework for fast calculation of projections in tomography. Front. Nucl. Med. 3. https://doi.org/10.3389/fnume.2023.1324562

[7] Tsai, Y.-J., Schramm, G., Ahn, S., Bousse, A., Arridge, S., Nuyts, J., Hutton, B.F., Stearns, C.W., Thielemans, K., 2020. Benefits of Using a Spatially-Variant Penalty Strength With Anatomical Priors in PET Reconstruction. IEEE Transactions on Medical Imaging 39, 11–22. https://doi.org/10.1109/TMI.2019.2913889