
PETRIC: PET Rapid Image reconstruction Challenge

Main organisers: Charalampos Tsoumpas (RU Groningen), Christoph Kolbitsch (PTB), Matthias Ehrhardt (U Bath), Kris Thielemans (UCL)

Technical support (CoSeC, UKRI STFC): Casper da Costa-Luis, Edoardo Pasca

Overall description

We are organising the PET Rapid Image Reconstruction Challenge (PETRIC), which will run over the summer of 2024 (mid-June to 30 September). Its primary aim is to stimulate research into the development of fast PET image reconstruction algorithms applicable to real-world data. Motivated by the success of the clinical translation of regularised image reconstruction in PET and other modalities, the challenge will focus on a smoothed version of the relative difference prior [1]. Participants will have access to a sizeable set of phantom data acquired on a range of clinical scanners. The main task for participants will be to reach a solution that is close to the converged image (e.g. in terms of mean ROI SUV) as quickly as possible, measured in terms of computation time. This task therefore requires a balance between algorithm design and implementation optimisation. An example solution that reaches the target image quality, but takes a long time to do so, will be provided at the beginning of the challenge. The PET raw data will be pre-processed to enable researchers to take part even if they have little experience in handling real-world data. Open-source software (SIRF [2], STIR [3], CIL [4]) will be provided to develop and test the algorithms. Implementations must use STIR (via SIRF) projectors (together with the provided multiplicative and additive projection data), so that reconstructed image quality and timing performance depend only on the reconstruction algorithm.

In the spirit of open science, all competitors who want to win a cash prize must make their GitHub repositories publicly available after the challenge, under an open-source license fulfilling the OSI definition. However, to foster inclusivity, we also welcome teams that do not make their code open access (see below for more details). Teams will be required to submit an abstract of at most 1000 words describing their algorithm.

Awards

The 3 highest-ranked teams will present their contributions at a workshop on Advanced Image Reconstruction Algorithms to be held in conjunction with the IEEE MIC 2024. Travel and subsistence will be covered for up to 2 participants per winning team. Online participation in the workshop will also be possible.

In addition, the 3 highest-ranked teams that provide an open-source solution will receive a monetary award for the whole team:

  1. £500
  2. £300
  3. £150

Data

At the start of the challenge, we will provide example data of a small set of phantoms for participants to use for the development of their methods (we aim for 3 different types of phantoms from 2-3 scanners, i.e. around 6 datasets). The data used for the actual competition and the scoring of the different algorithms will be acquired after the end of the challenge at different sites, including both Siemens and GE clinical scanners. This minimises bias towards a particular vendor or scanner model. In addition, it ensures that groups involved in organising the challenge can also participate, because nobody has access to the final ground-truth data during the challenge. We welcome sites to provide acquired raw data for the final testing (together with Regions of Interest) in order to enlarge the database; see the dedicated page for more information. All phantom data will be made publicly available after the challenge. Participants are free to test their algorithms on additional datasets, for instance those available at the Zenodo SyneRBI Community. Due to difficulties with organising data sharing agreements, we will not include patient data in the current challenge.

Timeline

  • Start: PETRIC starts mid-June (exact time TBC). Example code and datasets will be available from this point at https://github.com/SyneRBI/PETRIC.
  • Finish: PETRIC closes on 30 September 2024 23:59 (GMT). Only submissions that fulfil the requirements listed below will be accepted.
  • Last date to make repository open access to qualify for monetary award: 14 October 2024 23:59 (GMT).
  • Announcement of final ranking: 15 October 2024.
  • Workshop at the IEEE MIC 2024: 2 November 2024

Spirit of the competition

The spirit of the competition is that the algorithm is a general-purpose algorithm, capable of reconstructing clinical PET data. The organising committee reserves the right to disqualify any submission that attempts to violate that spirit.

Steering Committee

Steering Panel of CCP SyneRBI

Details on the example script, submission procedure and metrics

Example repository

We will provide an example repository on GitHub with an implementation of a (modified) version of the BSREM algorithm of [5]. This example can be used as a template for your own modifications and gives some indication of how fast your own algorithm is. A Docker container with all required code installed will also be made available.

Submission via GitHub

The code must be written in Python 3.11, using SIRF. We will provide a private repository for each team, which will be used to share your code with us and can also store data to be used by your algorithm.

Each team can submit a maximum of 3 reconstruction algorithms to the challenge; however, all algorithms must be in a single repository (e.g. on separate branches).

Your repository must contain a README.md file with at least the following sections:

  • Authors, institution, location.
  • Brief description of your algorithm and a link to the webpage of the competition.

The repository must contain a main.py file defining a class Submission which inherits from the CIL Algorithm class (as shown in the example script), so that we can apply your algorithm automatically to the given datasets.
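For illustration, a minimal sketch of such a main.py is given below. This is not the official template: the attributes of the data argument (acquired_data, acq_model, OSEM_image), the step size, and the plain gradient-ascent update are placeholder assumptions; consult the example repository for the actual interface.

```python
# main.py -- minimal illustrative sketch, NOT the official template.
# The `data` attributes used below (acquired_data, acq_model, OSEM_image)
# are hypothetical placeholders; see the example repository for the
# actual interface.
from cil.optimisation.algorithms import Algorithm
import sirf.STIR as pet


class Submission(Algorithm):
    def __init__(self, data, update_objective_interval=10, **kwargs):
        super().__init__(update_objective_interval=update_objective_interval,
                         **kwargs)
        # Build the SIRF objective (to be *maximised*): Poisson log-likelihood.
        self.obj_fun = pet.make_Poisson_loglikelihood(data.acquired_data)
        self.obj_fun.set_acquisition_model(data.acq_model)
        self.obj_fun.set_up(data.OSEM_image)
        self.x = data.OSEM_image.clone()  # start from the provided OSEM image
        self.configured = True            # required by CIL's Algorithm

    def update(self):
        # Toy update: plain gradient ascent with a fixed step size
        # (ignores non-negativity, subsets, preconditioning, the prior, ...).
        self.x = self.x + self.obj_fun.gradient(self.x) * 0.1

    def update_objective(self):
        # CIL tracks a value to be minimised, hence the sign flip.
        self.loss.append(-self.obj_fun.value(self.x))
```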

During the challenge, your code will be run via GitHub Actions to evaluate its performance according to our metrics. The results will be posted on a public leaderboard; this also allows you to troubleshoot your code. If you discover problems with our set-up, please create an "Issue" on https://github.com/SyneRBI/PETRIC/issues. For the final evaluation, we will consider only the latest commit on each branch of your repository.

Computational Resources

We will run all reconstruction algorithms on the STFC cloud computing services platform. Each server will have a GPU and run under Ubuntu 22.04 with CUDA 12.3. The exact configuration of the computing services will be made available to the participants. Data (e.g. weights of a pre-trained network) can be downloaded before running the reconstruction algorithm but will be limited to 1 GB.

Evaluation

Among all entries, we will determine the fastest algorithm to reach the target image quality. To this end, all algorithms will be run until their solution reaches a specified relative error on all selected metrics, or until their runtime exceeds 1 hour. For every metric, results from all teams will be ranked according to the wall-clock time taken to reach the threshold on our standard platform (ranking is from worst (1) to best (N), where best means "fastest to reach the threshold"). The overall rank for each algorithm is the sum of its ranks for the individual metrics on each dataset. The algorithm with the highest overall rank wins the challenge. Note that, due to the difficulties of wall-clock timing as well as the use of stochastic algorithms, each reconstruction will be run 10 times and the median wall-clock time will be used.
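To make the ranking procedure concrete, here is a toy sketch (purely illustrative; not the official scoring code) that aggregates per-metric median timings into an overall score:

```python
# Toy sketch of the rank aggregation described above (illustrative only).
from statistics import median


def overall_scores(run_times):
    """run_times[team][metric] -> list of wall-clock times (s) over 10 runs."""
    teams = list(run_times)
    metrics = next(iter(run_times.values())).keys()
    scores = dict.fromkeys(teams, 0)
    for metric in metrics:
        # Median over the repeated runs, then rank: slowest = 1, fastest = N.
        med = {team: median(run_times[team][metric]) for team in teams}
        ranked = sorted(teams, key=lambda t: med[t], reverse=True)
        for rank, team in enumerate(ranked, start=1):
            scores[team] += rank
    return scores  # the team with the highest total score wins


# Example with two teams and one metric:
print(overall_scores({"A": {"rmse": [100, 110, 90]},
                      "B": {"rmse": [200, 210, 190]}}))  # {'A': 2, 'B': 1}
```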

Optimisation Problem Details

The optimisation problem is a maximum a-posteriori estimate (MAP) using the smoothed relative difference prior (RDP), i.e.

$$\widehat{\mathbf{x}} = \underset{\mathbf{x}}{\operatorname{argmax}}\left\{ L(\mathbf{x}) - R(\mathbf{x}) \right\}$$

(Note that SIRF defines the objective function as above, but CIL algorithms minimise a function.) The log-likelihood (up to terms independent of the image) is

$$L\left( \mathbf{y}; \widehat{\mathbf{y}} \right) = \sum_{k}\left( y_{k}\log{\widehat{y}_{k}} - \widehat{y}_{k} \right)$$

with $\mathbf{y}$ a vector with the acquired data (histogrammed), and $\widehat{\mathbf{y}}$ the estimated data for a given image $\mathbf{x}$

$$\widehat{\mathbf{y}} = D\left( \mathbf{m} \right)\left( A\mathbf{x + a} \right)$$

with $\mathbf{m}$ multiplicative factors (corresponding to detection efficiencies and attenuation), $\mathbf{a}$ an “additive” background term (corresponding to an estimate of randoms and scatter, precorrected with $\mathbf{m}$), $A$ an approximation of the line integral operator [6], and $D(.)$ an operator converting a vector to a diagonal matrix.
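In SIRF terms, this forward model and the corresponding objective function can be assembled roughly as follows. This is a sketch under the assumption that acquired_data, mult_factors and additive_term are the provided sirf.STIR.AcquisitionData objects and initial_image is e.g. an OSEM image; the choice of projector and the epsilon value shown are illustrative, not the organisers' settings.

```python
import sirf.STIR as pet

# Sketch: assemble y_hat = D(m)(A x + a) and the MAP objective in SIRF,
# assuming `acquired_data`, `mult_factors`, `additive_term` (AcquisitionData)
# and `initial_image` (ImageData) are given; values shown are illustrative.
acq_model = pet.AcquisitionModelUsingRayTracingMatrix()   # A (line integrals)
acq_model.set_acquisition_sensitivity(
    pet.AcquisitionSensitivityModel(mult_factors))        # D(m)
acq_model.set_additive_term(additive_term)                # a
acq_model.set_up(acquired_data, initial_image)

obj_fun = pet.make_Poisson_loglikelihood(acquired_data)   # L(y; y_hat)
obj_fun.set_acquisition_model(acq_model)

prior = pet.RelativeDifferencePrior()                     # smoothed RDP R(x)
prior.set_gamma(2)           # edge-preservation parameter gamma
prior.set_epsilon(1e-9)      # placeholder; predetermined by the organisers
# prior.set_kappa(kappa_image)  # voxel-dependent weights, if provided
obj_fun.set_prior(prior)     # SIRF objective = L(x) - R(x), to be maximised
obj_fun.set_up(initial_image)
```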

Due to PET conventions, for some scanners some data bins will always be zero (corresponding to "virtual crystals"), in which case the corresponding elements of $\mathbf{m}$ will also be zero. The corresponding terms in the log-likelihood are defined as zero. All other elements of $\mathbf{a}$ are guaranteed to be strictly positive ($a_{i} > 0$).

The smoothed Relative Difference Prior is given by:

$$ R\left( \mathbf{x} \right) = \sum_{i = 1}^{N}{\sum_{j \in N_{i}}^{}{w_{ij}\sqrt{\kappa_{i}\kappa_{j}}\frac{\left( x_{i} - x_{j} \right)^{2}}{x_{i} + x_{j} + \gamma\left| x_{i} - x_{j} \right| + \epsilon}}} $$

with

  • $N$ the number of voxels,

  • $N_{i}$ the neighbourhood of voxel $i$ (here taken as the 8 nearest neighbours in the 3 directions),

  • $w_{ij}$ weight factors (here taken as “horizontal” voxel-size divided by Euclidean distance between the $i$ and $j$ voxels),

  • $\mathbf{\kappa}$ an image to give voxel-dependent weights (here predetermined as the row-sum of the Hessian of the log-likelihood at an initial OSEM reconstruction, see eq. 25 in [7]),

  • $\gamma$ an edge-preservation parameter (here taken as 2),

  • $\epsilon$ a small number to ensure smoothness (here predetermined from an initial OSEM reconstruction).
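As a concreteness check, here is a deliberately naive NumPy transcription of this prior for a 1-D image, with unit weights $w_{ij}$ and unit $\kappa$, and the two nearest neighbours as $N_{i}$ (purely illustrative; not the evaluation code):

```python
import numpy as np

# Naive 1-D evaluation of the smoothed RDP above, with w_ij = kappa = 1
# and N_i = the two nearest neighbours. Illustrative only.
def rdp_1d(x, gamma=2.0, eps=1e-6):
    x = np.asarray(x, dtype=float)
    r = 0.0
    for i in range(len(x)):
        for j in (i - 1, i + 1):                  # neighbourhood N_i
            if 0 <= j < len(x):
                d = x[i] - x[j]
                r += d * d / (x[i] + x[j] + gamma * abs(d) + eps)
    return r


print(rdp_1d([1.0, 2.0, 4.0]))  # approximately 1.2
```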

Metrics and thresholds

Every phantom dataset will be complemented by:

  • One or more ROIs for “objects of interest” (e.g. “tumours”, “spheres”, “white/grey matter”) $R_{i}$

  • One “background ROI” $B$

  • One “whole phantom ROI” (marginally eroded) $W$

In the metric definitions below, $r$ denotes the reference image (a converged BSREM reconstruction) and $c$ the candidate reconstructed image.

ROI calculations

  • $RMSE(W)$: voxel-wise root mean squared error of $c$ with respect to the reference $r$, computed in region $W$ (similarly for region $B$)

  • $MEAN\left( c; R_{i} \right)$: ROI mean for region $R_{i}$ computed for image $c$ (similarly for region $B$ and for the reference $r$)

All metrics measure how close the candidate image is to the reference, normalised by the mean of the background region of the reference image.

Please note that exact values of the thresholds are still to be determined.

  • $\frac{RMSE(W)}{MEAN(r; B)} < 0.1$

  • $\frac{RMSE(B)}{MEAN(r; B)} < 0.05$

  • $\frac{\left| MEAN\left( c; R_{i} \right) - MEAN\left( r; R_{i} \right) \right|}{MEAN(r; B)} < 0.01$
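A sketch of these checks in NumPy (illustrative only; the official evaluation code lives in the PETRIC repository, and the thresholds above may still change), where c and r are image arrays and the ROIs are boolean masks:

```python
import numpy as np

# Illustrative implementation of the three normalised metrics above;
# `c`/`r` are candidate/reference arrays, ROIs are boolean masks.
def rmse(c, r, mask):
    return np.sqrt(np.mean((c[mask] - r[mask]) ** 2))


def passes_thresholds(c, r, whole, background, rois):
    norm = r[background].mean()                    # MEAN(r; B)
    ok = rmse(c, r, whole) / norm < 0.1            # whole-phantom RMSE
    ok &= rmse(c, r, background) / norm < 0.05     # background RMSE
    for roi in rois:                               # objects of interest
        ok &= abs(c[roi].mean() - r[roi].mean()) / norm < 0.01
    return bool(ok)
```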

Summary

  • Prizes are available for the 3 top-ranked teams that make their code publicly available.
  • Submitted algorithms may use up to 1 GB of data included in the repository.
  • Submissions must be based on SIRF and written in Python.
  • Submissions are made via a private GitHub repository.
  • The evaluation will be performed as described above.