FAQ
See our README for instructions.
The metrics used in the challenge are implemented in `petric.py`; look for `QualityMetrics`.
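For local testing, something along the following lines may work. This is purely a sketch: the `QualityMetrics` constructor arguments, the evaluation method, and all file paths shown here are assumptions (we only know the class lives in `petric.py`), so check the source for the real API.

```python
# Purely illustrative sketch -- the QualityMetrics constructor arguments and
# the evaluation call below are ASSUMPTIONS; check petric.py for the real API.
import sirf.STIR as STIR
from petric import QualityMetrics

reference = STIR.ImageData('reference_image.hv')     # hypothetical path
whole_mask = STIR.ImageData('VOIs/whole_object.hv')  # hypothetical path
background = STIR.ImageData('VOIs/background.hv')    # hypothetical path
my_recon = STIR.ImageData('my_reconstruction.hv')    # your algorithm's output

metrics = QualityMetrics(reference, whole_mask, background)  # assumed signature
print(metrics.evaluate(my_recon))                            # assumed method
```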
You have to use `sirf.STIR.AcquisitionModelUsingParallelproj`. The provided examples show several ways to implement the data-fit part of the algorithm, e.g. directly in terms of the acquisition model, via SIRF objective functions created with `sirf.STIR.make_Poisson_loglikelihood`, or even via the CIL `KullbackLeibler` class. Note that when using the SIRF objective function, its `set_up()` call includes computing the "sensitivity" (back-projection of 1), which is then accessible via `get_subset_sensitivity(0)`.
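As a concrete illustration, here is a minimal sketch of the objective-function route. The file names are placeholders; the SIRF calls themselves (`make_Poisson_loglikelihood`, `set_acquisition_model`, `set_up`, `get_subset_sensitivity`) are the ones referred to above.

```python
import sirf.STIR as STIR

# Placeholder file names -- substitute your own data.
acq_data = STIR.AcquisitionData('prompts.hs')
initial_image = STIR.ImageData('OSEM_image.hv')

# Acquisition model required by the challenge.
acq_model = STIR.AcquisitionModelUsingParallelproj()
acq_model.set_up(acq_data, initial_image)

# Poisson log-likelihood objective function built on top of it.
obj_fun = STIR.make_Poisson_loglikelihood(acq_data)
obj_fun.set_acquisition_model(acq_model)
obj_fun.set_up(initial_image)  # also computes the sensitivity (back-projection of 1)

sensitivity = obj_fun.get_subset_sensitivity(0)
gradient = obj_fun.gradient(initial_image)  # ascent direction of the log-likelihood
```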
Note that while SIRF acquisition models for PET allow you to forward/back project only a subset of the data, this is sub-optimal for `sirf.STIR.AcquisitionModelUsingParallelproj`. All examples are therefore written in terms of subsets of the data.
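One way to set this up is SIRF's contributed partitioner, which splits the acquisition data into subsets and builds one acquisition model and objective function per subset. The call below is a sketch based on the provided examples; verify the exact module path, arguments, and dataset attribute names against them.

```python
# Sketch, assuming SIRF's contributed partitioner and the dataset fields
# used in the challenge examples -- check those examples for the exact call.
from sirf.contrib.partitioner import partitioner

from petric import get_data  # challenge helper; see petric.py

data = get_data()
num_subsets = 7  # arbitrary example value
prompts_subsets, acq_models, obj_funs = partitioner.data_partition(
    data.acquired_data, data.additive_term, data.mult_factors,
    num_subsets, initial_image=data.OSEM_image, mode='staggered')
```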
In this Challenge, settings for "span" (or axial compression), view mashing, and TOF mashing are determined by the data as provided by the vendor. Of course, feel free to apply any data reductions yourself (`sirf.STIR.AcquisitionData.rebin` would be handy for that), but remember that the reference solution is computed from the original data.
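For instance, a hedged sketch of such a reduction (the keyword names follow STIR's SSRB conventions and may differ between SIRF versions; check `help(sirf.STIR.AcquisitionData.rebin)`):

```python
import sirf.STIR as STIR

acq_data = STIR.AcquisitionData('prompts.hs')  # placeholder file name
# Combine segments (axial compression) and pairs of views; verify the
# keyword names for your SIRF version via help(acq_data.rebin).
reduced = acq_data.rebin(num_segments_to_combine=3, num_views_to_combine=2)
```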
You have to use `sirf.STIR.CudaRelativeDifferencePrior` (if your test system doesn't have CUDA, you can use `sirf.STIR.RelativeDifferencePrior` instead, with equivalent results). The parameters of the prior are fixed (use `data = get_data()` and `data.prior`). You can check the details in `petric.py`.
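A minimal sketch of obtaining and querying the fixed prior; the set-up and gradient calls shown are our assumption of typical SIRF prior usage, and `petric.py` plus the examples are authoritative.

```python
from petric import get_data

data = get_data()   # challenge helper; see petric.py
prior = data.prior  # pre-configured (Cuda)RelativeDifferencePrior

# The penalisation factor is already set -- do not change it.
print(prior.get_penalisation_factor())

# Assumed usage: set up on the initial image, then query gradients.
prior.set_up(data.OSEM_image)
prior_grad = prior.get_gradient(data.OSEM_image)
```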
See https://github.com/SyneRBI/PETRIC/issues/85
These are fixed by the provided `OSEM_image.hv`. This would normally be the standard geometry as used by the vendor reconstruction. In some cases, we cropped the image a bit to avoid too many zeroes.
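To inspect that geometry, a short sketch using standard `sirf.STIR.ImageData` calls (the file name is the one provided with each dataset):

```python
import sirf.STIR as STIR

# The provided OSEM image fixes the target image geometry.
image = STIR.ImageData('OSEM_image.hv')
print(image.dimensions())   # number of voxels (z, y, x)
print(image.voxel_sizes())  # voxel sizes in mm

# A reconstruction should use an image with this geometry, e.g.
initial = image.get_uniform_copy(1.0)
```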