
Commit

Merge branch 'master' into infinite_iterations
MargaretDuff authored Nov 25, 2024
2 parents f872ab5 + 411c679 commit bec6a03
Showing 34 changed files with 1,451 additions and 608 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -155,7 +155,7 @@ jobs:
echo '${{ secrets.STFC_SSH_KEY }}' > ./key
chmod 600 ./key
ssh -o StrictHostKeyChecking=no -i ./key ${{ secrets.STFC_SSH_HOST }} \
'conda index --bz2 --zst --run-exports --channeldata --rss -n ccpi ${{ secrets.STFC_SSH_CONDA_DIR }}'
'bash -lic "conda index --bz2 --zst --run-exports --channeldata --rss -n ccpi ${{ secrets.STFC_SSH_CONDA_DIR }}"'
docs:
defaults: {run: {shell: 'bash -el {0}', working-directory: docs}}
runs-on: ubuntu-latest
18 changes: 17 additions & 1 deletion CHANGELOG.md
@@ -1,4 +1,17 @@
* 24.x.x
- Bug fixes:
- Fixed a bug where the 'median' and 'mean' methods in `Masker` averaged over the wrong axes.
- `SPDHG` `gamma` parameter is now applied correctly, so that the product of the dual and primal step sizes remains constant as `gamma` varies (#1644)
- Enhancements:
- Removed multiple exits from numba implementation of KullbackLeibler divergence (#1901)
- Updated the `SPDHG` algorithm to take a stochastic `Sampler` and to more easily set step sizes (#1644)
- Dependencies:
- Added scikit-image to the CIL-Demos conda install command, as it is needed for the new Callbacks notebook.
- Changes that break backwards compatibility:
- Deprecated `norms` and `prob` in the `SPDHG` algorithm to be set in the `BlockOperator` and `Sampler` respectively (#1644)
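The `SPDHG` `gamma` fix above hinges on how the dual and primal step sizes are coupled. A minimal sketch of one common PDHG-type convention (a hypothetical helper, not CIL's exact internals), in which the product of the two step sizes stays constant as `gamma` varies:

```python
def pdhg_step_sizes(operator_norm, gamma=1.0, rho=0.99):
    """Return (sigma, tau) dual/primal step sizes for a given gamma.

    Sketch of a common convention: sigma = gamma * rho / L and
    tau = rho / (gamma * L), so sigma * tau = (rho / L)**2 is
    independent of gamma, while gamma trades one off against the other.
    """
    sigma = gamma * rho / operator_norm      # dual step size
    tau = rho / (gamma * operator_norm)      # primal step size
    return sigma, tau

s1, t1 = pdhg_step_sizes(2.0, gamma=1.0)
s2, t2 = pdhg_step_sizes(2.0, gamma=10.0)
# increasing gamma enlarges the dual step and shrinks the primal step,
# but their product is unchanged
assert abs(s1 * t1 - s2 * t2) < 1e-12
```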


* 24.2.0
- New Features:
- Added SVRG and LSVRG stochastic functions (#1625)
- Added SAG and SAGA stochastic functions (#1624)
@@ -17,7 +30,7 @@
- Internal refactor: Separate framework into multiple files (#1692)
- Allow the SIRT algorithm to take `initial=None` (#1906)
- Add checks on equality method of `AcquisitionData` and `ImageData` for equality of data type and geometry (#1919)
- Add check on equality method of `AcquisitionGeometry` for equality of dimension labels (#1919)
- Add check on equality method of `AcquisitionGeometry` for equality of dimension labels (#1919)
- Testing:
- New unit tests for operators and functions to check for in place errors and the behaviour of `out` (#1805)
- Updates in SPDHG vs PDHG unit test to reduce test time and adjustments to parameters (#1898)
@@ -28,8 +41,10 @@
- Make Binner accept accelerated=False (#1887)
- Added checks on memory allocations within `FiniteDifferenceLibrary.cpp` and verified the status of the return in `GradientOperator` (#1929)
- Build release version of `cilacc.dll` for Windows. Previously was defaulting to the debug build (#1928)
- Armijo step size rule now by default initialises the search for a step size from the previously calculated step size (#1934)
- Changes that break backwards compatibility:
- CGLS will no longer automatically stop iterations once a default tolerance is reached. The option to pass `tolerance` will be deprecated to be replaced by `optimisation.utilities.callbacks` (#1892)
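The Armijo change above concerns warm-starting the backtracking search from the previously accepted step size. A 1-D sketch of Armijo backtracking with a warm-start parameter (a hypothetical helper, not CIL's `ArmijoStepSizeRule`):

```python
def armijo_step(f, grad, x, step0=1.0, beta=0.5, c=1e-4, max_iter=50):
    """Backtracking line search (scalar 1-D case): shrink the trial step
    by `beta` until the Armijo sufficient-decrease condition holds.

    Warm-starting means passing the previously accepted step as `step0`,
    so later iterations typically need fewer backtracking trials.
    """
    g = grad(x)
    fx = f(x)
    step = step0
    for _ in range(max_iter):
        # sufficient decrease: f(x - step*g) <= f(x) - c * step * ||g||^2
        if f(x - step * g) <= fx - c * step * (g * g):
            return step
        step *= beta
    return step

# f(x) = x^2, grad f(x) = 2x, starting at x = 5
step = armijo_step(lambda x: x * x, lambda x: 2 * x, 5.0)
# the next call could warm-start from this step:
# armijo_step(f, grad, x_new, step0=step)
```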


* 24.1.0
- New Features:
@@ -54,6 +69,7 @@
- BlockOperator that would return a BlockDataContainer of shape (1,1) now returns the appropriate DataContainer. BlockDataContainer direct and adjoint methods accept DataContainer as parameter (#1802).
- BlurringOperator: remove check for geometry class (old SIRF integration bug) (#1807)
- The `ZeroFunction` and `ConstantFunction` now have a Lipschitz constant of 1. (#1768)
- Update dataexample remote data download to work with windows and use zenodo_get for data download (#1774)
- Changes that break backwards compatibility:
- Merged the files `BlockGeometry.py` and `BlockDataContainer.py` in `framework` to one file `block.py`. Please use `from cil.framework import BlockGeometry, BlockDataContainer` as before (#1799)
- Bug fix in `FGP_TV` function to set the default behaviour not to enforce non-negativity (#1826).
2 changes: 2 additions & 0 deletions NOTICE.txt
@@ -65,8 +65,10 @@ Ashley Gillman (2024) - 12
Zeljko Kereta (2024) - 5
Evgueni Ovtchinnikov (2024) - 1
Georg Schramm (2024) - 13
Sam Porter (2024) - 5
Joshua Hellier (2024) - 3
Nicholas Whyatt (2024) - 1
Rasmia Kulan (2024) - 1

CIL Advisory Board:
Llion Evans - 9
4 changes: 2 additions & 2 deletions README.md
@@ -21,13 +21,13 @@ We recommend using either [`miniconda`](https://docs.conda.io/projects/miniconda
Install a new environment using:

```sh
conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.1.0
conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.2.0 ipp=2021.12
```

To install CIL and the additional packages and plugins needed to run the [CIL demos](https://github.com/TomographicImaging/CIL-Demos) install the environment with:

```sh
conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.1.0 astra-toolbox=*=cuda* tigre ccpi-regulariser tomophantom ipykernel ipywidgets
conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.2.0 ipp=2021.12 astra-toolbox=*=cuda* tigre ccpi-regulariser tomophantom ipykernel ipywidgets scikit-image
```

where:
13 changes: 12 additions & 1 deletion Wrappers/Python/cil/optimisation/algorithms/FISTA.py
@@ -213,8 +213,19 @@ def update_objective(self):
.. math:: f(x) + g(x)
"""
self.loss.append(self.f(self.x_old) + self.g(self.x_old))
self.loss.append(self.calculate_objective_function_at_point(self.x_old))

def calculate_objective_function_at_point(self, x):
""" Calculates the objective at a given point x
.. math:: f(x) + g(x)
Parameters
----------
x : DataContainer
"""
return self.f(x) + self.g(x)

class FISTA(ISTA):

23 changes: 20 additions & 3 deletions Wrappers/Python/cil/optimisation/algorithms/GD.py
@@ -84,7 +84,7 @@ def set_up(self, initial, objective_function, step_size, preconditioner):
log.info("%s setting up", self.__class__.__name__)

self.x = initial.copy()
self.objective_function = objective_function
self._objective_function = objective_function

if step_size is None:
self.step_size_rule = ArmijoStepSizeRule(
@@ -106,7 +106,7 @@ def set_up(self, initial, objective_function, step_size, preconditioner):

def update(self):
'''Performs a single iteration of the gradient descent algorithm'''
self.objective_function.gradient(self.x, out=self.gradient_update)
self._objective_function.gradient(self.x, out=self.gradient_update)

if self.preconditioner is not None:
self.preconditioner.apply(
@@ -117,7 +117,7 @@ def update(self):
self.x.sapyb(1.0, self.gradient_update, -step_size, out=self.x)

def update_objective(self):
self.loss.append(self.objective_function(self.solution))
self.loss.append(self._objective_function(self.solution))

def should_stop(self):
'''Stopping criterion for the gradient descent algorithm '''
@@ -132,3 +132,20 @@ def step_size(self):
else:
raise TypeError(
"There is not a constant step size, it is set by a step-size rule")

def calculate_objective_function_at_point(self, x):
""" Calculates the value of the objective function at a given point x
Parameters
----------
x : DataContainer
"""
return self._objective_function(x)

@property
def objective_function(self):
warn('The attribute `objective_function` will be deprecated in the future. Please use `calculate_objective_function_at_point` instead.', DeprecationWarning, stacklevel=2)
return self._objective_function
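The `objective_function` property above follows a standard attribute-deprecation pattern: keep the old public name readable, but emit a `DeprecationWarning` that steers users to the new API. A self-contained sketch of the pattern (hypothetical class, not CIL's `GD`):

```python
import warnings

class Legacy:
    """Sketch of the rename pattern: the renamed attribute lives under a
    private name, a new method exposes it, and the old public attribute
    remains available behind a DeprecationWarning."""

    def __init__(self, objective):
        self._objective_function = objective

    def calculate_objective_function_at_point(self, x):
        """Evaluate the objective at a given point x."""
        return self._objective_function(x)

    @property
    def objective_function(self):
        warnings.warn(
            "`objective_function` will be deprecated; use "
            "`calculate_objective_function_at_point` instead.",
            DeprecationWarning, stacklevel=2)
        return self._objective_function

alg = Legacy(lambda x: x * x)
assert alg.calculate_objective_function_at_point(4.0) == 16.0
# old access path still works, but warns
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert alg.objective_function(3.0) == 9.0
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

`stacklevel=2` makes the warning point at the caller's line rather than the property body, which is what users need to locate the code to update.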
32 changes: 16 additions & 16 deletions Wrappers/Python/cil/optimisation/algorithms/SIRT.py
@@ -27,23 +27,22 @@


class SIRT(Algorithm):

r"""Simultaneous Iterative Reconstruction Technique, see :cite:`Kak2001`.
Simultaneous Iterative Reconstruction Technique (SIRT) solves
the following problem
.. math:: A x = b
The SIRT algorithm is
The SIRT update step for iteration :math:`k` is given by
.. math:: x^{k+1} = \mathrm{proj}_{C}( x^{k} + \omega * D ( A^{T} ( M * (b - Ax^{k}) ) ) ),
.. math:: x^{k+1} = \mathrm{proj}_{C}( x^{k} + \omega D ( A^{T} ( M (b - Ax^{k}) ) ) ),
where,
:math:`M = \frac{1}{A*\mathbb{1}}`,
:math:`M = \frac{1}{A\mathbb{1}}`,
:math:`D = \frac{1}{A^{T}\mathbb{1}}`,
:math:`\mathbb{1}` is a :code:`DataContainer` of ones,
:math:`\mathrm{prox}_{C}` is the projection over a set :math:`C`,
:math:`\mathrm{proj}_{C}` is the projection over a set :math:`C`,
and :math:`\omega` is the relaxation parameter.
Parameters
@@ -63,9 +62,6 @@ class SIRT(Algorithm):
A function with :code:`proximal` method, e.g., :class:`.IndicatorBox` function and :meth:`.IndicatorBox.proximal`,
or :class:`.TotalVariation` function and :meth:`.TotalVariation.proximal`.
kwargs:
Keyword arguments used from the base class :class:`.Algorithm`.
Note
----
If :code:`constraint` is not passed, :code:`lower` and :code:`upper` are used to create an :class:`.IndicatorBox` and apply its :code:`proximal`.
@@ -77,21 +73,22 @@ class SIRT(Algorithm):
The preconditioning arrays (weights) :code:`M` and :code:`D` used in SIRT are defined as
.. math:: M = \frac{1}{A*\mathbb{1}} = \frac{1}{\sum_{j}a_{i,j}}
.. math:: D = \frac{1}{A*\mathbb{1}} = \frac{1}{\sum_{i}a_{i,j}}
.. math:: M = \frac{1}{A\mathbb{1}}
.. math:: D = \frac{1}{A^T\mathbb{1}}
Examples
--------
.. math:: \underset{x}{\mathrm{argmin}} \frac{1}{2}\| x - d\|^{2}
.. math:: \underset{x}{\mathrm{argmin}} \frac{1}{2}\| Ax - d\|^{2}
>>> sirt = SIRT(initial = ig.allocate(0), operator = A, data = d, max_iteration = 5)
>>> sirt = SIRT(initial = ig.allocate(0), operator = A, data = d)
"""


def __init__(self, initial=None, operator=None, data=None, lower=None, upper=None, constraint=None, **kwargs):
"""Constructor of SIRT algorithm"""

super(SIRT, self).__init__(**kwargs)

@@ -140,10 +137,12 @@ def set_up(self, initial, operator, data, lower=None, upper=None, constraint=Non

@property
def relaxation_parameter(self):
"""Get the relaxation parameter :math:`\omega`"""
return self._relaxation_parameter

@property
def D(self):
"""Get the preconditioning array :math:`D`"""
return self._Dscaled / self._relaxation_parameter

def set_relaxation_parameter(self, value=1.0):
@@ -164,6 +163,7 @@ def set_relaxation_parameter(self, value=1.0):


def _set_up_weights(self):
"""Set up the preconditioning arrays M and D"""
self.M = 1./self.operator.direct(self.operator.domain_geometry().allocate(value=1.0))
self._Dscaled = 1./self.operator.adjoint(self.operator.range_geometry().allocate(value=1.0))

@@ -196,9 +196,9 @@ def _remove_nan_or_inf(self, datacontainer, replace_with=1.0):

def update(self):

r""" Performs a single iteration of the SIRT algorithm
r""" Performs a single iteration of the SIRT algorithm. The update step for iteration :math:`k` is given by
.. math:: x^{k+1} = \mathrm{proj}_{C}( x^{k} + \omega * D ( A^{T} ( M * (b - Ax) ) ) )
.. math:: x^{k+1} = \mathrm{proj}_{C}( x^{k} + \omega D ( A^{T} ( M (b - Ax^{k}) ) ) )
"""

@@ -218,7 +218,7 @@ def update(self):
self.x=self.constraint.proximal(self.x, tau=1)
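The update step documented above can be sketched for dense NumPy matrices (illustrative only; CIL operates on `DataContainer`s via an `Operator` and additionally guards the weights against NaN/inf):

```python
import numpy as np

def sirt(A, b, n_iter=50, omega=1.0, lower=None, upper=None):
    """Dense-matrix sketch of the SIRT update
        x_{k+1} = proj_C( x_k + omega * D * (A.T @ (M * (b - A @ x_k))) )
    with M = 1 / (A @ 1) (reciprocal row sums) and
         D = 1 / (A.T @ 1) (reciprocal column sums)."""
    M = 1.0 / A.sum(axis=1)          # row-sum weights
    D = 1.0 / A.sum(axis=0)          # column-sum weights
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = b - A @ x                          # residual b - A x_k
        x = x + omega * D * (A.T @ (M * r))    # weighted back-projection
        if lower is not None or upper is not None:
            x = np.clip(x, lower, upper)       # box-constraint projection
    return x

# consistent toy system: the iterates approach the exact solution
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
x = sirt(A, A @ x_true, n_iter=200)
assert np.allclose(x, x_true, atol=1e-4)
```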

def update_objective(self):
r"""Returns the objective
r""" Appends the current objective value to the list of previous objective values
.. math:: \frac{1}{2}\|A x - b\|^{2}
