Pdhg strongly convex #1030
Conversation
I guess we need to:
- document the additional variables `gamma_g` and `gamma_fconj` in the docstring
- add unit tests (basic, not just one full denoising example). Something like: set up a PDHG without the gammas, do 1 iteration and check that `sigma` and `tau` remain the same. Set up a PDHG with the gammas, do 1 iteration and check that `sigma` and `tau` have been changed to what you expect.
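A minimal, self-contained sketch of the suggested tests. Since this cannot run CIL here, the step-size update is simulated with a stand-alone function implementing the accelerated rule of Chambolle–Pock's Algorithm 2; the function name and signature are illustrative, not CIL's actual API:

```python
import math

def update_step_sizes(sigma, tau, gamma_g=None, gamma_fconj=None):
    """Return (sigma, tau) after one accelerated PDHG update.

    If neither strong-convexity constant is given, the step sizes
    are left unchanged (the non-accelerated case)."""
    if gamma_g is not None:        # primal acceleration
        theta = 1.0 / math.sqrt(1.0 + 2.0 * gamma_g * tau)
        tau *= theta
        sigma /= theta
    elif gamma_fconj is not None:  # dual acceleration
        theta = 1.0 / math.sqrt(1.0 + 2.0 * gamma_fconj * sigma)
        sigma *= theta
        tau /= theta
    return sigma, tau

# Without gammas: one iteration leaves sigma and tau unchanged.
s, t = update_step_sizes(1.0, 1.0)
assert (s, t) == (1.0, 1.0)

# With gamma_g: tau shrinks and sigma grows by the same factor theta.
s, t = update_step_sizes(1.0, 1.0, gamma_g=0.5)
theta = 1.0 / math.sqrt(2.0)
assert abs(t - theta) < 1e-12 and abs(s - 1.0 / theta) < 1e-12
```

The real unit test would do the same checks against `pdhg.sigma` and `pdhg.tau` after `run(1)`.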
```diff
@@ -17,11 +17,12 @@
 from cil.optimisation.algorithms import Algorithm
 import warnings

 import numpy
```
`import numpy as np`
done
Great to get a strong convexity version in, but I'm not sure about the parameters being given as additional parameters to PDHG. Wouldn't they belong more naturally as attributes of the Functions, similar to Lipschitz? With the default being None, and automatically set on initialising the Function whenever possible? Then PDHG would check the f and g Functions for these parameters and, if different from None, the strongly convex version of the algorithm would be used. Then the user wouldn't need to worry about understanding what these parameters are, computing them (how would they do that?), and providing them as input. Not sure if this approach has been discussed?
@jakobsj's got a great point here. If we put the gammas as properties of
That was my initial plan, but we decided not to follow this route. Basically, we cannot do it at the moment, as `convex_conjugate` returns a number and not a function. In this example, we use the strongly convex case of PDHG, where the conjugate of the `BlockFunction`, i.e., a separable function, is strongly convex: that is the sum of the `convex_conjugate` of `L2NormSquared` and the `convex_conjugate` of the `MixedL21Norm`. If you run the example above, you will see the following figure (the label is wrong; it **should be Strongly Convex (fconj)**).
Yes, I remember the discussion now. But I don't see how it's not possible. I don't think we need the convex conjugate to be returned as a function. We just need the function itself to have, as an attribute, the strong convexity parameter of its convex conjugate.
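A sketch of what that attribute-based design could look like. All names here (`gamma`, `gamma_conjugate`, `choose_pdhg_variant`) are hypothetical, not CIL's API; the constants for the `L2NormSquared` stand-in follow from `||x||^2` having Hessian `2I` and its conjugate `||y||^2/4` having Hessian `I/2`:

```python
# Each Function advertises the strong-convexity constant of itself and of
# its convex conjugate; PDHG inspects these instead of taking extra kwargs.

class Function:
    gamma = None            # strong-convexity constant of the function, if known
    gamma_conjugate = None  # strong-convexity constant of its convex conjugate

class L2NormSquared(Function):
    def __init__(self):
        # ||x||^2 is 2-strongly convex; its conjugate is 1/2-strongly convex.
        self.gamma = 2.0
        self.gamma_conjugate = 0.5

def choose_pdhg_variant(f, g):
    """Pick the accelerated variant from the functions' attributes."""
    if getattr(g, "gamma", None) is not None:
        return "primal acceleration"
    if getattr(f, "gamma_conjugate", None) is not None:
        return "dual acceleration"
    return "plain PDHG"

assert choose_pdhg_variant(Function(), L2NormSquared()) == "primal acceleration"
```

The user never supplies the constants by hand; functions that know their own constants set them at construction time.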
@gfardell I prefer to have the methods
I think the example code is my biggest issue. It's unmaintainable as it is, and worse than having the code snippets in `__main__` as we used to. Can the examples be restricted to how to set up, modify and use the class? We have CIL demos for full CIL usage.

I don't think they'll know the difference between
I removed the code at CIL/Wrappers/Python/cil/optimisation/algorithms/PDHG.py, lines 128 to 146 in deadd5b.
```python
# check if sigma, tau are None
pdhg = PDHG(f=f, g=g, operator=operator, max_iteration=10)
self.assertEqual(pdhg.sigma, 1./operator.norm())
```
use `assertAlmostEqual`
```python
self.set_gamma_g(kwargs.get('gamma_g', None))
self.set_gamma_fconj(kwargs.get('gamma_fconj', None))
if self.gamma_g is not None and self.gamma_fconj is not None:
```
move this check into the setter
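A sketch of that refactor, with validation and the mutual-exclusion check living in the setters. The class and messages are illustrative, not CIL's actual `PDHG`:

```python
import numbers
import warnings

class PDHGSketch:
    """Illustrative fragment only: gamma handling moved into setters."""

    def __init__(self, **kwargs):
        self._gamma_g = None
        self._gamma_fconj = None
        self.set_gamma_g(kwargs.get('gamma_g', None))
        self.set_gamma_fconj(kwargs.get('gamma_fconj', None))

    @staticmethod
    def _check_gamma(value, name):
        if value is not None and (not isinstance(value, numbers.Number)
                                  or value <= 0):
            raise ValueError(f"{name} must be a positive number or None")

    def set_gamma_g(self, value):
        self._check_gamma(value, "gamma_g")
        if value is not None and self._gamma_fconj is not None:
            warnings.warn("Both gamma_g and gamma_fconj are set; "
                          "only one acceleration can be applied.")
        self._gamma_g = value

    def set_gamma_fconj(self, value):
        self._check_gamma(value, "gamma_fconj")
        if value is not None and self._gamma_g is not None:
            warnings.warn("Both gamma_g and gamma_fconj are set; "
                          "only one acceleration can be applied.")
        self._gamma_fconj = value

pdhg = PDHGSketch(gamma_g=0.5)
assert pdhg._gamma_g == 0.5 and pdhg._gamma_fconj is None
```

With this shape, `__init__` stays a plain sequence of setter calls and every assignment path gets the same checks.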
A few minor changes
This PR adds primal OR dual acceleration for the PDHG algorithm in the case where the function `f^{*}` or `g` is strongly convex. These are keyword (`**kwargs`) arguments of the `PDHG` algorithm, i.e., `gamma_g` for the strongly convex constant of the function `g` (primal acceleration) and `gamma_fconj` for the strongly convex constant of the convex conjugate of `f` (dual acceleration). In particular:

- `gamma_g` and `gamma_fconj` must be positive `Numbers`.
- Added `update_step_sizes`, which is called at the end of the `update` method. Therefore, the step sizes change in every iteration, according to the specific case (`f^{*}` or `g` being strongly convex). If `gamma_g` or `gamma_fconj` are `None`, the step sizes remain unchanged.
- Added `set_step_sizes` to check the correct type of the primal-dual step sizes `sigma` and `tau` and to set the default scalar values. `sigma` and `tau` must be positive `Numbers`, or array-like objects with the correct shape determined by the `operator`. If the user passes `sigma` and `tau` with the correct shape, there is no guarantee that the algorithm will converge unless they satisfy Lemma 2.
- If the shape is correct, the PDHG algorithm will not run unless `use_axpby=False`. Fixed by axpby blackdatacontainer fix #1080.
**Note:** The adaptive rule for primal or dual acceleration is described in Algorithm 2.
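Assuming "Algorithm 2" refers to the accelerated primal-dual algorithm of Chambolle and Pock (2011), the primal-acceleration update can be checked numerically. A useful sanity property: the update rescales `tau` down and `sigma` up by the same factor `theta`, so the product `sigma * tau` is invariant and the convergence condition `sigma * tau * ||K||^2 <= 1` keeps holding if it held initially. The operator norm and `gamma_g` below are hypothetical values:

```python
import math

# One accelerated step-size update per iteration (primal case):
#   theta_n   = 1 / sqrt(1 + 2 * gamma_g * tau_n)
#   tau_{n+1} = theta_n * tau_n
#   sigma_{n+1} = sigma_n / theta_n

norm_K = 2.0                  # hypothetical operator norm ||K||
sigma = tau = 1.0 / norm_K    # defaults with sigma * tau * ||K||^2 == 1
gamma_g = 0.7                 # hypothetical strong-convexity constant of g

product0 = sigma * tau
for _ in range(5):
    theta = 1.0 / math.sqrt(1.0 + 2.0 * gamma_g * tau)
    tau *= theta
    sigma /= theta

assert abs(sigma * tau - product0) < 1e-12  # product preserved
assert tau < 1.0 / norm_K < sigma           # tau shrinks, sigma grows
```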
A denoising example can be found here, where the strong convexity property comes from the function `L2NormSquared`.

A tomography example can be found here, where the strong convexity property comes from the convex conjugate of the `BlockFunction` `f`. Note that the `ProjectionOperator` is used with `device=cpu` to avoid any mismatch on the adjoint.