Add SPSA optimization method #357
Comments
Sounds like a nice contribution, do you want to take a stab at it?
@mtthss if no one is working on it, I would like to try. Can you give me some general pointers before I start, like which files to update, what to take care of, etc., as I haven't contributed to optax before?
Are there any updates on this? I have previously worked with SPSA in TF (tensorflow/quantum#653) and would be interested in working on this, but don't want to do redundant labor.
Hi @lockwo, are you still interested? If you can implement SPSA, it'll be of great help!
@ankit27kh: since there hasn't been any activity on this in a year, I think it's safe for you to take over. If you end up contributing this, please do so in the contrib/ directory. Thanks!
@fabianp I've created the following implementation of a pseudo-gradient estimator: https://gist.github.com/carlosgmartin/0ee29182a17b35baf7d402ebdc797486 As noted in the function's docstring:
I'd be happy to contribute this implementation to Optax. I could put it under [...]. I also welcome any feedback on the code. Note that this pseudo-gradient can be used in combination with any existing Optax optimizer: its only role is to determine the gradient that is fed into the optimizer. Thus it acts as a replacement or analogue for [...].
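The core idea of the linked gist can be sketched as follows. This is a simplified, illustrative NumPy version (the function name `spsa_gradient` and the `eps` parameter are my own, not the gist's API): SPSA perturbs all parameters simultaneously along a random ±1 direction and uses a single central difference along that direction.

```python
import numpy as np

def spsa_gradient(fn, params, rng, eps=1e-3):
    """SPSA pseudo-gradient of a scalar-valued fn at params (illustrative sketch)."""
    # Rademacher (+/-1) perturbation with the same shape as the parameters.
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    # Only two function evaluations, regardless of the parameter dimension.
    df = fn(params + eps * delta) - fn(params - eps * delta)
    # The estimate divides by each delta_i; since delta_i = +/-1,
    # dividing by delta_i is the same as multiplying by it.
    return df / (2.0 * eps) * delta
```

For a smooth objective, the expectation of this estimate over the random perturbation approximates the true gradient, which is why it can be fed into any first-order optimizer in place of an exact gradient.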
It would also be nice to have helper utility functions for estimating the gradient via forward and central finite differences: https://gist.github.com/carlosgmartin/a147b43f39633dcb0a985b51a5b1af0c I'd be happy to contribute these as well.
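For reference, such helpers could look like this minimal NumPy sketch (function names and signatures here are illustrative, not the gist's API). Unlike SPSA, these perturb one coordinate at a time, so their cost grows with the parameter dimension:

```python
import numpy as np

def forward_diff_grad(fn, x, eps=1e-6):
    # Forward difference: O(eps) truncation error, n + 1 function evaluations.
    f0 = fn(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        grad.flat[i] = (fn(x + e) - f0) / eps
    return grad

def central_diff_grad(fn, x, eps=1e-6):
    # Central difference: O(eps^2) truncation error, 2n function evaluations.
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        grad.flat[i] = (fn(x + e) - fn(x - e)) / (2.0 * eps)
    return grad
```

The usual trade-off applies: central differences are more accurate per coordinate but cost twice as many evaluations as forward differences.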
Thanks for looking into this @carlosgmartin. About the forward/central difference schemes, similar discussions have happened in JAX (see e.g. jax-ml/jax#15425). It seems that other users have expressed similar needs. If some libraries already provide such tools, maybe it would be better to use those rather than reinventing them (and maybe also check whether JAX ended up adding such a module). Thanks again @carlosgmartin
The Simultaneous Perturbation Stochastic Approximation (SPSA) method is a fast optimisation method: it estimates the gradient from only two function evaluations per step, regardless of the number of parameters. It is also naturally suited to noisy measurements, which makes it useful when simulating noisy systems.
The theory (and implementation) for SPSA is:

Furthermore, it is implemented in:
- the noisyopt package (specifically, see the source code here)

More information: https://www.jhuapl.edu/SPSA/
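A minimal sketch of the full SPSA iteration, using Spall's standard decaying gain sequences a_k = a/(k+1)^alpha and c_k = c/(k+1)^gamma (alpha = 0.602 and gamma = 0.101 are the values Spall recommends; the function name and default constants here are illustrative, not a proposed Optax API):

```python
import numpy as np

def spsa_minimize(fn, x0, steps=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """Minimize a scalar fn with SPSA (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        # Decaying step size and perturbation size (Spall's gain sequences).
        ak = a / (k + 1) ** alpha
        ck = c / (k + 1) ** gamma
        # Random +/-1 direction; two evaluations give the pseudo-gradient.
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        ghat = (fn(x + ck * delta) - fn(x - ck * delta)) / (2.0 * ck) * delta
        x = x - ak * ghat
    return x
```

Because each step needs only two (possibly noisy) evaluations of the objective, the per-iteration cost is independent of the dimension, which is the property the issue highlights.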