jax.grad on fax.constrained.cga_ecp solutions does not work #6

Hello,

I'm trying to see whether I can use fax in order to find gradients of a fixed point function (an optimization problem) with respect to problem parameters. Consider

f(x, y) = -(x**2 + (y[0] - a[0])**2 + (y[1] - a[1])**2) and h(x, y) = x - 2,

but when I try to compute gradients of the solution with respect to a, I get an error. Is there any way around this?
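Below is a sketch of the kind of setup being described, using the objective f and equality constraint h from above and the Jacobian with respect to a that is being asked for. The solve call in the final comment is a hypothetical placeholder for illustration only, not the real fax.constrained.cga_ecp signature.

```python
import jax
import jax.numpy as jnp

# Problem parameters that the gradients should be taken with respect to.
a = jnp.array([0.5, -0.3])

def f(x, y, a):
    # Objective from the report: f(x, y) = -(x**2 + (y[0]-a[0])**2 + (y[1]-a[1])**2)
    return -(x ** 2 + (y[0] - a[0]) ** 2 + (y[1] - a[1]) ** 2)

def h(x, y):
    # Equality constraint from the report: h(x, y) = x - 2
    return x - 2.0

# Assuming f is maximized subject to h(x, y) = 0, the solution of this toy
# problem is known in closed form: x* = 2 and y* = a, so d(y*)/da should be
# the identity matrix. The question is how to recover that Jacobian when
# (x*, y*) comes out of an iterative solver such as fax.constrained.cga_ecp,
# e.g. (hypothetical call, not the real API):
#
#   x_star, y_star = solve(f, h, a)
#   jax.jacobian(lambda a: solve(f, h, a)[1])(a)  # this is the step that fails
```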
Can you tell me a bit more about what it is that you are trying to do? There might be another way to compute what you need (after all, this is kind of the core idea behind most implementations in this package).

For context, the functional control flow used in the solvers is not something JAX can differentiate in reverse mode out of the box, and the current implementation doesn't explicitly define any special differentiation rules, so calling jax.grad on the output of cga_ecp fails. Alternatively, you might be able to leverage some of the tools already implemented in the package for implicit differentiation. Once I understand why you want to get the gradient of the solution, I can point you towards the most promising option.
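A minimal standalone illustration of the control-flow point (not code from the thread): JAX happily evaluates lax.while_loop, but it defines no reverse-mode differentiation rule for it, so jax.grad fails on anything built on such a loop unless a custom rule is supplied.

```python
import jax
import jax.numpy as jnp
from jax import lax

def naive_fixed_point(a):
    # Iterate x <- cos(a * x) to convergence using lax.while_loop, the kind of
    # functional control flow iterative solvers in JAX rely on.
    def cond(carry):
        x, x_prev = carry
        return jnp.abs(x - x_prev) > 1e-8

    def body(carry):
        x, _ = carry
        return jnp.cos(a * x), x

    x_star, _ = lax.while_loop(cond, body, (jnp.cos(a), jnp.ones(())))
    return x_star

a = jnp.float32(0.5)
print(naive_fixed_point(a))        # forward evaluation is fine
# jax.grad(naive_fixed_point)(a)   # fails: no reverse-mode rule for while_loop
```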
Thanks @gehring for the prompt reply. I was hoping that using fax would let me get those gradients directly.

I'm investigating fixed-point / implicit models in order to find derivatives of solutions of optimization problems with respect to problem parameters, without differentiating through an optimization loop, in the context of particle physics inference (https://github.com/scikit-hep/pyhf). The main setup is that a likelihood function is optimized over some of its parameters while depending on others, and I'd like to use the gradients of that optimized solution in pyhf; currently that would require differentiating through the optimizer itself. If there is a way to express the above implicitly, that would be great.
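For an unconstrained inner problem, the implicit function theorem gives exactly this: if y*(a) = argmin_y g(y, a), then grad_y g(y*, a) = 0, and differentiating that optimality condition yields dy*/da = -[d^2 g/dy^2]^{-1} [d^2 g/dy da]. A self-contained sketch, with a made-up inner objective g purely for illustration:

```python
import jax
import jax.numpy as jnp

def g(y, a):
    # Made-up inner objective, strictly convex in y so the argmin is unique.
    return jnp.sum((y - a) ** 2) + 0.1 * jnp.sum(jnp.cosh(y))

def inner_solve(a, num_steps=50):
    # Any black-box optimizer works here; a few Newton steps as a stand-in.
    y = jnp.zeros_like(a)
    for _ in range(num_steps):
        grad_y = jax.grad(g)(y, a)
        hess_yy = jax.hessian(g)(y, a)
        y = y - jnp.linalg.solve(hess_yy, grad_y)
    return y

def solution_jacobian(a):
    # Implicit function theorem at the optimum y*(a):
    #   dy*/da = -[d^2 g / dy^2]^{-1} [d^2 g / dy da]
    # No derivatives are ever taken through inner_solve's loop.
    y_star = inner_solve(a)
    hess_yy = jax.hessian(g, argnums=0)(y_star, a)
    cross = jax.jacobian(jax.grad(g, argnums=0), argnums=1)(y_star, a)
    return -jnp.linalg.solve(hess_yy, cross)

a = jnp.array([0.5, -0.3])
print(solution_jacobian(a))  # close to the 2x2 identity for this particular g
```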
Hi @gehring, do you think something like the above would be feasible with fax? Do you need any more info?

Edit: I found something that seems to give me a good start.
Sorry for the delays in responding. Keep in mind that when we started this project we had fairly specific use cases in mind, so not everything you need is packaged up neatly yet. There are two main ways to approach your problem.

One way to transform these subproblems into something JAX can differentiate is implicit differentiation. If you can formulate the solution to your constrained optimization as a (possibly multi-dimensional) fixed point problem, you can solve it however you like in a "forward" phase and then recover gradients from the fixed point itself, without differentiating through the solver's iterations.

However, this general way of handling implicit differentiation might not always be the best thing to do. You ought to leverage domain knowledge when you can, e.g., re-using factorizations from the "forward" pass in the "backwards" pass (think of solving Ax = y for x, i.e., x = inv(A)y, and its derivatives). I can point you to some examples/references if this is a direction you'd like to pursue, but this is a bit outside the scope of this package.

Otherwise, assuming you can formulate the solution to your constrained optimization as an equality-constrained problem, the second approach would be what we call competitive differentiation, which incrementally solves both the outer problem and the sub-problem at the same time using competitive gradient descent applied to the Lagrangian. This would be the cga_ecp route you were already trying.

The advantage of this second approach is that you can start improving the "meta" parameters before having ever fully solved the sub-problem. This can lead to big improvements in some cases, but it isn't quite clear yet when it is or isn't favorable over a two-phase method. It is probably a good idea to explore both and see which approach seems most promising for your particular use case.

We've implemented some wrappers to help formulate the Lagrangian the way our implementation of CGA expects it. You might find it helpful to look through how we used it in a control/"reinforcement learning" setting. You can find the repo here.

@f-t-s @pierrelux if you have a minute, I would appreciate it if you could read through my blurb to make sure I didn't misrepresent anything or accidentally say something false or misleading!
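A minimal sketch of the first, two-phase approach (an independent illustration in the spirit of the fixed-point example from the JAX custom-derivatives tutorial, not fax's own implementation): wrap a forward-only fixed-point solver in jax.custom_vjp so that reverse-mode differentiation uses the implicit-function-theorem adjoint instead of trying to unroll the loop.

```python
import jax
import jax.numpy as jnp
from functools import partial

def iterate(T, a, x0, tol=1e-8):
    # Forward-only fixed-point iteration. Using lax.while_loop here is fine
    # because the custom VJP below never differentiates through this loop.
    def cond(carry):
        x, x_prev = carry
        return jnp.max(jnp.abs(x - x_prev)) > tol

    def body(carry):
        x, _ = carry
        return T(a, x), x

    x_star, _ = jax.lax.while_loop(cond, body, (T(a, x0), x0))
    return x_star

@partial(jax.custom_vjp, nondiff_argnums=(0,))
def fixed_point_solve(T, a, x0):
    # Solve x = T(a, x); differentiable with respect to a via the rule below.
    return iterate(T, a, x0)

def fixed_point_fwd(T, a, x0):
    x_star = iterate(T, a, x0)
    return x_star, (a, x_star)

def fixed_point_bwd(T, res, v):
    a, x_star = res
    _, vjp_x = jax.vjp(lambda x: T(a, x), x_star)
    # Implicit function theorem: dx*/da = (I - dT/dx)^{-1} dT/da, so the
    # cotangent w = v + (dT/dx)^T w is itself a fixed point, solved by the
    # same iteration (this converges when T is a contraction at x*).
    w = iterate(lambda _, u: v + vjp_x(u)[0], None, v)
    _, vjp_a = jax.vjp(lambda a_: T(a_, x_star), a)
    return vjp_a(w)[0], jnp.zeros_like(x_star)  # the initial guess gets no gradient

fixed_point_solve.defvjp(fixed_point_fwd, fixed_point_bwd)

# Example: differentiate the solution of x = cos(a * x) with respect to a.
T = lambda a, x: jnp.cos(a * x)
dxstar_da = jax.grad(lambda a: fixed_point_solve(T, a, jnp.ones(())))(jnp.float32(0.5))
print(dxstar_da)
```

The backward pass reuses the same forward-only iteration to solve the adjoint fixed point, so the choice of forward solver is completely decoupled from how gradients are obtained, which is the essence of the two-phase method described above.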
Yes! That looks like the correct approach for using implicit differentiation in the contractive fixed point case. This is exactly the type of problem fax was built around.
I'm not sure I caught the right Twitter handles, but this is what came out of this: https://twitter.com/lukasheinrich_/status/1235622557295849473

Thanks again for this library! We have some JIT'ing issues that might be interesting to the team:
@lukasheinrich Happy we could help! Thanks for the mention! You have (now had?) the right handle, but seeing it compelled me to update it to something more meaningful (now @ClementGehring) that I'd be happy to use. I don't really use Twitter, but would there be a way to update the mention to my new handle? No worries if there isn't or if it's a hassle!