ReverseDiff documentation shows issue that has been fixed? Nested differentiation of a closure? #222
Hello everyone. The ReverseDiff documentation, under "Limitations of ReverseDiff", lists the following:

"Nested differentiation of closures is dangerous. Differentiating closures is safe, and nested differentiation is safe, but you might be vulnerable to a subtle bug if you try to do both. See this ForwardDiff issue for details. A fix is currently being planned for this problem."
But it appears that the linked ForwardDiff issue has since been resolved. I ran this simple test:
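The snippet below is a minimal sketch of that test, assuming the standard perturbation-confusion example from the linked ForwardDiff issue (the helper `D` is just shorthand for `ForwardDiff.derivative`):

```julia
using ForwardDiff

# Shorthand: D(f) is the derivative function of f.
D(f) = x -> ForwardDiff.derivative(f, x)

# f(x) = x * (d/dy (x + y) at y = 1). The inner derivative is 1, so
# f(x) = x and the correct outer derivative at x = 1 is 1. Under the
# old perturbation-confusion bug, the inner closure "saw" x's
# perturbation and the answer came out wrong.
D(x -> x * D(y -> x + y)(1))(1)
```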
This returns the correct value of 1. It seems as though this issue has been fixed?

As a bit of background, I have been working hard to use a Hessian in a Flux loss function, which requires taking the gradient of that Hessian with respect to a NN model's weights. Zygote has a large number of issues with nested differentiation at the moment (not knocking it at all, it's an amazing development), but it appears that I can use ReverseDiff to correctly take the gradient of the Hessian with respect to the weights using the Flux.destructure function, roughly as sketched below.
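A hypothetical sketch of that pattern (the model, input, and loss are made up for illustration; the nested call is shown commented out, since whether its result can be trusted is exactly the question here):

```julia
using Flux, ReverseDiff

# Hypothetical model and input, just to illustrate the pattern.
model = Chain(Dense(2, 16, tanh), Dense(16, 1))
θ, re = Flux.destructure(model)  # θ: flat weight vector; re(θ) rebuilds the model
x0 = rand(Float32, 2)

# A plain first-order loss differentiates fine through re(θ):
loss(θ) = sum(abs2, re(θ)(x0))
g = ReverseDiff.gradient(loss, θ)

# The pattern in question replaces `loss` with one that itself
# differentiates a closure over re(θ), e.g. a Hessian w.r.t. the input:
#
#   hess_loss(θ) = sum(abs2, ReverseDiff.hessian(x -> sum(re(θ)(x)), x0))
#   g2 = ReverseDiff.gradient(hess_loss, θ)
#
# The inner call differentiates a closure capturing tracked weights,
# i.e. nested differentiation of a closure, the case the docs warn about.
```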
I am able to use these gradients to train the model (the update step is sketched below), and it appears to be working. My worry is that the gradients may be only slightly incorrect, yet still close enough for the optimizer to work. If someone could comment on the status of this issue, I would greatly appreciate it.
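For completeness, a hypothetical training step with the flat gradient, assuming Flux's built-in optimisers (the optimizer choice and learning rate are made up):

```julia
using Flux

# One optimization step on the flat weights θ, using the ReverseDiff
# gradient g from the sketch above.
opt = ADAM(1e-3)
Flux.Optimise.update!(opt, θ, g)

# Rebuild the model from the updated flat weights.
model = re(θ)
```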
Thanks in advance!
Comments

The issue is very much alive. For a minimum working example, take:

I would advise against using ReverseDiff for this kind of thing. I have found myself in exactly the same situation as you, and I'm completely clueless as to why it seems to work with destructured Flux models. Doing some experimentation with ForwardDiff (which can actually do this safely), I've found that the gradients from ReverseDiff are slightly off. This is probably due to infinitesimals propagating improperly.