Add warning to a known autograd issue on XLA backend. #35449

Closed
wants to merge 1 commit
14 changes: 14 additions & 0 deletions torch/csrc/autograd/variable.cpp
@@ -384,6 +384,20 @@ unsigned VariableHooks::_register_hook(const Tensor& self, std::function<Tensor(
}

void handle_view_on_rebase(DifferentiableViewMeta* diff_view_meta, bool indirect) {
  // TODO: Remove this warning once we allow XLA to workaround CopySlices.
  if (diff_view_meta->base_.device().type() == c10::DeviceType::XLA) {
    std::string msg;
    if (indirect) {
      msg = "This view requires gradients but its base or another view of the same base has been modified inplace. ";
    } else {
      msg = "This view requires gradients and it's being modified inplace. ";
    }
    msg = c10::str(msg, "Backward through inplace update on view tensors is WIP for XLA backwend. "
Contributor

backwend -> backend.

Contributor Author

Aha nice catch! Will send a fix!

"Gradient might be wrong in certain cases. Running forward alone is fine. "
Contributor

This prints in the forward? It's not clear what "alone is fine" means.

Is there any way to 'tag' this and only print the warning in the backward?

Putting an example of this triggering would be helpful to see the effect.

Collaborator

Yes, this prints in the forward when the inplace is done, and only if the inputs require gradients.
I think we don't want to delay the error, as that would make it much harder for the user to find the faulty inplace.

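As a rough illustration (not part of this PR), the pattern that hits this warning is an inplace update on a differentiable view during the forward pass on an XLA device. A minimal Python sketch, assuming torch_xla is installed and that xm.xla_device() resolves to an XLA device; the tensor names are made up for the example:

import torch
import torch_xla.core.xla_model as xm  # assumption: torch_xla is available

device = xm.xla_device()

base = torch.ones(4, device=device, requires_grad=True)
hidden = base * 1.0   # non-leaf tensor, so the inplace op below is legal for autograd
view = hidden[:2]     # differentiable view of `hidden`
view.mul_(2)          # inplace update on the view: the TORCH_WARN added by this PR
                      # fires here, during the forward, not during backward
loss = view.sum()
loss.backward()       # gradients w.r.t. `base` may be wrong on XLA in this case

If base did not require gradients (or autograd was not used at all), this code path would not be reached and no warning would be printed.
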
Contributor

We generally don't warn in the forward if it will cause a problem in the backward, right?

This seems like a good place for a mechanism you can turn on that warns/errors in the forward and the error in the backward tells you to turn it on.

Contributor

Suggested rewording:

Running a backward pass through an inplace update on view tensors is a WIP for the XLA backend and may result in incorrect gradient computation in certain cases. Note this warning is being triggered on the inplace update (not the corresponding backward pass), and this update is safe if a backward pass is not run. To work around this limitation and to silence this warning, please replace the inplace operation by the corresponding out-of-place operation."

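To make the suggested workaround concrete, here is a sketch (my illustration, not from the PR, with the same torch_xla assumptions as above) that replaces the inplace update on a view with an out-of-place computation:

import torch
import torch_xla.core.xla_model as xm  # assumption: torch_xla is available

device = xm.xla_device()
base = torch.ones(4, device=device, requires_grad=True)
hidden = base * 1.0

# Instead of mutating a view in place (e.g. hidden[:2].mul_(2)), build the
# updated tensor out of place so autograd never tracks an inplace op on a view:
updated = torch.cat([hidden[:2] * 2, hidden[2:]])
updated.sum().backward()  # backward goes through ordinary ops; no warning expected
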
Contributor Author

Thanks @gchanan! It has been updated in #35543.

Collaborator

For these, we actually warn/error in the forward because we know that the backward will be wrong. Note that if your Tensor does not require gradients, then this won't be called. So people who do not use autograd won't see it.

"To work around it, please replace the inplace operation by an out-of-place one.");
TORCH_WARN(msg);
}

  /// See NOTE [ View + Inplace detection ] for justification of the logic below
  if (diff_view_meta->creation_meta != CreationMeta::DEFAULT) {
    auto grad_fn = diff_view_meta->grad_fn_.get();