Description
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 128, 1, 1]] is at version 4; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
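For reference, here is a minimal sketch (not taken from the original report) of the kind of code that produces this error: an in-place operation modifies a tensor that autograd saved for the backward pass, so the tensor's version counter no longer matches the version recorded when it was saved.

```python
import torch

# Minimal sketch, not from the original report: an in-place op bumps the
# version counter of a tensor that autograd saved for backward.
x = torch.randn(3, requires_grad=True)
y = torch.sigmoid(x)   # sigmoid saves its output y to compute the gradient
y.add_(1.0)            # in-place modification of the saved tensor
y.sum().backward()     # raises the "modified by an inplace operation" error
```

As the hint in the message says, running with `torch.autograd.set_detect_anomaly(True)` points at the forward operation whose backward failed, which usually makes the offending in-place operation easy to locate.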
Activity
inakierregueab commented on Nov 12, 2024
The initial error was:
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
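For reference, a minimal sketch (not taken from the original report) of the kind of code that produces this first error: calling `backward()` a second time on a graph whose saved tensors were freed by the first call.

```python
import torch

# Minimal sketch, not from the original report: the first backward() frees the
# graph's saved tensors, so a second backward() over the same graph fails.
x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()   # pow saves x for the backward pass

loss.backward()   # works; the saved tensors are freed afterwards
loss.backward()   # raises "Trying to backward through the graph a second time"

# Passing retain_graph=True to the first backward() keeps the saved tensors
# alive, at the cost of holding onto the graph's memory.
```

Adding `retain_graph=True` can then surface the in-place-modification error above if a saved tensor (for example a parameter updated by an optimizer step) is mutated between the two backward calls, which may be what happened in this report.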
KevinMusgrave commented on Nov 15, 2024
Thanks for the bug report. Can you provide a minimal amount of code that we can run to reproduce the error?