a problem with the backpropagation process #2
Thanks for your interest. Best regards.
I also encountered this problem. Did you solve it? Could you share the solution?
I met the same problem. Did you solve it?
Hello, I also have the problem above. Is there any solution? Thank you!
The following error occurred when running the code:
File "/home/xxx/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/xxx/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
It seems that there is a problem with the backpropagation process. Has the author encountered such a problem? My pytorch version is 1.1.0, thank you!
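This error is raised whenever an in-place operation overwrites a tensor that autograd saved for the backward pass; PyTorch detects the modification via the tensor's version counter. The sketch below (illustrative only, not the repository's actual code) reproduces the same RuntimeError with a single `sigmoid` and shows the usual fix of replacing the in-place op with an out-of-place one:

```python
import torch

# Reproduce the error: sigmoid's backward re-uses its output y,
# so editing y in place invalidates the saved tensor.
w = torch.ones(3, requires_grad=True)
y = torch.sigmoid(w)
y.add_(1)  # in-place op bumps y's version counter
try:
    y.sum().backward()
except RuntimeError as e:
    print("reproduced:", type(e).__name__)

# Fix: use the out-of-place version, which creates a new tensor
# and leaves the saved output intact.
w = torch.ones(3, requires_grad=True)
y = torch.sigmoid(w)
y = y + 1  # out-of-place; autograd's saved tensor is untouched
y.sum().backward()
print(w.grad is not None)
```

In practice the culprit is usually an `x += ...`, `x.mul_(...)`, or a slice assignment somewhere in the model's forward pass; newer PyTorch versions (1.5+) can pinpoint the offending op when run with `torch.autograd.set_detect_anomaly(True)`.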