
RuntimeError: Expected isFloatingType(grads[i].type().scalarType()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.) #5

Closed
hova88 opened this issue Jun 16, 2020 · 1 comment

hova88 commented Jun 16, 2020

I get this error when running `python train.py`:

```
  File "train.py", line 375, in <module>
    train(start_epoch)
  File "train.py", line 352, in train
    train_one_epoch()
  File "train.py", line 247, in train_one_epoch
    loss.backward()
  File "/home/hova/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/hova/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Expected isFloatingType(grads[i].type().scalarType()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```

Please help me! Orz


hova88 commented Jun 17, 2020

✅ [Solved]

The same issue as THIS.

The `pointnet2/*` code used here is an earlier version.
Add `ctx.mark_non_differentiable(new_ind)` at the end of the `forward` function of `FurthestPointSampling` in `pointnet2/pointnet2_utils.py`,

OR replace the whole `pointnet2/*` folder with THIS.
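For illustration, here is a minimal sketch of the pattern behind the fix. `ToyFurthestPointSampling` is a hypothetical stand-in (not the real CUDA op from pointnet2): like `FurthestPointSampling`, its `forward` returns an integer index tensor. Calling `ctx.mark_non_differentiable` on that output tells autograd not to request (floating-point) gradients for it, which is what triggers the `isFloatingType` error on older PyTorch versions:

```python
import torch

class ToyFurthestPointSampling(torch.autograd.Function):
    """Toy stand-in for a sampling op that outputs integer indices."""

    @staticmethod
    def forward(ctx, xyz, npoint):
        # Stand-in for the real sampling kernel: just pick the first
        # `npoint` indices instead of doing furthest-point sampling.
        new_ind = torch.arange(npoint, dtype=torch.int64)
        # The fix: declare the integer index output non-differentiable,
        # so autograd never tries to build a gradient for it.
        ctx.mark_non_differentiable(new_ind)
        return new_ind

    @staticmethod
    def backward(ctx, grad_out):
        # No gradients flow to either input through the index output.
        return None, None

xyz = torch.randn(16, 3, requires_grad=True)
idx = ToyFurthestPointSampling.apply(xyz, 4)
loss = xyz[idx].sum()   # gradients reach xyz via indexing, not via idx
loss.backward()         # succeeds; no gradient is requested for idx
print(xyz.grad.shape)
```

The same one-line change at the end of the real `forward` in `pointnet2/pointnet2_utils.py` is what resolves the `RuntimeError` above.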
