Expected isFloatingType(grads[i].type().scalarType()) #8
Comments
Dear @Gao-JT, I'm sorry, I have no idea about this problem. You can keep this issue open to see if anyone else can offer a solution.
Dear @HaozheQi, I have solved this problem with the following method:
Great, and I hope our work can give you some inspiration.
Hi, I have met the same problem. Can you tell me how you fixed it?
Hello, here is the solution. Replace line 308 with:

inds = _ext.ball_query(new_xyz, xyz, radius, nsample)
ctx.mark_non_differentiable(inds)
return inds

And replace line 65 with:

fps_inds = _ext.furthest_point_sampling(xyz, npoint)
ctx.mark_non_differentiable(fps_inds)
return fps_inds
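For reference, here is a minimal sketch of how the two patched ops could end up looking. The class names, argument order, and import path follow the standard pointnet2 PyTorch wrappers and are assumptions; only the two added ctx.mark_non_differentiable(...) calls come from the fix above.

```python
from torch.autograd import Function

from pointnet2 import _ext  # compiled CUDA extension; import path is an assumption


class FurthestPointSampling(Function):
    @staticmethod
    def forward(ctx, xyz, npoint):
        # fps_inds is an integer index tensor, so exclude it from autograd
        fps_inds = _ext.furthest_point_sampling(xyz, npoint)
        ctx.mark_non_differentiable(fps_inds)
        return fps_inds

    @staticmethod
    def backward(ctx, grad_out):
        return None, None


class BallQuery(Function):
    @staticmethod
    def forward(ctx, radius, nsample, xyz, new_xyz):
        # inds is an integer index tensor, so exclude it from autograd
        inds = _ext.ball_query(new_xyz, xyz, radius, nsample)
        ctx.mark_non_differentiable(inds)
        return inds

    @staticmethod
    def backward(ctx, grad_out):
        return None, None, None, None
```

Note the question below about _ext.ball_query vs _ext.ball_query_score: if the original line 308 called a different extension function, keeping that call and only adding the mark_non_differentiable line would be the safer change.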
It works, thanks @cuge1995.
Why do you use _ext.ball_query() to replace _ext.ball_query_score()? They are not the same function. Could it change the final results?
Dear @HaozheQi,
Thanks for your excellent work! I am now trying to reproduce the results with the code you provided, but I got this error:
Traceback (most recent call last):
  File "/home/gjt/.pycharm_helpers/pydev/pydevd.py", line 1668, in <module>
    main()
  File "/home/gjt/.pycharm_helpers/pydev/pydevd.py", line 1662, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/gjt/.pycharm_helpers/pydev/pydevd.py", line 1072, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/gjt/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/gjt/P2B/train_tracking.py", line 165, in <module>
    loss.backward()
  File "/home/gjt/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/gjt/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Expected isFloatingType(grads[i].type().scalarType()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
My environment is:
python 3.6.9
pytorch 1.3.1
torchvision 0.4.2
cudatoolkit 10.0.30
cudnn 7.6.5
h5py 2.10.0
numpy 1.17.4
pprint 0.1
enum34 1.1.10
future 0.18.2
pandas 0.25.3
shapely 1.7b1
matplotlib 3.1.2
pomegranate 0.13.3
ipykernel 5.1.3.0
jupyter 1.0.0
imageio 2.6.1
pyquaternion 0.9.5
Do you know what's wrong with it? Looking forward to hearing from you. Thanks for your excellent work again!
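As background on the RuntimeError above: the suggested fix appears to work because ball_query and furthest_point_sampling return integer index tensors, and PyTorch 1.3 expects gradients for custom Function outputs to be floating-point unless those outputs are marked non-differentiable. A minimal, self-contained sketch of that pattern (not from P2B; the toy ArgMaxIndices op is only for illustration):

```python
import torch
from torch.autograd import Function


class ArgMaxIndices(Function):
    """Toy op whose only output is an integer index tensor."""

    @staticmethod
    def forward(ctx, scores):
        idx = scores.argmax(dim=-1)        # LongTensor, cannot carry gradients
        ctx.mark_non_differentiable(idx)   # tell autograd to skip it in backward
        return idx

    @staticmethod
    def backward(ctx, grad_idx):
        return None                        # no gradient w.r.t. scores


scores = torch.randn(4, 8, requires_grad=True)
idx = ArgMaxIndices.apply(scores)

# The integer indices are only used for gathering; gradients flow through the
# floating-point tensors, so backward() never sees a non-floating gradient.
loss = torch.gather(scores, 1, idx.unsqueeze(1)).sum()
loss.backward()
print(scores.grad.shape)  # torch.Size([4, 8])
```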