Why function sum can change the dtype of ndl.Tensor #11
Comments
I also came across this problem, and I may have a clue why this happens. First, NumPy applies type promotion to decide the result dtype; the rules can be found here. Second, the promotion is visible in pdb:

```
(Pdb) node_grads[0].dtype
dtype('float32')
(Pdb) np.result_type(node_grads[0] + 0)
dtype('float64')
```

I also found that NumPy's promotion rules sometimes make my scalar ops (i.e. …
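For reference, here is a minimal, self-contained NumPy sketch of those promotion rules at work. Note that in the pdb session above, `node_grads[0] + 0` goes through needle's own scalar-add op (presumably via the Tensor's reflected add) rather than raw NumPy, so the float64 there comes from inside needle's op:

```python
import numpy as np

a32 = np.ones(3, dtype=np.float32)

# float32 array + float64 array always promotes to float64.
print((a32 + np.zeros(3, dtype=np.float64)).dtype)  # float64

# np.result_type applies the same promotion table.
print(np.result_type(np.float32, np.float64))       # float64

# A bare Python scalar is "weak": the array's dtype wins.
print((a32 + 0).dtype)                              # float32

# A NumPy float64 *scalar*, however, behaves differently across
# versions: legacy value-based casting kept float32, while NumPy 2.0
# (NEP 50) promotes the result to float64.
print((a32 + np.float64(0)).dtype)
```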
That's true. In the softmax loss computation, I use a code snippet like the one sketched below, and it can produce float64 results.
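A minimal sketch of the kind of softmax-loss computation in question, assuming needle's usual ops (`ndl.log`, `ndl.exp`, `ndl.summation`); the variable names and exact formulation here are assumptions, not the commenter's actual code:

```python
import needle as ndl

def softmax_loss(Z: ndl.Tensor, y_one_hot: ndl.Tensor) -> ndl.Tensor:
    """Hypothetical softmax loss: Z is (batch, classes) logits,
    y_one_hot is a one-hot encoding of the true classes."""
    batch_size = Z.shape[0]
    log_sum_exp = ndl.log(ndl.summation(ndl.exp(Z), axes=(1,)))
    z_y = ndl.summation(Z * y_one_hot, axes=(1,))
    # The division by a Python int goes through DivScalar; if DivScalar
    # promotes its scalar to float64 internally, the whole loss becomes
    # float64 even though Z is float32.
    return ndl.summation(log_sum_exp - z_y) / batch_size
```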
The divScalar function can be implemented by explicitly calling …; one way of keeping the dtype fixed is sketched below.
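A minimal sketch of a `DivScalar.compute` that avoids the promotion, assuming the standard needle `TensorOp` structure; pinning the ufunc's `dtype` is one possible fix, not necessarily the one the commenter meant:

```python
import numpy as np
from needle.autograd import TensorOp  # assumed import path

class DivScalar(TensorOp):
    def __init__(self, scalar):
        self.scalar = scalar

    def compute(self, a: np.ndarray) -> np.ndarray:
        # Pin the output dtype to the input dtype so a float64 scalar
        # cannot silently promote a float32 array to float64.
        return np.true_divide(a, self.scalar, dtype=a.dtype)

    def gradient(self, out_grad, node):
        # d(a / s) / da = 1 / s
        return out_grad / self.scalar
```

Equivalently, `(a / self.scalar).astype(a.dtype)` would cast the result back after the fact.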
I met the following error when testing sgd:

…

Then I found that one line in the function compute_gradient_of_variables causes this error. I changed it and things went right (see the sketch below). The following dtype shown in pdb is weird, so maybe I was wrong:

…
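The usual culprit here is summing the partial adjoints with Python's built-in `sum`, which starts from the integer 0 and so routes the first addition through a scalar op. A minimal sketch of the common fix, assuming needle's course scaffold (where a helper named `sum_node_list` plays exactly this role); treat the surrounding details as assumptions:

```python
import operator
from functools import reduce

def sum_node_list(node_list):
    """Sum a list of needle Tensors without Python's built-in sum.

    Built-in sum(node_list) computes 0 + node_list[0] + ..., and the
    leading integer 0 turns the first addition into a scalar op whose
    result dtype can differ from the tensors' dtype.
    """
    return reduce(operator.add, node_list)

# Inside compute_gradient_of_variables (sketch):
#   grad = sum_node_list(node_to_output_grads_list[node])
# instead of:
#   grad = sum(node_to_output_grads_list[node])
```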