
Compare the gradient consistency between GPU and CPU calculations #3476

Merged: 8 commits into PaddlePaddle:develop, Aug 17, 2017

Conversation

qingqing01 (Contributor) commented Aug 14, 2017:

Fix #3478

  1. Compare the gradient consistency between GPU and CPU calculations in gradient_checker.py, and refine gradient_checker.py (see the sketch after this list).
  2. Move the unit tests that lived in gradient_checker.py into test_gradient_checker.py.
  3. Add a unit test for the sigmoid backward implementation.
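
For context, here is a minimal sketch of what item 1 can look like, assuming the paddle.v2.framework API of that era (core.Operator.backward, core.is_compile_gpu, core.CPUPlace/GPUPlace) and the __get_gradient helper discussed in the review below; names and tolerances are illustrative rather than the exact merged code:

import unittest
import itertools
import numpy
import paddle.v2.framework.core as core

class GradientChecker(unittest.TestCase):
    def compare_grad(self, forward_op, input_value):
        """Check that CPU- and GPU-computed input gradients agree."""
        backward_op = core.Operator.backward(forward_op, set())
        # Skip if Paddle was built without CUDA or the op lacks a GPU kernel.
        if not (core.is_compile_gpu() and backward_op.support_gpu()):
            return
        outputs = backward_op.outputs()
        out_names = [item for k in outputs for item in outputs[k]]
        # __get_gradient is the private helper discussed in the review below.
        cpu_grads = self.__get_gradient(forward_op, backward_op, input_value,
                                        out_names, core.CPUPlace())
        gpu_grads = self.__get_gradient(forward_op, backward_op, input_value,
                                        out_names, core.GPUPlace(0))
        for c_grad, g_grad, name in itertools.izip(cpu_grads, gpu_grads,
                                                   out_names):
            numpy.testing.assert_allclose(
                c_grad, g_grad, rtol=1e-4, atol=1e-4,
                err_msg="gradient for %s differs between CPU and GPU" % name)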

for name in numeric_grads:
    b = numpy.array(scope.find_var(grad_var_name(name)).get_tensor())
    a = numeric_grads[name]

def get_grad(self, forward_op, backward_op, input_vars, grad_names, place):
jacquesqiao (Member) commented Aug 15, 2017:

Rename to __get_gradient? Private methods should be named with a double-underscore prefix (__xxx):
https://google.github.io/styleguide/pyguide.html?showone=Naming#Naming
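
As a side note, a small self-contained illustration of the convention being pointed to here (not Paddle code): a double-underscore prefix marks a method as class-internal and triggers Python's name mangling:

class Checker(object):
    def __get_gradient(self):
        # Double leading underscores signal "private"; Python mangles the
        # attribute name to _Checker__get_gradient.
        return [0.1, 0.2]

    def check(self):
        # Calls from inside the class work unchanged.
        return self.__get_gradient()

c = Checker()
print(c.check())       # [0.1, 0.2]
# c.__get_gradient()   # AttributeError: no attribute '__get_gradient'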

qingqing01 (Contributor, Author) replied:

Done. Renamed to __get_gradient.

out_names = [item for k in outputs for item in outputs[k]]

# create input var and set value
for name, value in input_vars.iteritems():
Member commented:

These input_vars are numpy arrays, not Paddle Variables. Would it be better to name them input_python_vars?

qingqing01 (Contributor, Author) replied:

Renamed to input_value; input_python_vars is a little long.

    ]
    return outs

def compare_grad(self, forward_op, inputs):
Member commented:

Same as above: rename to __compare_gradient?

qingqing01 (Contributor, Author) replied:

This is not a private method; users can call it to compare the gradients computed by the backward op (see the usage sketch below).
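
For illustration, a hypothetical operator test calling this public method; create_op, the module layout, and the input shape follow the test conventions of the time and are assumptions, not the exact merged test:

import numpy
from gradient_checker import GradientChecker, create_op  # assumed layout

class TestSigmoidGrad(GradientChecker):
    def test_compare_grad(self):
        op = create_op("sigmoid")
        inputs = {"X": numpy.random.uniform(
            0.1, 1.0, [11, 17]).astype("float32")}
        # Public entry point: compare the backward gradients on CPU and GPU.
        self.compare_grad(op, inputs)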

for name in numeric_grads:
    b = numpy.array(scope.find_var(grad_var_name(name)).get_tensor())
    a = numeric_grads[name]

def get_grad(self, forward_op, backward_op, input_vars, grad_names, place):
Member commented:

Add some comments to these methods.

qingqing01 (Contributor, Author) replied:

Done.
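
The added comments might look along these lines; the parameter descriptions are illustrative, not the exact merged docstring:

def __get_gradient(self, forward_op, backward_op, input_value, grad_names,
                   place):
    """Run forward_op and backward_op on the given place and fetch the
    gradients named in grad_names.

    :param forward_op:  the operator under test
    :param backward_op: its gradient operator
    :param input_value: dict mapping input names to numpy arrays
    :param grad_names:  names of the gradient variables to return
    :param place:       core.CPUPlace() or core.GPUPlace(0)
    :return:            a list of numpy arrays, one per grad name
    """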

@qingqing01 qingqing01 merged commit c68bfc3 into PaddlePaddle:develop Aug 17, 2017
@qingqing01 qingqing01 deleted the bp_test branch March 7, 2018 12:03