
Feature/while op sentiment analysis #6282

Merged

Conversation

reyoung (Collaborator) commented Dec 5, 2017

fixes #5861

reyoung requested a review from QiJune on December 5, 2017 07:15
class DynamicRNN(object):
BEFORE_RNN = 0
IN_RNN = 1
AFTER_RNN = 2
Member:

So, can we use nested dynamic RNNs? Should the status just be IN_RNN and OUT_RNN?

type='max_sequence_len',
inputs={'RankTable': self.lod_rank_table},
outputs={"Out": self.max_seq_len})
self.cond.stop_gradient = True
Member:

Why is stop_gradient set to True here?

Collaborator Author (reyoung):

There is no need to calculate the gradient of less_than.
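
A minimal sketch of the point, with plain Python standing in for the fluid ops (the op names in the comments are paraphrases, not framework code):

# The while-loop condition is the boolean output of a comparison,
# e.g. cond = less_than(step_idx, max_seq_len). It only steers control
# flow, so no gradient ever flows through it, and the variable can
# safely be marked stop_gradient = True.
step_idx, max_seq_len = 0, 3
while step_idx < max_seq_len:   # cond = less_than(step_idx, max_seq_len)
    step_idx += 1               # forward computation continues; cond gets no gradient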

self.output_array = []
self.outputs = []
self.cond = self.helper.create_tmp_variable(dtype='bool')
self.cond.stop_gradient = False
Member:

I am a little confused about these two concepts, stop_gradient and no_gradient. Are they the same?

Collaborator Author (reyoung):

no_grad_set is a parameter of the backward(...) method, while stop_gradient is an attribute of a Python Variable.

Variables whose stop_gradient=True are collected and passed to backward(no_grad_set=...).
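
A minimal runnable sketch of that relationship; the Variable class and the collect_no_grad_set helper below are hypothetical stand-ins, and only the stop_gradient attribute and the no_grad_set parameter come from the discussion above:

# Hypothetical stand-ins showing how stop_gradient feeds backward(no_grad_set=...).
class Variable(object):
    def __init__(self, name, stop_gradient=False):
        self.name = name
        # Set from Python, e.g. cond.stop_gradient = True.
        self.stop_gradient = stop_gradient

def collect_no_grad_set(variables):
    # Gather every variable flagged stop_gradient=True; the resulting set
    # is what would be passed as backward(no_grad_set=...), so no gradient
    # op is generated for those variables.
    return set(v.name for v in variables if v.stop_gradient)

cond = Variable('cond', stop_gradient=True)  # e.g. the less_than output
mem = Variable('mem')

assert collect_no_grad_set([cond, mem]) == {'cond'}
# backward(loss, no_grad_set=collect_no_grad_set([cond, mem]))  # skips 'cond'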

self.step_idx.stop_gradient = False
self.status = DynamicRNN.IN_RNN
with self.while_op.block():
yield
Member:

Why yield here, rather than before line 1922?
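
For context, a minimal sketch of the generator-based context-manager pattern the question is about; the class below is illustrative, not the actual fluid DynamicRNN:

from contextlib import contextmanager

class RNNSketch(object):
    BEFORE_RNN, IN_RNN, AFTER_RNN = 0, 1, 2

    def __init__(self):
        self.status = RNNSketch.BEFORE_RNN

    @contextmanager
    def block(self):
        # Everything before the yield runs when `with rnn.block():` is
        # entered, i.e. after the while-op setup is in place; the user's
        # per-step code then executes at the yield, inside the loop body.
        self.status = RNNSketch.IN_RNN
        yield
        # Everything after the yield runs when the with-block exits.
        self.status = RNNSketch.AFTER_RNN

rnn = RNNSketch()
with rnn.block():
    assert rnn.status == RNNSketch.IN_RNN
assert rnn.status == RNNSketch.AFTER_RNN

Placed this way, the statements before the yield act as setup, so the user's per-step code lands inside the while block.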

QiJune (Member) left a comment:

LGTM

rnn.update_memory(mem, out_)
rnn.output(out_)

last = fluid.layers.sequence_pool(input=rnn(), pool_type='last')
Member:

Would it be easier to understand if rnn() were changed to rnn.out?
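
A minimal sketch of the two spellings under discussion, a __call__ that returns the outputs versus an out property (the class is illustrative, not the PR's code):

class RNNOutput(object):
    def __init__(self, outputs):
        self._outputs = outputs

    def __call__(self):
        # Current spelling in the PR: sequence_pool(input=rnn(), ...)
        return self._outputs

    @property
    def out(self):
        # Suggested spelling: sequence_pool(input=rnn.out, ...)
        return self._outputs

rnn = RNNOutput(['step_0', 'step_1'])
assert rnn() == rnn.out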


reyoung merged commit 229c2e7 into PaddlePaddle:develop on Dec 6, 2017
reyoung deleted the feature/while_op_sentiment_analysis branch on December 26, 2017 09:23
Development

Successfully merging this pull request may close these issues: Dynamic RNN API.

3 participants