Feature/while op sentiment analysis #6282
Conversation
A v2-API-like data feeder for the book demos. We can feed data directly from a reader.
class DynamicRNN(object):
    BEFORE_RNN = 0
    IN_RNN = 1
    AFTER_RNN = 2
So, can we use nested dynamic RNNs? Should the statuses be IN_RNN and OUT_RNN?
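As a rough illustration of how such a status flag gates the API, here is a minimal pure-Python sketch. Only the three constants come from the snippet above; `block()`, `step_input()`, and the checks are simplified stand-ins, not the actual fluid implementation:

```python
from contextlib import contextmanager


class DynamicRNN(object):
    # Lifecycle states from the snippet above.
    BEFORE_RNN = 0
    IN_RNN = 1
    AFTER_RNN = 2

    def __init__(self):
        self.status = DynamicRNN.BEFORE_RNN

    @contextmanager
    def block(self):
        # A flat status like this only supports one level of RNN;
        # nesting would need a stack or counter instead.
        if self.status != DynamicRNN.BEFORE_RNN:
            raise ValueError("block() can only be entered once")
        self.status = DynamicRNN.IN_RNN
        try:
            yield
        finally:
            self.status = DynamicRNN.AFTER_RNN

    def step_input(self, x):
        # Step inputs may only be declared inside the RNN block.
        if self.status != DynamicRNN.IN_RNN:
            raise ValueError("step_input() must be used inside block()")
        return x
```

Note that because the status is a single scalar rather than a stack, entering a second `block()` from inside the first raises, which is one way to read the nesting question above.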
    type='max_sequence_len',
    inputs={'RankTable': self.lod_rank_table},
    outputs={"Out": self.max_seq_len})
self.cond.stop_gradient = True
Why is stop_gradient set to True here?
There is no need to calculate the gradient of less_than.
self.output_array = []
self.outputs = []
self.cond = self.helper.create_tmp_variable(dtype='bool')
self.cond.stop_gradient = False
I am a little confused about these two concepts, stop_gradient and no_gradient. Are they the same?
no_grad_set is a parameter of the backward(...) method, while stop_gradient is an attribute of a Python Variable. The variables whose stop_gradient=True will be passed to backward(no_grad_set=...).
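The relationship described above can be sketched in plain Python. `Variable` and `collect_no_grad_set` here are simplified stand-ins for illustration, not the real fluid API:

```python
class Variable(object):
    # Stand-in for a framework variable: stop_gradient is a
    # per-variable attribute, False by default.
    def __init__(self, name, stop_gradient=False):
        self.name = name
        self.stop_gradient = stop_gradient


def collect_no_grad_set(variables):
    # The backward pass gathers every variable flagged with
    # stop_gradient=True into the no_grad_set it is given,
    # so no gradient ops are generated for them.
    return {v.name for v in variables if v.stop_gradient}
```

So marking `self.cond.stop_gradient = True` is, in effect, a per-variable way of adding `cond` to `backward(no_grad_set=...)`.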
self.step_idx.stop_gradient = False
self.status = DynamicRNN.IN_RNN
with self.while_op.block():
    yield
Why yield here, and not before line 1922?
LGTM
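The reason the yield sits inside the inner `with` can be sketched with plain context managers. `trace`, `while_block`, and `rnn_block` below are illustrative stand-ins, not fluid APIs; the point is that the caller's step code (everything under `with rnn.block():`) must run while the while-op's block is still active:

```python
from contextlib import contextmanager

trace = []


@contextmanager
def while_block():
    # Stand-in for self.while_op.block(): ops emitted while this
    # context is open land inside the while-op's sub-block.
    trace.append("enter while block")
    yield
    trace.append("exit while block")


@contextmanager
def rnn_block():
    trace.append("status = IN_RNN")
    with while_block():
        # Yielding here hands control to the caller's step code
        # while the while block is still open. Yielding before
        # entering while_block() would emit the step ops outside it.
        yield
    trace.append("status = AFTER_RNN")
```

Running `with rnn_block(): trace.append("user step code")` records the step code strictly between "enter while block" and "exit while block".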
rnn.update_memory(mem, out_)
rnn.output(out_)

last = fluid.layers.sequence_pool(input=rnn(), pool_type='last')
Would it be easier to understand if rnn() were changed to rnn.out?
Just follow the API design on https://github.com/PaddlePaddle/talks/blob/develop/paddle-gtc-china.pdf
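The two API shapes being compared can be illustrated neutrally; `RNNResult` is a hypothetical stand-in, not part of fluid, and both spellings return the same outputs:

```python
class RNNResult(object):
    # Hypothetical holder for an RNN's outputs, supporting both
    # call-style and attribute-style access.
    def __init__(self, outputs):
        self._outputs = outputs

    def __call__(self):
        # rnn() style, as used in this PR (matching the talk's design).
        return self._outputs

    @property
    def out(self):
        # rnn.out style, as suggested in the review comment.
        return self._outputs
```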
fixes #5861