
Add simple wrapper for beam search. #675

Open
wants to merge 11 commits into dev-static

Conversation

@pkuyym pkuyym (Contributor) commented Mar 1, 2018

Provide a simple seq2seq model.
Fix part of #674.

@pkuyym pkuyym changed the title from "Add simple wrapper for beam search." to "[WIP] Add simple wrapper for beam search." on Mar 1, 2018
@lcy-seso (Collaborator) commented:

It seems that this PR may conflict with #729. Has the work in this PR been finished, so that we can begin reviewing it now? Thank you for this work.

@pkuyym pkuyym (Contributor, Author) commented Mar 20, 2018

@lcy-seso This work is almost finished, apart from some refinement and cleanup. The beam search API, however, is ready for review, so please feel free to start.

@pkuyym pkuyym requested a review from lcy-seso March 20, 2018 01:55
@lcy-seso (Collaborator) commented:

I see. Thank you.

@pkuyym pkuyym changed the title from "[WIP] Add simple wrapper for beam search." to "Add simple wrapper for beam search." on Mar 20, 2018
@pkuyym pkuyym requested a review from panyx0718 March 20, 2018 05:37
@panyx0718 panyx0718 (Contributor) left a comment

High-level comment: please add plenty of comments, especially on all public methods. For methods that don't need to be exposed, please make them private.
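A minimal illustration of the convention being requested (the class and method names here are hypothetical, not taken from this PR): public methods carry docstrings, and helpers that are not part of the public surface get a leading underscore.

```python
class BeamSearchDecoder(object):
    """Runs beam search on top of a trained seq2seq model (hypothetical example)."""

    def decode(self, init_ids, init_scores):
        """Public API: expand and prune candidates until end-of-sequence.

        Returns (translation_ids, translation_scores).
        """
        return self._expand_and_prune(init_ids, init_scores)

    def _expand_and_prune(self, ids, scores):
        # The leading underscore marks this as an internal helper that
        # callers should not rely on.
        raise NotImplementedError
```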

"(default: %(default)d)")


def lstm_step(x_t, hidden_t_prev, cell_t_prev, size):

Do you need to implement a separate LSTM here? If so, could it be made private?


forget_gate = fluid.layers.sigmoid(x=linear([hidden_t_prev, x_t]))
input_gate = fluid.layers.sigmoid(x=linear([hidden_t_prev, x_t]))
output_gate = fluid.layers.sigmoid(x=linear([hidden_t_prev, x_t]))

Are all these gates computed the same way? I remember that in TF's LSTM implementation you don't need three separate FC calls; the gates can share a single projection.
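For reference, a sketch of the fused-projection approach this comment alludes to: compute all four gates with one FC call and split the result, instead of projecting each gate separately. Treat the exact Fluid layer names and signatures (concat / fc / split / elementwise_*) as assumptions, and lstm_step_fused is a hypothetical name.

```python
import paddle.fluid as fluid

def lstm_step_fused(x_t, hidden_t_prev, cell_t_prev, size):
    # One projection produces all four gates at once.
    step_in = fluid.layers.concat([hidden_t_prev, x_t], axis=1)
    gates = fluid.layers.fc(input=step_in, size=4 * size, bias_attr=True)
    forget_gate, input_gate, output_gate, cell_tilde = fluid.layers.split(
        gates, num_or_sections=4, dim=1)

    # Standard LSTM cell update built from the split gates.
    cell_t = fluid.layers.elementwise_add(
        x=fluid.layers.elementwise_mul(
            x=fluid.layers.sigmoid(forget_gate), y=cell_t_prev),
        y=fluid.layers.elementwise_mul(
            x=fluid.layers.sigmoid(input_gate), y=fluid.layers.tanh(cell_tilde)))
    hidden_t = fluid.layers.elementwise_mul(
        x=fluid.layers.sigmoid(output_gate), y=fluid.layers.tanh(cell_t))
    return hidden_t, cell_t
```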

return translation_ids, translation_scores, feeding_list


def to_lodtensor(data, place, dtype='int64'):

Could this be moved into a general utility library?
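If it does get extracted, the shared utility could look roughly like the usual Fluid pattern below. This is only a sketch: the exact LoDTensor construction API (fluid.LoDTensor vs fluid.core.LoDTensor) varies across Fluid versions, and the second return value mirrors the lod[-1] seen in the diff.

```python
import numpy as np
import paddle.fluid as fluid

def to_lodtensor(data, place, dtype='int64'):
    """Pack a list of variable-length sequences into one LoDTensor.

    `data` is a list of sequences (lists of ids); `place` is a CPUPlace or
    CUDAPlace. Returns the tensor and the total number of elements.
    """
    # Build level-0 LoD offsets from the sequence lengths.
    lod = [0]
    for seq in data:
        lod.append(lod[-1] + len(seq))
    flattened = np.concatenate(data, axis=0).astype(dtype)
    flattened = flattened.reshape([len(flattened), 1])
    lod_t = fluid.LoDTensor()
    lod_t.set(flattened, place)
    lod_t.set_lod([lod])
    return lod_t, lod[-1]
```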

return lod_t, lod[-1]


def lodtensor_to_ndarray(lod_tensor):

Could this also be moved into a general utility library? It needs comments as well.

@@ -0,0 +1,458 @@
"""seq2seq model for fluid."""

To me, this seems to be an example. Perhaps the main file should live in the model zoo, examples, or tests, while the general utility methods stay here?



if __name__ == '__main__':
#train_main()

Could a command-line flag control this instead of commenting the call out?
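One possible shape for that (a sketch only: train_main comes from the commented-out call above, while infer_main and the --mode flag are assumptions; the file may already have an argparse parser this could be folded into):

```python
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument(
        '--mode', choices=['train', 'infer'], default='train',
        help='Whether to run training or beam-search inference.')
    args = parser.parse_args()
    if args.mode == 'train':
        train_main()   # assumed training entry point
    else:
        infer_main()   # assumed inference entry point
```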

@@ -0,0 +1,245 @@
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserve.

Should this file live in examples, tests, or the model zoo?

return self._need_reorder


class MemoryState(object):

Should this class (and many others) be private, i.e. _MemoryState?

self._counter.stop_gradient = True

# write initial state
block.append_op(

Why is this a raw append_op rather than layers.array_write?
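If I read the intent right, the same write could go through the array_write layer instead of a hand-built append_op. A rough sketch; init_state is a placeholder for the decoder's initial state variable, and whether this fits the decoder's internals has not been verified.

```python
import paddle.fluid as fluid

# Store the initial state at position 0 of a LoDTensorArray via the
# array_write layer instead of appending the op manually.
zero_idx = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
zero_idx.stop_gradient = True
state_array = fluid.layers.array_write(init_state, i=zero_idx)
```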

self._switched_decoder = False


class TrainingDecoder(object):

This class needs comments.

@CLAassistant
CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
