Fix docs issues #290

Merged: 4 commits, merged Feb 5, 2020
docs/code/modules.rst (5 additions, 0 deletions)

```diff
@@ -107,6 +107,11 @@ Decoders
 Bahdanau
 Gumbel
 
+:hidden:`DecoderBase`
+~~~~~~~~~~~~~~~~~~~~~~~~
+.. autoclass:: texar.torch.modules.DecoderBase
+    :members:
+
 :hidden:`RNNDecoderBase`
 ~~~~~~~~~~~~~~~~~~~~~~~~
 .. autoclass:: texar.torch.modules.RNNDecoderBase
```
examples/gpt-2/prepare_data.py (1 addition, 1 deletion)

```diff
@@ -35,7 +35,7 @@
     help="The output directory where the pickle files will be generated. "
          "By default it is set to be the same as `--data-dir`.")
 parser.add_argument(
-    "--pretrained-model-name", type=str, default="gpt2-small",
+    '--pretrained-model-name', type=str, default='gpt2-small',
     choices=tx.modules.GPT2Decoder.available_checkpoints(),
     help="Name of the pre-trained checkpoint to load.")
 parser.add_argument(
```
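The `choices` above come straight from the pretrained-checkpoint registry, so the values accepted by `--pretrained-model-name` can be listed directly. A minimal sketch (the exact names returned depend on the installed texar-pytorch version):

```python
import texar.torch as tx

# Names accepted by --pretrained-model-name; "gpt2-small" is the default above.
print(tx.modules.GPT2Decoder.available_checkpoints())
```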
texar/torch/modules/decoders/rnn_decoder_base.py (7 additions, 2 deletions)

```diff
@@ -107,14 +107,15 @@ def forward(self,  # type: ignore
         <https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode>`_.
 
         See Also:
-            Arguments of :meth:`create_helper`.
+            Arguments of :meth:`create_helper`, for arguments like
+            :attr:`decoding_strategy`.
 
         Args:
             inputs (optional): Input tensors for teacher forcing decoding.
                 Used when :attr:`decoding_strategy` is set to
                 ``"train_greedy"``, or when `hparams`-configured helper is used.
 
-                The attr:`inputs` is a :tensor:`LongTensor` used as index to
+                The :attr:`inputs` is a :tensor:`LongTensor` used as index to
                 look up embeddings and feed in the decoder. For example, if
                 :attr:`embedder` is an instance of
                 :class:`~texar.torch.modules.WordEmbedder`, then :attr:`inputs`
@@ -143,6 +144,10 @@ def forward(self,  # type: ignore
                 that defines the decoding strategy. If given,
                 ``decoding_strategy`` and helper configurations in
                 :attr:`hparams` are ignored.
+
+                :meth:`create_helper` can be used to create some of the common
+                helpers for, e.g., teacher-forcing decoding, greedy decoding,
+                sample decoding, etc.
             infer_mode (optional): If not `None`, overrides mode given by
                 `self.training`.
             **kwargs: Other keyword arguments for constructing helpers
```
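For context, a minimal sketch of the two call patterns this docstring describes, using the concrete subclass `BasicRNNDecoder`. Sizes and token ids are hypothetical, and it assumes a texar-pytorch version whose decoders take a `token_embedder` at construction:

```python
import torch
import texar.torch as tx

vocab_size, emb_dim = 100, 16                      # hypothetical sizes
embedder = tx.modules.WordEmbedder(vocab_size=vocab_size,
                                   hparams={"dim": emb_dim})
decoder = tx.modules.BasicRNNDecoder(
    input_size=emb_dim, vocab_size=vocab_size, token_embedder=embedder)

# Teacher forcing ("train_greedy"): `inputs` is a LongTensor of token ids,
# used as indices to look up embeddings fed into the decoder.
inputs = torch.randint(vocab_size, (2, 5))         # [batch_size, max_time]
outputs, final_state, lengths = decoder(
    inputs=inputs,
    sequence_length=torch.tensor([5, 4]),
    decoding_strategy="train_greedy")

# Inference: build a helper explicitly with `create_helper`; when `helper`
# is given, `decoding_strategy` and hparams-configured helpers are ignored.
helper = decoder.create_helper(
    decoding_strategy="infer_greedy",
    start_tokens=torch.tensor([1, 1]),             # hypothetical BOS id
    end_token=2)                                   # hypothetical EOS id
outputs, final_state, lengths = decoder(
    helper=helper, max_decoding_length=10)
```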
texar/torch/modules/decoders/transformer_decoders.py (19 additions, 0 deletions)

```diff
@@ -451,6 +451,25 @@ def forward(self,  # type: ignore
                 :attr:`hparams` are ignored.
             infer_mode (optional): If not `None`, overrides mode given by
                 :attr:`self.training`.
+            **kwargs (optional, dict): Other keyword arguments. Typically ones
+                such as:
+
+                - **start_tokens**: A :tensor:`LongTensor` of shape
+                  ``[batch_size]``, the start tokens.
+                  Used when :attr:`decoding_strategy` is ``"infer_greedy"`` or
+                  ``"infer_sample"`` or when :attr:`beam_search` is set.
+                  Ignored when :attr:`context` is set.
+
+                  When used with the Texar data module, to get ``batch_size``
+                  samples where ``batch_size`` is changing according to the
+                  data module, this can be set as
+                  :python:`start_tokens=torch.full_like(batch['length'],
+                  bos_token_id)`.
+
+                - **end_token**: An integer or 0D :tensor:`LongTensor`, the
+                  token that marks the end of decoding.
+                  Used when :attr:`decoding_strategy` is ``"infer_greedy"`` or
+                  ``"infer_sample"``, or when :attr:`beam_search` is set.
 
         Returns:
```
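A usage sketch of these `**kwargs`, following the `torch.full_like` pattern from the docstring. The encoder output, sizes, and special token ids are stand-ins, and the `(outputs, lengths)` return shape for inference decoding is an assumption here:

```python
import torch
import texar.torch as tx

vocab_size, dim = 100, 512                         # hypothetical sizes
token_embedder = tx.modules.WordEmbedder(vocab_size=vocab_size,
                                         hparams={"dim": dim})
pos_embedder = tx.modules.SinusoidsPositionEmbedder(
    position_size=256, hparams={"dim": dim})

def token_pos_embedder(tokens, positions):
    # Sum token and position embeddings to form decoder inputs.
    return token_embedder(tokens) + pos_embedder(positions)

decoder = tx.modules.TransformerDecoder(
    token_pos_embedder=token_pos_embedder, vocab_size=vocab_size)

memory = torch.randn(2, 8, dim)                    # stand-in encoder output
src_lengths = torch.tensor([8, 6])
batch_length = torch.tensor([8, 6])                # plays the role of batch['length']
bos_token_id, eos_token_id = 1, 2                  # hypothetical special ids

# Greedy inference decoding driven by start_tokens / end_token, with
# start_tokens built exactly as the docstring suggests.
outputs, lengths = decoder(
    memory=memory,
    memory_sequence_length=src_lengths,
    decoding_strategy="infer_greedy",
    start_tokens=torch.full_like(batch_length, bos_token_id),
    end_token=eos_token_id,
    max_decoding_length=16)
print(outputs.sample_id.shape, lengths)            # assuming TransformerDecoderOutput
```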