
Commit cbfb8d7

Authored by d-kleine and stevhliu
doc: Clarify is_decoder usage in PretrainedConfig documentation (#36724)
* fix: clarify decoder usage in PretrainedConfig documentation
* Apply suggestions from code review (updated doc)

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
1 parent ac1a1b6 · commit cbfb8d7

1 file changed: +1 −1

src/transformers/configuration_utils.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -109,7 +109,7 @@ class PretrainedConfig(PushToHubMixin):
         is_encoder_decoder (`bool`, *optional*, defaults to `False`):
             Whether the model is used as an encoder/decoder or not.
         is_decoder (`bool`, *optional*, defaults to `False`):
-            Whether the model is used as decoder or not (in which case it's used as an encoder).
+            Whether to only use the decoder in an encoder-decoder architecture, otherwise it has no effect on decoder-only or encoder-only architectures.
         cross_attention_hidden_size** (`bool`, *optional*):
             The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder
             setting and the cross-attention hidden dimension differs from `self.config.hidden_size`.
```
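For readers unfamiliar with the flag, here is a minimal sketch of the usage the revised docstring describes, assuming the standard `transformers` BERT classes (illustrative only, not part of this commit):

```python
from transformers import BertConfig, BertLMHeadModel

# `is_decoder=True` marks this config as the decoder side of an
# encoder-decoder setup: the model applies a causal attention mask, and
# `add_cross_attention=True` inserts cross-attention layers that attend
# to the encoder's hidden states.
config = BertConfig.from_pretrained("bert-base-uncased")
config.is_decoder = True
config.add_cross_attention = True

decoder = BertLMHeadModel.from_pretrained("bert-base-uncased", config=config)
```

On a decoder-only model such as GPT-2, or an encoder-only model used on its own, setting `is_decoder` has no effect, which is exactly the caveat the new wording adds.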
