Commit 60c872d

[Doc] Fix small typo in Transformers fallback (#14791)
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
1 parent 3fb17d2 commit 60c872d

File tree: 1 file changed (+1, −1)


docs/source/models/supported_models.md

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ class MyAttention(nn.Module):

     def forward(self, hidden_states, **kwargs): # <- kwargs are required
         ...
-        attention_interface = attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
+        attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
         attn_output, attn_weights = attention_interface(
             self,
             query_states,
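For context, the fixed line follows the Transformers pattern of looking up an attention implementation from a registry keyed by `config._attn_implementation`; the typo had merely duplicated the assignment. Below is a minimal, self-contained sketch of that dispatch pattern. The registry, config class, and toy "attention" math here are simplified stand-ins for illustration, not the actual Transformers API:

```python
# Sketch of the attention-dispatch pattern in the diff. In Transformers,
# ALL_ATTENTION_FUNCTIONS maps implementation names (e.g. "eager", "sdpa")
# to callables; here we mimic it with a plain dict and toy tensors (lists).

def eager_attention(module, query, key, value):
    # Stand-in kernel: outer product of query and key as "weights",
    # then a weighted sum over value. Returns (output, weights).
    weights = [[q * k for k in key] for q in query]
    output = [sum(w * v for w, v in zip(row, value)) for row in weights]
    return output, weights

# Hypothetical registry mirroring Transformers' ALL_ATTENTION_FUNCTIONS.
ALL_ATTENTION_FUNCTIONS = {"eager": eager_attention}

class Config:
    _attn_implementation = "eager"

class MyAttention:
    def __init__(self, config):
        self.config = config

    def forward(self, query, key, value, **kwargs):  # <- kwargs are required
        # The corrected line: a single lookup, not a duplicated assignment.
        attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
        attn_output, attn_weights = attention_interface(self, query, key, value)
        return attn_output

attn = MyAttention(Config())
out = attn.forward([1.0, 2.0], [1.0, 1.0], [3.0, 4.0])  # → [7.0, 14.0]
```

Because the registry is a plain dict, swapping implementations is just a matter of registering another callable under a new key and changing `_attn_implementation` on the config.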
