Merge pull request #2584 from whoisjones/GH-2582/constructor_arugments
TransformerWordEmbeddings to process long sequences by default
alanakbik authored Jan 5, 2022
2 parents a829734 + e0bd90f commit 3b1a27c
Showing 1 changed file with 7 additions and 1 deletion.
8 changes: 7 additions & 1 deletion flair/embeddings/token.py
@@ -889,6 +889,7 @@ def __init__(
         self,
         model: str = "bert-base-uncased",
         is_document_embedding: bool = False,
+        allow_long_sequences: bool = True,
         **kwargs,
     ):
         """
@@ -902,7 +903,12 @@
         :param fine_tune: If True, allows transformers to be fine-tuned during training
         """
         TransformerEmbedding.__init__(
-            self, model=model, is_token_embedding=True, is_document_embedding=is_document_embedding, **kwargs
+            self,
+            model=model,
+            is_token_embedding=True,
+            is_document_embedding=is_document_embedding,
+            allow_long_sequences=allow_long_sequences,
+            **kwargs,
         )

     @classmethod
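For context, a minimal usage sketch (not part of this commit), assuming a flair installation that includes this change; the model name and example text below are placeholders:

from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings

# With this change, allow_long_sequences defaults to True, so texts longer than the
# transformer's maximum input length are processed by default; pass False to opt out.
embeddings = TransformerWordEmbeddings("bert-base-uncased")

sentence = Sentence("A very long input text ...")
embeddings.embed(sentence)

for token in sentence:
    print(token.text, token.embedding.shape)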
