update transformers dependency to latest transformers==4.0.0 #107
Comments
Great! Looking forward to the PR!
@MXueguang bump it up to 4 maybe? Transformers v4.0.0-rc-1 is out and has breaking changes (possibly breaking T5 results, but in a good way?)
Hey @ronakice, but it is still a release candidate, right? I mean, it might be unstable for a while...
I don't think we should worry about that too much; transformers is always kind of a work in progress, so it shouldn't cause too many issues for us. Moving from the release candidate to the final version will be easier anyway (I assume v4 will be final by the time we're done)!
Ok, SGTM!
Closing, see #118.
The current dependency, transformers==2.10.0, is a bit outdated.
Updating to transformers==3.4.0; the conflicts are already fixed.
I will create a PR once the monoT5 and monoBERT results are replicated on my end.
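For reference, the bump itself is just a change to the pinned requirement. A minimal sketch, assuming the pin lives in a requirements.txt-style file (the exact file name and surrounding entries in this repo may differ):

```
# hypothetical excerpt of the dependency pins
transformers==3.4.0   # was transformers==2.10.0; to be bumped to 4.0.0 once the final release lands
```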
The following warnings may also need to be addressed as part of the update:

Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.

The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).

This sequence already has `</s>`. In future versions this behavior may lead to duplicated eos tokens being added.
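The first two warnings come from tokenizer-API changes introduced in transformers 3.x. A minimal sketch of the call-site migration, assuming a monoBERT-style query/passage pair is being encoded (the model name, variable names, and max_length value are illustrative, not taken from this repo's code):

```python
from transformers import AutoTokenizer

# Illustrative checkpoint and inputs; the actual repo may use different ones.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun."

# Old style (transformers 2.x), now deprecated:
# inputs = tokenizer.encode_plus(query, passage,
#                                max_length=512,
#                                pad_to_max_length=True)

# New style (transformers 3.4+ / 4.x): pass truncation and padding explicitly.
inputs = tokenizer(
    query,
    passage,
    max_length=512,
    truncation=True,        # silences the "please use truncation=True" warning
    padding="max_length",   # replaces the deprecated pad_to_max_length=True
    return_tensors="pt",
)
```

The `</s>` warning is separate: it indicates the input text already ends with the EOS token that the newer T5 tokenizer appends automatically, so any manual `</s>` suffix in the monoT5 input template would likely need to be dropped when upgrading.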