TestMarian_MT_EN::test_batch_generation_mt_en failing due to randomly generated tokens (#12647)

The test fails with the following:
Traced back to this commit: 184ef8e. I suspect there is a difference between the uploaded TF and PT checkpoints.
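For context, a minimal sketch of how the divergence might be reproduced, assuming the failing test compares batch generation against fixed expected strings (the source sentence below is a placeholder, not the test's actual fixture):

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-mt-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Hypothetical input; the real test uses its own source sentences.
src_texts = ["example source sentence"]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
with torch.no_grad():
    generated = model.generate(**batch)

# With a corrupted final_logits_bias, the decoded output drifts from
# the expected translations the test asserts against.
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```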
It seems there's a single difference, in the final logits bias:

```python
import torch
from transformers import MarianMTModel

pt_model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-mt-en")
tf_model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-mt-en", from_tf=True)

pt, tf = pt_model.state_dict(), tf_model.state_dict()

# Pair each PT parameter with its TF counterpart by name
ptf = {}
for key, value in pt.items():
    ptf[key] = [value]

for key, value in tf.items():
    if key not in ptf:
        print(key, "not in ptf")
    else:
        ptf[key].append(value)

# Print every parameter whose PT and TF values differ
for key, value in ptf.items():
    _pt, _tf = value
    difference = torch.max(torch.abs(_pt - _tf)).tolist()
    if difference > 0:
        print(key, difference)

# final_logits_bias 10.176068305969238
```

The difference seems systematic, independent of runtime or seed.
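If the TF checkpoint's `final_logits_bias` really is the only corrupt weight, one way to confirm would be to overwrite it with the PT values and check that the diff disappears. A minimal sketch, continuing from the snippet above and assuming `final_logits_bias` is a registered buffer on `MarianMTModel` (as the state-dict keys suggest):

```python
import torch

# Hypothetical check: copy the PT bias into the TF-loaded model, then re-diff.
with torch.no_grad():
    tf_model.final_logits_bias.copy_(pt_model.final_logits_bias)

pt_sd, tf_sd = pt_model.state_dict(), tf_model.state_dict()
max_diff = max(torch.max(torch.abs(pt_sd[k] - tf_sd[k])).item() for k in pt_sd)

# Expected to print 0.0 if the bias was the only discrepancy.
print(max_diff)
```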
I would say the error comes from the TF checkpoint on the Hub; looking forward to your input @patrickvonplaten and @patil-suraj. I'll deactivate the test in the meantime.
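For reference, a minimal sketch of what deactivating the test might look like, assuming the `unittest`-style skip decorators used throughout the transformers test suite (the class shown here is simplified; the real test class derives from the library's own test mixins):

```python
import unittest

class TestMarian_MT_EN(unittest.TestCase):
    # Hypothetical skip until the TF checkpoint on the Hub is fixed.
    @unittest.skip("TF checkpoint final_logits_bias differs from the PT one, see #12647")
    def test_batch_generation_mt_en(self):
        ...
```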
This is also the case for the

```
final_logits_bias 8.724637031555176
```
And for the
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.