🚨🚨[Whisper Tok] Update integration test #29368
Conversation
```diff
         self.assertListEqual(
             tokenizer.convert_tokens_to_ids(tokens),
-            [5723, 307, 257, 220, 31636],
+            [5723, 307, 257, 1500],
```
This now gives equivalent results to the original:
```python
from whisper.tokenizer import get_tokenizer

tokenizer = get_tokenizer(True)
tokens = tokenizer.encode("This is a test")
print(tokens)
```
Output:
```
[5723, 307, 257, 1500]
```
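For reference, the updated assertion corresponds to the transformers-side tokenizer producing these same ids. A minimal sketch, assuming the `openai/whisper-tiny` checkpoint with the updated merges from the Hub (the exact checkpoint is an assumption, not shown in this excerpt):

```python
from transformers import WhisperTokenizer

# Assumption: openai/whisper-tiny, with the updated merges from the Hub
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
tokens = tokenizer.tokenize("This is a test")
print(tokenizer.convert_tokens_to_ids(tokens))  # expected: [5723, 307, 257, 1500]
```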
```diff
@@ -499,25 +499,3 @@ def test_offset_decoding(self):
         output = multilingual_tokenizer.decode(INPUT_TOKENS, output_offsets=True)["offsets"]
         self.assertEqual(output, [])

-    @require_jinja
-    def test_tokenization_for_chat(self):
```
Chat templates don't make sense for Whisper (a speech recognition model), so I've removed the test to keep the CI lightweight (cc @Rocketknight1)
Fine with me!
Also cc @ydshieh as this PR will prevent a red CI on
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks for the prompt fix! It's breaking, so I'll probably update the PR title with 🚨
The GH PR itself is not strictly breaking (there's no change to the code), but rather it's the Hub PR which is breaking. Fine for me to leave the 🚨 in the title though to book-log this!
* [Whisper Tok] Update integration test
* make style
What does this PR do?
The merges for the Whisper tokenizers were updated on the Hub in a separate Hub PR. While this is a breaking change, it is a required fix to ensure we have parity with the original OpenAI repo.
This PR updates the integration tests for the Whisper tokenizer to reflect the merge changes.
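The parity claim can be sanity-checked by encoding the same text with both implementations. A rough sketch, assuming the original openai-whisper package is installed and that the `openai/whisper-tiny` checkpoint has picked up the updated merges; both the checkpoint name and the environment are assumptions, not part of this PR:

```python
from transformers import WhisperTokenizer
from whisper.tokenizer import get_tokenizer  # original OpenAI implementation

text = "This is a test"

# Hugging Face tokenizer (assumed checkpoint: openai/whisper-tiny, multilingual)
hf_tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
hf_ids = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(text))

# Original OpenAI tokenizer (multilingual=True)
openai_ids = get_tokenizer(True).encode(text)

assert hf_ids == openai_ids, f"mismatch: {hf_ids} != {openai_ids}"
print(hf_ids)  # [5723, 307, 257, 1500] after the Hub update
```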