Running inference from ASR documentation, pipeline errors with "Can't load tokenizer" #23188
Comments
Your code works fine for me on macOS (I tried with the main branch of Transformers, which is version 4.29.0.dev0). It also looks like the […]. Are you sure you don't have a […]?
The problem happens even if I delete the local directory, so the problem appears to be that there is a missing step in the docs:

Without this, there is no […]. The reason […]. If you look at […]
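The gist of the comment above is that the tokenizer files never end up next to the saved model, so the pipeline has nothing to auto-load. A small, hypothetical way to check a checkpoint directory for the files a tokenizer auto-load would need (the file names here are assumed from a typical Wav2Vec2/CTC checkpoint and are illustrative only, not the thread's exact diagnosis):

```python
import os

# File names assumed from a typical Wav2Vec2/CTC checkpoint; other
# tokenizer classes save different files, so treat this as a sketch.
TOKENIZER_FILES = ["tokenizer_config.json", "vocab.json"]

def missing_tokenizer_files(model_dir):
    """Return the tokenizer files absent from model_dir (empty list = OK)."""
    return [f for f in TOKENIZER_FILES
            if not os.path.exists(os.path.join(model_dir, f))]
```

Running this against the directory that training produced would show whether the "Can't load tokenizer" error is simply a matter of files that were never written.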
It does look like those instructions are missing from the docs; I'll ping someone from the docs team to have a look. Thanks for reporting!
Possibly related: #23222
Thanks for reporting this! If you pass […]
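The comment above is truncated, but it appears to suggest passing the missing component to the pipeline explicitly. A sketch of what that could look like, assuming a checkpoint whose processor was saved alongside it (the repo name is hypothetical, and the use of AutoProcessor is my assumption, not the maintainer's exact wording):

```python
from transformers import pipeline, AutoProcessor

# Hypothetical repo/directory name; substitute your own checkpoint.
repo = "my_awesome_asr_model"

# Load the processor (feature extractor + tokenizer) saved during
# training, then hand its parts to the pipeline explicitly so it
# does not have to auto-discover a tokenizer in the model repo.
processor = AutoProcessor.from_pretrained(repo)
asr = pipeline(
    "automatic-speech-recognition",
    model=repo,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
```

This sidesteps the "Can't load tokenizer" error only if the tokenizer files exist somewhere loadable in the first place; otherwise the docs fix (saving/pushing them) is still needed.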
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
System Info

transformers version: 4.28.1

Who can help?

@Narsil
@sgugger

Information

Tasks

examples folder (such as GLUE/SQuAD, ...)

Reproduction

1. Put together the script from Automatic Speech Recognition into a file main.py, up to but not including Inference. Run under Windows. Training succeeds.
2. Put together the Inference section into a file infer.py. Run under Windows.
Output:
main.py.zip
infer.py.zip
Expected behavior
No error.