whisper : add support for new distilled Whisper models #1424
Conversation
Benched on M1 Pro, looks promising
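For context, a typical way to reproduce such a bench is whisper.cpp's `bench` tool; the distilled model file name below is an assumption, not taken from the comment:

```sh
# Hypothetical model file name; benchmark the model on 8 threads.
./bench -m models/ggml-distil-large-v2.bin -t 8
```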
The Distil-Whisper repo on HF now provides prebuilt GGML files (no need to convert?):
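A sketch of what fetching such a prebuilt file could look like; both the exact repo path and file name below are assumptions, not verified against the Hugging Face repo:

```sh
# Hypothetical URL and file name - check the Hugging Face repo for the real ones.
curl -L -o models/ggml-distil-large-v2.bin \
  https://huggingface.co/distil-whisper/distil-large-v2/resolve/main/ggml-distil-large-v2.bin
```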
* whisper : add support for new distilled Whisper models
* whisper : print log when using distilled models
Apologies for resurrecting a merged PR @ggerganov, but what's the reasoning behind disabling timestamps if the model is distilled?
AFAIK distilled models are not trained with timestamps, so the inference should not try to predict those.
I see. I find that interesting though, as when I commented out the line that disables timestamps, distil-whisper generated correct word-level timestamps, and they were accurate.
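For anyone who wants to repeat that experiment, a rough sketch; the model file name is an assumption, and `-ml 1` follows the whisper.cpp README's convention for word-level timestamps:

```sh
# After commenting out the line in whisper.cpp that forces no_timestamps
# for distilled models, rebuild and request word-level segments:
make clean && make
./main -m models/ggml-distil-large-v2.bin -f samples/jfk.wav -ml 1
```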
I downloaded the latest distilled model:
But when running this model using:
I don't see the message.
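In case it helps narrow this down, one way to check for the notice is to filter the run's output; the model file name is an assumption, and the exact wording of the log line is inferred from this PR's commit messages rather than verified output:

```sh
# Hypothetical model file name; search the output for a distilled-model notice.
./main -m models/ggml-distil-large-v2.bin -f samples/jfk.wav 2>&1 | grep -i distil
```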
ref #1414
Initial support for https://huggingface.co/distil-whisper
Currently, the chunk-based transcription strategy is not implemented, so there can be sub-optimal quality when using the distilled models with whisper.cpp.

Run the transcription as usual:
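A minimal sketch of the usual invocation, assuming the converted model was saved as models/ggml-distil-large-v2.bin (the file name is an assumption):

```sh
# Transcribe one of the bundled samples with the distilled model.
./main -m models/ggml-distil-large-v2.bin -f samples/jfk.wav
```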