Transcribe on GPU #2329
base: main
Conversation
benchmarks?
I am not sure that this is a valuable change. While it is not a robust benchmark, I did do an experiment on my local machine.
Machine specs:
Note: The audio takes ~30s to load and ~330s to transcribe, so the difference of one or two seconds seems largely moot regardless.
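For what it's worth, a lightweight harness like the sketch below is enough to get comparable numbers (plain Python; the workload here is an illustrative stand-in — a real comparison would time Whisper's mel-spectrogram step on each device):

```python
import time

def best_of(fn, repeats=5):
    """Return the best wall-clock time (seconds) over `repeats` calls.

    Best-of-N damps scheduler noise better than a single run or a mean.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Illustrative stand-in workload; in a real benchmark this would be
# the mel-spectrogram computation on CPU vs. GPU.
cpu_time = best_of(lambda: sum(i * i for i in range(100_000)))
print(f"best of 5: {cpu_time:.4f}s")
```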
What is important is that the device specified in
@take0x, I was using my GPU to transcribe.
The device specified is used to transcribe. I think most consumers of the code would say "the fastest device available should be used to create the mel spectrogram." Given the nature of the computation, the CPU is almost always going to be the faster device (and should therefore be the default), regardless of the device on which the NN (a very different computation) runs. You're more likely to get a PR approved if it includes an optional
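That suggestion might look like the sketch below (pure Python with hypothetical names, not Whisper's actual signature): an optional keyword that keeps the CPU as the default for the mel step while letting callers opt in to another device.

```python
from typing import Optional

def resolve_mel_device(requested: Optional[str]) -> str:
    """Pick the device for the mel-spectrogram step.

    Defaults to the CPU, which the comment above argues is usually the
    faster choice for this computation; callers who hit CPU-side issues
    can still opt in to another device explicitly.
    """
    return requested if requested is not None else "cpu"

print(resolve_mel_device(None))    # default: "cpu"
print(resolve_mel_device("cuda"))  # explicit opt-in: "cuda"
```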
@kittsil In my case, when transcribing large amounts of audio data, there have been cases where the process crashed on the CPU but ran normally on the GPU. I think it would be useful to be able to transcribe using devices other than the CPU. I'll try adding the
Ideally, `log_mel_spectrogram()` should use `model.device` when transcribing.
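As a sketch of that idea (mocked objects, not the real Whisper internals — `FakeModel` and the stub `log_mel_spectrogram` here are stand-ins), `transcribe` would simply forward the model's device:

```python
class FakeModel:
    """Stand-in for a Whisper model; only the attribute this sketch needs."""
    device = "cuda"

def log_mel_spectrogram(audio, device="cpu"):
    # Stub for the real FFT/mel computation; it returns the device it
    # would use, so the forwarding behaviour is visible.
    return device

def transcribe(model, audio):
    # Forward model.device so the mel step runs on the same device as the NN.
    return log_mel_spectrogram(audio, device=model.device)

print(transcribe(FakeModel(), [0.0] * 16000))  # → cuda
```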