Stuck on load_model() #656
Comments
I'm experiencing the same issue; it just gets stuck 'forever' on the load_model() call.

My installation environment:
+1, facing the same issue.

Quick fix: don't know if something else will break if you are doing something Colab-dependent, but you can always turn the flag back on after the model is downloaded. Related: huggingface/huggingface_hub#1952
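The fix snippet itself was not preserved in this thread. Based on the linked huggingface_hub issue #1952 (a Colab upgrade started enabling the hf_transfer download backend, which could stall without progress), a plausible sketch of the workaround is to clear the `HF_HUB_ENABLE_HF_TRANSFER` environment variable before the model download; the exact variable and cause here are an assumption, not quoted from the original comment:

```python
import os

# Assumption (per huggingface/huggingface_hub#1952): Colab began setting
# HF_HUB_ENABLE_HF_TRANSFER=1 by default, and the hf_transfer download
# backend could hang during model downloads. Disable it before loading.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0"

# Then load the model as usual, e.g.:
# import whisperx
# model = whisperx.load_model("large-v2", device, compute_type=compute_type)
```

As the comment above notes, you can set the variable back to `"1"` after the download completes if something else in your notebook depends on it.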
It worked, thank you very much 👼👼👼
@iUnknownAdorn, thanks so much for the fix. It worked like a charm and no other issues so far.
It appears that whisperX has stopped working on Google Colab. The code does not get past load_model(). Here is my code:
My installation environment is:
!pip install --no-cache-dir torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 torchtext torchdata --index-url https://download.pytorch.org/whl/cu118
Colab's execution indicator shows that it is stuck on this line:

model = whisperx.load_model("large-v2", device, compute_type=compute_type)
I think this happened after a recent upgrade of Google Colab ("Upgrade to Colab").