
Stuck on load_model() #656

Closed
dgoryeo opened this issue Jan 4, 2024 · 4 comments


@dgoryeo

dgoryeo commented Jan 4, 2024

It appears that whisperX has stopped working on Google Colab. Execution never gets past load_model(). Here is my code:


import whisperx
import gc 

device = "cuda" 
audio_file = "/content/drive/MyDrive/0.original.S01E04-sc4.wav3"
batch_size = 16
compute_type = "float16" 

# 1. Transcribe with original whisper (batched)
model = whisperx.load_model("large-v2", device, compute_type=compute_type)
audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=batch_size)
print(result["segments"]) # before alignment

My installation environment is:
!pip install --no-cache-dir torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 torchtext torchdata --index-url https://download.pytorch.org/whl/cu118

Colab's execution indicator shows that it is stuck on the line model = whisperx.load_model("large-v2", device, compute_type=compute_type), with this call stack:


Executing (12m 35s)   <cell line: 10>

load_model() → __init__() → download_model() → _inner_fn() → snapshot_download() → _inner_fn() → repo_info() → _inner_fn() → model_info() → _build_hf_headers() → _inner_fn() → build_hf_headers() → get_token_to_send() → get_token() → _get_token_from_google_colab() → get() → blocking_request() → read_reply_from_input()

I think this happened after a recent upgrade of Google Colab: Upgrade to Colab

@anjehub

anjehub commented Jan 4, 2024

I'm experiencing the same issue, it just gets stuck 'forever' on the whisperx.load_model("large-v2", device, compute_type=compute_type) line.

My installation environment:

!pip install torch==2.0.0 torchaudio==2.0.1
!pip install git+https://github.com/m-bain/whisperx.git

@iUnknownAdorn

iUnknownAdorn commented Jan 5, 2024

+1 - facing the same issue

Quick fix:

from huggingface_hub.utils import _runtime
_runtime._is_google_colab = False

I don't know whether this breaks anything else if you are doing something Colab-dependent, but you can always turn the flag back on after the model has been downloaded.

Related: huggingface/huggingface_hub#1952
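To restore the flag automatically after the download, the workaround above can be wrapped in a context manager. This is a minimal sketch, assuming huggingface_hub's private _runtime module exposes the _is_google_colab attribute as shown in the fix; pretend_not_colab is a hypothetical helper name, not part of any library.

```python
from contextlib import contextmanager

@contextmanager
def pretend_not_colab(runtime):
    """Temporarily clear runtime._is_google_colab, restoring the old value on exit."""
    saved = runtime._is_google_colab
    runtime._is_google_colab = False  # stops huggingface_hub from waiting on the Colab token prompt
    try:
        yield
    finally:
        runtime._is_google_colab = saved  # restore Colab detection for later code

# Usage on Colab (assumes whisperx and huggingface_hub are installed):
# from huggingface_hub.utils import _runtime
# with pretend_not_colab(_runtime):
#     model = whisperx.load_model("large-v2", "cuda", compute_type="float16")
```

The try/finally guarantees the flag is restored even if the download raises, so later Colab-dependent code still sees the real runtime.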

@vietanhlampartvn

@iUnknownAdorn

Quick fix:

from huggingface_hub.utils import _runtime
_runtime._is_google_colab = False

It worked, thank you very much👼👼👼

@dgoryeo

dgoryeo commented Jan 5, 2024

@iUnknownAdorn , thanks so much for the fix. It worked like a charm and no other issues so far.
