erors #17

Open · Zachysaurs opened this issue May 27, 2024 · 5 comments

@Zachysaurs

It runs fine the first time you use it after launching, then gives this error:
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Initial Free Memory: 2.40 GB
Using cuda for inference.
Generating audio spectrogram...
Length of mel chunks: 214
Getting face landmarks...
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1716803017.314071 10184 face_landmarker_graph.cc:174] Sets FaceBlendshapesGraph acceleration to xnnpack by default.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
W0000 00:00:1716803017.324200 8036 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1716803017.334851 12468 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1716803017.343476 11628 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\google\protobuf\symbol_database.py:55: UserWarning: SymbolDatabase.GetPrototype() is deprecated. Please use message_factory.GetMessageClass() instead. SymbolDatabase.GetPrototype() will be removed soon.
  warnings.warn('SymbolDatabase.GetPrototype() is deprecated. Please '
Extracting face from image...
Warping, cropping and aligning face...
Generating data for inference...
Loading wav2lip checkpoint from: D:\lip-wise\Lip_Wise-main\weights\wav2lip\wav2lip.pth
Processing.....
D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\conv.py:456: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ..\aten\src\ATen\native\cudnn\Conv_v8.cpp:919.)
  return F.conv2d(input, weight, bias, self.stride,
ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 58. 2.100 / 58. 2.100
libavcodec 60. 3.100 / 60. 3.100
libavformat 60. 3.100 / 60. 3.100
libavdevice 60. 1.100 / 60. 1.100
libavfilter 9. 3.100 / 9. 3.100
libswscale 7. 1.100 / 7. 1.100
libswresample 4. 10.100 / 4. 10.100
libpostproc 57. 1.100 / 57. 1.100
C:\Users\ggrov\AppData\Local\Temp\gradio\e3df33ef8d3fe798da7e5b605be4428fa6572504\New: No such file or directory

@Zachysaurs (Author)

The second time, you can't use it. I get this error:
Initial Free Memory: 1.44 GB
Using cpu for inference.
Generating audio spectrogram...
Length of mel chunks: 214
Getting face landmarks...
W0000 00:00:1716803038.496757 10184 face_landmarker_graph.cc:174] Sets FaceBlendshapesGraph acceleration to xnnpack by default.
W0000 00:00:1716803038.505613 11764 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1716803038.516637 9652 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1716803038.525186 12856 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\google\protobuf\symbol_database.py:55: UserWarning: SymbolDatabase.GetPrototype() is deprecated. Please use message_factory.GetMessageClass() instead. SymbolDatabase.GetPrototype() will be removed soon.
  warnings.warn('SymbolDatabase.GetPrototype() is deprecated. Please '
Extracting face from image...
Warping, cropping and aligning face...
Generating data for inference...
Loading wav2lip checkpoint from: D:\lip-wise\Lip_Wise-main\weights\wav2lip\wav2lip.pth
Processing.....
Traceback (most recent call last):
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\queueing.py", line 528, in process_events
    response = await route_utils.call_process_api(
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\blocks.py", line 1908, in process_api
    result = await self.call_function(
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\blocks.py", line 1485, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\utils.py", line 808, in wrapper
    response = f(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\infer.py", line 146, in infer_image
    dubbed_faces = w2l_model(mel_batch, img_batch)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\models\wav2lip.py", line 98, in forward
    audio_embedding = self.audio_encoder(audio_sequences) # B, 512, 1, 1
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\models\conv.py", line 18, in forward
    out = self.conv_block(x)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\lip-wise\Lip_Wise-main\.lip-wise\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
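For context, this RuntimeError is a device mismatch: the second run fell back to CPU (`Using cpu for inference.`), so the input batches are CPU tensors while the wav2lip weights are still `torch.cuda.FloatTensor`. A minimal sketch of the usual fix, assuming a call shaped like infer.py's `w2l_model(mel_batch, img_batch)` (the helper name here is hypothetical, not part of Lip_Wise):

```python
import torch

def call_on_model_device(model: torch.nn.Module, *tensors: torch.Tensor):
    # Find out where the model's weights actually live (cpu or cuda)...
    device = next(model.parameters()).device
    # ...and move every input batch there before the forward pass, so
    # F.conv2d never sees a CPU input against a CUDA weight (or vice versa).
    return model(*(t.to(device) for t in tensors))

# e.g. dubbed_faces = call_on_model_device(w2l_model, mel_batch, img_batch)
```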

@pawansharmaaaa (Owner)

I'd recommend a minimum of 4 GB of VRAM.
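The two logs are consistent with that: the first run saw 2.40 GB free and used CUDA, the second saw 1.44 GB free and fell back to CPU while the checkpoint apparently stayed on the GPU. A sketch of the kind of free-VRAM check the `Initial Free Memory` line hints at; the threshold and function name are assumptions, not Lip_Wise's actual code:

```python
import torch

def pick_device(min_free_gb: float = 2.0) -> torch.device:
    # Prefer CUDA only when enough VRAM is actually free right now.
    if torch.cuda.is_available():
        free_bytes, _total_bytes = torch.cuda.mem_get_info()
        if free_bytes / 1024**3 >= min_free_gb:
            return torch.device("cuda")
    # Otherwise fall back to CPU -- but the model must be moved here too
    # (model.to(device)), or the device-mismatch RuntimeError above results.
    return torch.device("cpu")
```

Whatever device this returns, the model and the inputs have to move together; the second run above looks like exactly that step being missed.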

@Zachysaurs (Author)

What about that result file issue?

@pawansharmaaaa (Owner)

When you upload media to Gradio, please wait a bit before starting the processing, and also make sure the file name does not contain any special characters or spaces. I can also see some cuDNN errors, so even if you don't get a "No such file or directory" error, Lip-Wise still won't run on low VRAM.
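The failed first run fits that: ffmpeg was handed a temp path ending in `New`, which looks like a file name beginning with "New " being split at the space. A defensive sketch for renaming uploads before they reach ffmpeg; the function and directory names are made up for illustration:

```python
import re
import shutil
from pathlib import Path

def sanitize_upload(upload_path: str, workdir: str = "safe_uploads") -> Path:
    # Keep only characters that are safe in an unquoted command-line path.
    src = Path(upload_path)
    safe_name = re.sub(r"[^A-Za-z0-9._-]", "_", src.name)
    dst = Path(workdir) / safe_name
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)  # copy, so Gradio's own temp file stays untouched
    return dst
```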

@Zachysaurs (Author)

Low RAM is one issue, but when it runs the first time it should at least give you a result. Also, is there any way to change the location of the result file to within the main folder?
