
Windows version not working #17

Open
nitinmukesh opened this issue Jul 20, 2024 · 16 comments
@nitinmukesh

I downloaded and extracted
https://drive.google.com/file/d/1ijqDlMAYqAVlqwqlXDpjBS5i3A6R_f7M/view?usp=sharing

activated virtual environment and
python app.py --mode onnx

No TensorRT Found
No PyCUDA Found
loading model: warping_spade
{'name': 'WarpingSpadeModel', 'predict_type': 'ort', 'model_path': './checkpoints/liveportrait_onnx/warping_spade.onnx'}
OnnxRuntime use ['CUDAExecutionProvider', 'CoreMLExecutionProvider', 'CPUExecutionProvider']
C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
  warnings.warn(
Traceback (most recent call last):
  File "C:\usable\FasterLivePortrait-windows\app.py", line 29, in <module>
    gradio_pipeline = GradioLivePortraitPipeline(infer_cfg)
  File "C:\usable\FasterLivePortrait-windows\src\pipelines\gradio_live_portrait_pipeline.py", line 24, in __init__
    super(GradioLivePortraitPipeline, self).__init__(cfg, **kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\pipelines\faster_live_portrait_pipeline.py", line 22, in __init__
    self.init(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\pipelines\faster_live_portrait_pipeline.py", line 25, in init
    self.init_models(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\pipelines\faster_live_portrait_pipeline.py", line 33, in init_models
    self.model_dict[model_name] = getattr(models, self.cfg.models[model_name]["name"])(
  File "C:\usable\FasterLivePortrait-windows\src\models\warping_spade_model.py", line 17, in __init__
    super(WarpingSpadeModel, self).__init__(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\models\base_model.py", line 20, in __init__
    self.predictor = get_predictor(**self.kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\models\predictor.py", line 241, in get_predictor
    return OnnxRuntimePredictorSingleton(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\models\predictor.py", line 233, in __new__
    OnnxRuntimePredictorSingleton._instance[model_path] = OnnxRuntimePredictor(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\models\predictor.py", line 178, in __init__
    self.onnx_model = onnxruntime.InferenceSession(model_path, providers=providers, sess_options=opts)
  File "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 452, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ./checkpoints/liveportrait_onnx/warping_spade.onnx failed:Node (/dense_motion_network/GridSample) Op (GridSample) [ShapeInferenceError] Input 0 expected to have rank 4 but has rank 5
Exception ignored in: <function OnnxRuntimePredictor.__del__ at 0x000002C609535000>
Traceback (most recent call last):
  File "C:\usable\FasterLivePortrait-windows\src\models\predictor.py", line 215, in __del__
    del self.onnx_model
AttributeError: onnx_model
Exception ignored in: <function BaseModel.__del__ at 0x000002C609535360>
Traceback (most recent call last):
  File "C:\usable\FasterLivePortrait-windows\src\models\base_model.py", line 51, in __del__
    if self.predictor is not None:
AttributeError: 'WarpingSpadeModel' object has no attribute 'predictor'

It says "No PyCUDA Found" even though the environment variable is set.
[screenshot]

nvcc --version also works

[screenshot]
@nitinmukesh
Author

nitinmukesh commented Jul 20, 2024

It is also failing to load models, as shown in the log.

[screenshot]

pip list, just in case. I have not updated anything.

(venv) C:\usable\FasterLivePortrait-windows>pip list
Package          Version
---------------- ------------
onnx             1.16.1
onnxruntime-gpu  1.16.3
opencv-python    4.10.0.84
torch            2.1.2+cu121
torchvision      0.16.2+cu121

@nitinmukesh
Author

So regarding this error

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ./checkpoints/liveportrait_onnx/warping_spade.onnx failed:Node (/dense_motion_network/GridSample) Op (GridSample) [ShapeInferenceError] Input 0 expected to have rank 4 but has rank 5

microsoft/onnxruntime#18313
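The linked upstream issue suggests older onnxruntime releases only shape-infer GridSample with 4-D input, while this model uses a 5-D (volumetric) GridSample. As a minimal pre-flight sketch (the 1.17 cutoff is an assumption based on the maintainer's reply below, not a documented guarantee), one could check the installed version before constructing the session:

```python
def supports_5d_gridsample(ort_version: str) -> bool:
    """Heuristic: assume 5-D GridSample needs onnxruntime >= 1.17,
    per the discussion in this thread and the linked upstream issue."""
    major, minor = (int(x) for x in ort_version.split(".")[:2])
    return (major, minor) >= (1, 17)

# The version reported in the pip list above fails this check:
print(supports_5d_gridsample("1.16.3"))
print(supports_5d_gridsample("1.18.1"))
```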

@warmshao
Owner

It looks like you did not activate my virtual environment successfully. I see that your onnxruntime version is onnxruntime-gpu 1.16.3, but the virtual environment ships with onnxruntime-gpu 1.17.0. Please note that if you are using PowerShell, the activation script is .\venv\Scripts\activate.ps1.

@watch-activity

Since venv environments do not work after being moved, we recommend using the portable Python embeddable package for Windows.
https://www.python.org/downloads/windows/

Alternatively, it worked once I changed the paths in pyvenv.cfg and activate.bat to match my environment.
*"No TensorRT Found / No PyCUDA Found" is still displayed
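The path fix works because a venv hard-codes the interpreter location in its pyvenv.cfg. A small sketch of automating that edit after moving the folder (the paths here are illustrative, not from this thread):

```python
from pathlib import Path

def patch_pyvenv_home(cfg_path: Path, new_home: str) -> str:
    """Rewrite the hard-coded 'home' entry in a moved venv's pyvenv.cfg.
    Returns the patched file contents."""
    lines = []
    for line in cfg_path.read_text().splitlines():
        # pyvenv.cfg is 'key = value' per line; only the 'home' key is patched
        if line.split("=")[0].strip().lower() == "home":
            line = f"home = {new_home}"
        lines.append(line)
    text = "\n".join(lines) + "\n"
    cfg_path.write_text(text)
    return text
```

activate.bat embeds a similar absolute path (VIRTUAL_ENV) and would need the same treatment.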

@warmshao
Owner

The "No TensorRT Found / No PyCUDA Found" messages will not affect the normal operation of onnxruntime-gpu. Regarding the issue with the Python virtual environment, I will investigate further.
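Messages like these typically come from a guarded-import probe for optional accelerators; a failed probe just means the plain onnxruntime path is used. A minimal sketch of the pattern (the `probe` helper is hypothetical, not the project's actual code):

```python
import importlib

def probe(module_name: str) -> bool:
    """Return True if an optional accelerator module is importable;
    print a diagnostic (like the ones in the log above) if it is not."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        print(f"No {module_name} Found")
        return False

# In ONNX mode these are optional; False only means the ORT path is used.
probe("tensorrt")
probe("pycuda")
```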

@nitinmukesh
Author

nitinmukesh commented Jul 21, 2024

I extracted the package again, now it's showing onnxruntime-gpu 1.18.1

I am literally fed up with Microsoft. All of their packages are so difficult to install on Windows, be it onnxruntime or deepspeed.

Microsoft Windows [Version 10.0.22631.3880]
(c) Microsoft Corporation. All rights reserved.

C:\usable\FasterLivePortrait-windows>venv\Scripts\activate

(venv) C:\usable\FasterLivePortrait-windows>pip list
Package                      Version
---------------------------- ------------------
absl-py                      2.1.0
accelerate                   0.32.1
aiofiles                     23.2.1
aiohttp                      3.9.3
aiosignal                    1.3.1
albumentations               1.3.1
aliyun-python-sdk-core       2.15.1
aliyun-python-sdk-kms        2.16.2
altair                       5.3.0
annotated-types              0.7.0
antlr4-python3-runtime       4.9.3
anyio                        4.4.0
astunparse                   1.6.3
async-timeout                4.0.3
attrs                        23.2.0
av                           11.0.0
beautifulsoup4               4.12.3
black                        24.3.0
boltons                      23.1.1
braceexpand                  0.1.7
cachetools                   5.3.3
certifi                      2024.7.4
cffi                         1.16.0
charset-normalizer           3.3.2
chumpy                       0.71
click                        8.1.7
cloudpickle                  3.0.0
colorama                     0.4.6
coloredlogs                  15.0.1
contourpy                    1.2.1
crcmod                       1.7
cryptography                 42.0.5
cycler                       0.12.1
Cython                       3.0.8
decorator                    4.4.2
dill                         0.3.8
dnspython                    2.6.1
docopt                       0.6.2
easydict                     1.11
einops                       0.8.0
email_validator              2.2.0
exceptiongroup               1.2.2
fastapi                      0.111.1
fastapi-cli                  0.0.4
ffmpeg-python                0.2.0
ffmpy                        0.3.2
filelock                     3.15.4
fire                         0.6.0
flatbuffers                  24.3.25
fonttools                    4.53.1
freetype-py                  2.4.0
frozenlist                   1.4.1
fsspec                       2024.6.1
future                       1.0.0
fvcore                       0.1.5.post20221221
gast                         0.4.0
gdown                        5.1.0
google-auth                  2.29.0
google-auth-oauthlib         0.4.6
google-pasta                 0.2.0
gradio                       4.37.1
gradio_client                1.0.2
grpcio                       1.62.1
h11                          0.14.0
h5py                         3.11.0
hmr2                         0.0.0
httpcore                     1.0.5
httptools                    0.6.1
httpx                        0.27.0
huggingface-hub              0.24.0
humanfriendly                10.0
hydra-core                   1.3.2
idna                         3.7
imageio                      2.33.1
imageio-ffmpeg               0.4.9
importlib_metadata           7.1.0
importlib_resources          6.4.0
inquirerpy                   0.3.4
insightface                  0.7.3
intel-openmp                 2021.4.0
iopath                       0.1.9
jax                          0.4.26
Jinja2                       3.1.4
jmespath                     0.10.0
joblib                       1.3.2
Js2Py                        0.74
jsonschema                   4.23.0
jsonschema-specifications    2023.12.1
kiwisolver                   1.4.5
lazy_loader                  0.3
libclang                     18.1.1
lightning-utilities          0.11.2
Markdown                     3.6
markdown-it-py               3.0.0
MarkupSafe                   2.1.5
matplotlib                   3.9.1
mdurl                        0.1.2
mkl                          2021.4.0
ml-dtypes                    0.4.0
model-index                  0.1.11
moviepy                      1.0.3
mpmath                       1.3.0
multidict                    6.0.5
mypy-extensions              1.0.0
networkx                     3.2.1
numpy                        1.26.4
oauthlib                     3.2.2
omegaconf                    2.3.0
onnx                         1.16.1
onnxruntime-gpu              1.18.1

I extracted again, but now I get a different error.

(venv) C:\usable\FasterLivePortrait-windows>python app.py --mode onnx
No TensorRT Found
No PyCUDA Found
loading model: warping_spade
{'name': 'WarpingSpadeModel', 'predict_type': 'ort', 'model_path': './checkpoints/liveportrait_onnx/warping_spade.onnx'}
OnnxRuntime use ['CUDAExecutionProvider', 'CoreMLExecutionProvider', 'CPUExecutionProvider']
C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
  warnings.warn(
2024-07-21 12:05:13.9744303 [E:onnxruntime:Default, provider_bridge_ort.cc:1745 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider', 'CoreMLExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-07-21 12:05:14.1670996 [E:onnxruntime:Default, provider_bridge_ort.cc:1745 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

Traceback (most recent call last):
  File "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\usable\FasterLivePortrait-windows\app.py", line 29, in <module>
    gradio_pipeline = GradioLivePortraitPipeline(infer_cfg)
  File "C:\usable\FasterLivePortrait-windows\src\pipelines\gradio_live_portrait_pipeline.py", line 24, in __init__
    super(GradioLivePortraitPipeline, self).__init__(cfg, **kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\pipelines\faster_live_portrait_pipeline.py", line 22, in __init__
    self.init(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\pipelines\faster_live_portrait_pipeline.py", line 25, in init
    self.init_models(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\pipelines\faster_live_portrait_pipeline.py", line 33, in init_models
    self.model_dict[model_name] = getattr(models, self.cfg.models[model_name]["name"])(
  File "C:\usable\FasterLivePortrait-windows\src\models\warping_spade_model.py", line 17, in __init__
    super(WarpingSpadeModel, self).__init__(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\models\base_model.py", line 20, in __init__
    self.predictor = get_predictor(**self.kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\models\predictor.py", line 241, in get_predictor
    return OnnxRuntimePredictorSingleton(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\models\predictor.py", line 233, in __new__
    OnnxRuntimePredictorSingleton._instance[model_path] = OnnxRuntimePredictor(**kwargs)
  File "C:\usable\FasterLivePortrait-windows\src\models\predictor.py", line 178, in __init__
    self.onnx_model = onnxruntime.InferenceSession(model_path, providers=providers, sess_options=opts)
  File "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
    raise fallback_error from e
  File "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "C:\Users\nitin\AppData\Roaming\Python\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.

Exception ignored in: <function OnnxRuntimePredictor.__del__ at 0x000001C628DE0F70>
Traceback (most recent call last):
  File "C:\usable\FasterLivePortrait-windows\src\models\predictor.py", line 215, in __del__
    del self.onnx_model
AttributeError: onnx_model
Exception ignored in: <function BaseModel.__del__ at 0x000001C628DE12D0>
Traceback (most recent call last):
  File "C:\usable\FasterLivePortrait-windows\src\models\base_model.py", line 51, in __del__
    if self.predictor is not None:
AttributeError: 'WarpingSpadeModel' object has no attribute 'predictor'
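"LoadLibrary failed with error 126" for onnxruntime_providers_cuda.dll usually means one of its CUDA/cuDNN dependency DLLs cannot be found on PATH. A small diagnostic sketch; the library names below are CUDA 12.x / cuDNN 8.x assumptions, not taken from this log, so adjust them to the installed toolkit:

```python
import ctypes.util

def find_cuda_libs(names):
    """Report which shared libraries the OS loader can locate.
    On Windows, ctypes.util.find_library searches the PATH directories,
    mirroring what LoadLibrary does when the CUDA provider is loaded."""
    return {name: ctypes.util.find_library(name) is not None for name in names}

print(find_cuda_libs(["cudart64_12", "cublas64_12", "cudnn64_8"]))
```

Any False entry points at the missing dependency; installing the matching CUDA/cuDNN build or adding its bin directory to PATH is the usual fix.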

@watch-activity

If the -m option is not specified, Python in the venv environment cannot be used, so please copy and paste the command below and run it.
python -m app.py --mode onnx

@warmshao
Owner

If the -m option is not specified, Python in the venv environment cannot be used, so please copy and paste the command below and run it. python -m app.py --mode onnx

Thank you, I will give it a try tomorrow.

@warmshao
Owner

I extracted the package again, now it's showing onnxruntime-gpu 1.18.1 […]
That's weird, the one I compiled myself should be onnxruntime-gpu=1.17.

@warmshao
Owner

I have verified that transferring the Python virtual environment to another Windows computer does indeed cause some issues. I am still working on resolving this. Thank you for your feedback.

@warmshao
Owner

Hi guys, an install-free, extract-and-play Windows package with TensorRT support is now available! Please check the changelog. It's really fast!!!

@Echolink50

Hi guys, Install-free, extract-and-play Windows package with TensorRT support now available! Please watch change log, really fast!!!

Thanks for the Windows implementation. The inference speed is indeed faster, but for some reason I see no net gain because the "update infer cfg from true to true/false to false" step takes around 1 minute, depending on the video. What does this step do, and is it possible to speed it up or skip it? Thanks for the amazing work.

@warmshao
Owner

Thanks for the Windows implementation. The inference speed is indeed faster, but for some reason I see no net gain because the "update infer cfg from true to true/false to false" step takes around 1 minute, depending on the video. What does this step do, and is it possible to speed it up or skip it? Thanks for the amazing work.

It is not the 'update infer cfg from true to true/false to false' step itself that is slow; the slowness begins with the model inference and video generation process. After 'update infer cfg from true to true/false to false' is displayed, model inference begins.

@Echolink50

Ok thanks. Does this increase with video length? Is there a way to mitigate this? It makes the entire process take as long as the main branch for me. Maybe people with different hardware are having different speeds. Thanks again for all the amazing work you did.

@warmshao
Owner

Ok thanks. Does this increase with video length? Is there a way to mitigate this? It makes the entire process take as long as the main branch for me. Maybe people with different hardware are having different speeds. Thanks again for all the amazing work you did.

Do you mean that the speed of TensorRT and PyTorch is the same? That doesn't seem very likely. Can you provide more information?

@Echolink50

I mean that for me, on a 30-second video, the cmd window displays 3 lines of "update infer cfg from true to true/false to false" for about 2 minutes. After that, the next part only takes a few seconds. That's what I mean by the total process time being similar.
