Windows version not working #17
So regarding this error
|
It looks like you did not successfully activate the virtual environment. I see that your onnxruntime version is onnxruntime-gpu 1.16.3, but the virtual environment ships onnxruntime-gpu 1.17.0. Please note that if you are using PowerShell, the activation script for the environment is .\venv\Scripts\activate.ps1. |
Since venv environments do not work when moved, we recommend using the portable Python Embeddable package for Windows. In my case, it also worked once I changed the paths in pyvenv.cfg and activate.bat to match my environment. |
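For reference, this is why a moved venv breaks: pyvenv.cfg stores an absolute path to the base interpreter in its `home` key, which goes stale on a different machine. The path and version below are a hypothetical example, not taken from this package:

```ini
; venv\pyvenv.cfg — edit "home" to match the base Python on this machine
home = C:\Users\<you>\AppData\Local\Programs\Python\Python310
include-system-site-packages = false
version = 3.10.11
```

The activate scripts (activate.bat, activate.ps1) similarly hard-code the venv's own absolute path in a VIRTUAL_ENV variable, which is the other path that needs updating.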
*If "No TensorRT Found / No PyCUDA Found" is displayed, it does not affect the normal operation of onnxruntime-gpu. Regarding the issue with the Python virtual environment, I will conduct further verification. |
I extracted the package again, and now it shows onnxruntime-gpu 1.18.1. I am literally fed up with Microsoft; all of their packages are so difficult to install on Windows, be it onnxruntime or DeepSpeed.
I extracted it again, but now I get a different error.
|
If the -m option is not specified, the Python from the venv environment cannot be used, so please copy and paste the command below and run it. |
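A generic way to confirm whether a command actually ran inside the venv (this sketch is not from the package itself): inside a virtual environment, sys.prefix points at the venv directory while sys.base_prefix still points at the base installation.

```python
import sys


def in_virtualenv() -> bool:
    """True when running inside a venv (sys.prefix differs from the base)."""
    return sys.prefix != sys.base_prefix


print("venv active:", in_virtualenv())
print("prefix:", sys.prefix)
```

Running this with the suspect command prefix (e.g. the bare `python` on PATH versus the venv's own interpreter) shows immediately which environment each invocation uses.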
Thank you, I will give it a try tomorrow. |
That's weird; the one I compiled myself should be onnxruntime-gpu 1.17. |
I have verified that transferring the Python virtual environment to another Windows computer does indeed cause some issues. I am still working on resolving this. Thank you for your feedback. |
Hi guys, an install-free, extract-and-play Windows package with TensorRT support is now available! Please see the changelog. It is really fast!!! |
Thanks for the Windows implementation. The inference speed is indeed faster, but for some reason I see no net gain, because the "update infer cfg from true to true/false to false" step takes around one minute depending on the video. What does this step do, and is it possible to speed it up or skip it? Thanks for the amazing work. |
It is not the 'update infer cfg from true to true/false to false' step that is slow; rather, the slow part is the model inference and video generation that begins after that message is displayed. |
Ok, thanks. Does this increase with video length? Is there a way to mitigate it? It makes the entire process take as long as the main branch for me. Maybe people with different hardware are seeing different speeds. Thanks again for all the amazing work you did. |
Do you mean that the speed of TensorRT and PyTorch is the same? That doesn't seem very likely. Can you provide more information? |
I mean that for me, on a 30-second video, the cmd window displays three lines of "update infer cfg from true to true/false to false" for about 2 minutes. After that, the next part takes only a few seconds. That's what I mean by the total process time being similar for me. |
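One way to settle which stage actually consumes the time is to wrap each stage in a timer. This is a generic sketch with hypothetical stage names, not code from app.py; the real calls would replace the placeholder sleeps:

```python
import time
from contextlib import contextmanager


@contextmanager
def timed(label: str):
    """Print how long the wrapped block took, even if it raises."""
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.2f}s")


# Hypothetical stages standing in for the real pipeline steps.
with timed("update infer cfg"):
    time.sleep(0.01)
with timed("model inference + video generation"):
    time.sleep(0.01)
```

Since the config-update message is printed before inference starts, timing each stage separately would show whether the minute is spent before or after that line appears.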
I downloaded and extracted
https://drive.google.com/file/d/1ijqDlMAYqAVlqwqlXDpjBS5i3A6R_f7M/view?usp=sharing
activated the virtual environment, and ran
python app.py --mode onnx
It says "No PyCUDA Found" even though the environment variable is set.
nvcc --version also works.
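A "No PyCUDA Found" message usually means an `import pycuda` probe failed at startup. The check below is an assumption about how such a launcher might detect it, not the package's actual code; the key point is that the import can fail even with nvcc on PATH, if the CUDA runtime DLLs are not visible to this particular interpreter:

```python
def pycuda_available() -> tuple[bool, str]:
    """Try to import and initialize PyCUDA, returning (ok, detail)."""
    try:
        import pycuda.driver  # noqa: F401
        import pycuda.autoinit  # noqa: F401  # creates a CUDA context
    except Exception as exc:
        # ImportError if the module or its CUDA DLLs are missing,
        # or a driver error if no usable GPU context can be made.
        return False, str(exc)
    return True, "ok"


ok, detail = pycuda_available()
print("PyCUDA found" if ok else f"No PyCUDA Found ({detail})")
```

Running a probe like this inside the same venv as app.py prints the underlying exception, which distinguishes "pycuda not installed in this venv" from "CUDA DLLs not on this interpreter's search path".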