
TensorRT for stable-diffusion-webui-forge #1543

Open
TimmekHW opened this issue Aug 27, 2024 · 4 comments

@TimmekHW

Please add a new TensorRT library so that TensorRT can be installed easily.

```
Version: f2.0.1v1.10.1-previous-218-g643a485d
Commit hash: 643a485d1aff11acc657b24ee32d019e28d85b07
removing old version of tensorrt
Launching Web UI with arguments:
Total VRAM 24564 MB, total RAM 65322 MB
pytorch version: 2.4.0+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
G:\webui_forge_cu124_torch24\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: G:\webui_forge_cu124_torch24\webui\models\ControlNetPreprocessor
*** Error loading script: trt.py
    Traceback (most recent call last):
      File "G:\webui_forge_cu124_torch24\webui\modules\scripts.py", line 525, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "G:\webui_forge_cu124_torch24\webui\modules\script_loading.py", line 13, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "G:\webui_forge_cu124_torch24\webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 13, in <module>
        import ui_trt
      File "G:\webui_forge_cu124_torch24\webui\extensions\Stable-Diffusion-WebUI-TensorRT\ui_trt.py", line 18, in <module>
        from exporter import export_onnx, export_trt, export_lora
      File "G:\webui_forge_cu124_torch24\webui\extensions\Stable-Diffusion-WebUI-TensorRT\exporter.py", line 23, in <module>
        from utilities import Engine
      File "G:\webui_forge_cu124_torch24\webui\extensions\Stable-Diffusion-WebUI-TensorRT\utilities.py", line 32, in <module>
        import tensorrt as trt
      File "G:\webui_forge_cu124_torch24\system\python\lib\site-packages\tensorrt\__init__.py", line 18, in <module>
        from tensorrt_bindings import *
    ModuleNotFoundError: No module named 'tensorrt_bindings'

---
2024-08-27 18:04:09,451 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'G:\\webui_forge_cu124_torch24\\webui\\models\\Stable-diffusion\\flux_dev.safetensors', 'hash': '4af4416b'}, 'vae_filename': None, 'unet_storage_dtype': None}
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 9.6s (prepare environment: 2.1s, launcher: 1.3s, import torch: 2.6s, initialize shared: 0.1s, other imports: 0.5s, load scripts: 1.1s, create ui: 1.2s, gradio launch: 0.6s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
```

```
PS G:\webui_forge_cu124_torch24\system\python> G:\webui_forge_cu124_torch24\system\python\python.exe -m pip install --upgrade --force-reinstall torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
PS G:\webui_forge_cu124_torch24\system\python> G:\webui_forge_cu124_torch24\system\python\python.exe -m pip install --upgrade --force-reinstall optimum
PS G:\webui_forge_cu124_torch24\system\python> G:\webui_forge_cu124_torch24\system\python\python.exe -m pip install --upgrade --force-reinstall blendmodes
PS G:\webui_forge_cu124_torch24\system\python> G:\webui_forge_cu124_torch24\system\python\python.exe -m pip install --upgrade --force-reinstall tensorrt-cu12
```
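For reference, a quick sanity check (a minimal sketch, assuming Forge's embedded interpreter under `system\python` is the one loading the extension): if the TensorRT Python bindings installed correctly, this should print a version instead of raising the `ModuleNotFoundError` from the log above.

```
# Save as check_trt.py and run with Forge's embedded Python, e.g.:
#   G:\webui_forge_cu124_torch24\system\python\python.exe check_trt.py
# If only the meta-package is present without its bindings, the import fails with
# ModuleNotFoundError: No module named 'tensorrt_bindings', as shown in the log above.
import tensorrt as trt

print(trt.__version__)
```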

@TimmekHW (Author)

run.bat

```
@echo off

rem Set up the portable Python environment bundled with Forge
call environment.bat

rem Change into the webui folder next to this script and launch the UI
cd %~dp0webui
call webui-user.bat
```

Repository owner deleted a comment Aug 27, 2024
Repository owner deleted a comment from TimmekHW Aug 27, 2024
Repository owner deleted a comment from yldzmuhammed Aug 27, 2024
@Seedmanc

That's some trigger-happy moderation instead of an actual implementation of what's requested. I'd like to see TensorRT here too, so I don't have to install a separate Automatic1111 instance just for it.

@jswag245

jswag245 commented Nov 7, 2024

TensorRT is not compatible with Forge at the moment. I think I've read somewhere that lllyasviel does not plan to add TensorRT, since they believe it isn't needed. As a result, I recommend closing this issue until we get further news/announcements.

@TimmekHW

@JohnRDOrazio

I finally succeeded in getting TensorRT to install in Forge; however, when clicking "Export default engine" I get the error:

AttributeError: 'FakeInitialModel' object has no attribute 'is_sdxl'

I found an issue on the Stable-Diffusion-WebUI-TensorRT repo that mentions this, but there they say it's a Forge issue, so it should be addressed here.
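For anyone poking at the extension code, a minimal sketch of the kind of guard that would avoid this crash, assuming the extension reads the attribute directly off whatever model object Forge exposes (the exact call site isn't shown in this thread, and `model_is_sdxl` is just a hypothetical helper name, not part of either repo):

```
# Hypothetical helper, not taken from either repository. The attribute name is_sdxl
# comes from the traceback above; FakeInitialModel is Forge's placeholder model object.
def model_is_sdxl(model) -> bool:
    """Return the is_sdxl flag, tolerating model objects that don't define it."""
    return getattr(model, "is_sdxl", False)
```

The real fix still has to settle which side should expose the flag, which is exactly the disagreement between the two repos.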
