Lora doesn't work with TensorRT backend #85

Open
0xprincess opened this issue May 14, 2023 · 0 comments

Describe the bug

I get the following error when trying to use LoRA with the TensorRT backend enabled. Generation works fine with the Diffusers backend plus LoRA, and also with TensorRT without LoRA.

Traceback (most recent call last):
  File "C:\Users\Princess\projects\Radiata\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Princess\projects\Radiata\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "C:\Users\Princess\projects\Radiata\venv\lib\site-packages\gradio\blocks.py", line 1036, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Princess\projects\Radiata\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Princess\projects\Radiata\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Princess\projects\Radiata\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\Princess\projects\Radiata\venv\lib\site-packages\gradio\utils.py", line 488, in async_iteration
    return next(iterator)
  File "C:\Users\Princess\projects\Radiata\modules\tabs\generate.py", line 51, in wrapper
    yield from fn(self, opts, plugin_values)
  File "C:\Users\Princess\projects\Radiata\modules\tabs\generate.py", line 94, in generate_image
    for data in model_manager.sd_model(opts, plugin_data):
  File "C:\Users\Princess\projects\Radiata\modules\model.py", line 185, in __call__
    images = feature.result().images
  File "C:\Anaconda3\envs\radiata\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "C:\Anaconda3\envs\radiata\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Anaconda3\envs\radiata\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\Users\Princess\projects\Radiata\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Princess\projects\Radiata\modules\diffusion\pipelines\diffusers.py", line 427, in __call__
    self.load_resources(
  File "C:\Users\Princess\projects\Radiata\modules\diffusion\pipelines\tensorrt.py", line 160, in load_resources
    super().load_resources(
  File "C:\Users\Princess\projects\Radiata\modules\diffusion\pipelines\diffusers.py", line 160, in load_resources
    LoadResourceEvent.call_event(LoadResourceEvent(pipe=self))
  File "C:\Users\Princess\projects\Radiata\api\events\__init__.py", line 32, in call_event
    handler(event)
  File "C:\Users\Princess\projects\Radiata\modules\diffusion\networks\__init__.py", line 76, in load_network_modules
    restore_networks(e.pipe.unet, e.pipe.text_encoder)
  File "C:\Users\Princess\projects\Radiata\modules\diffusion\networks\__init__.py", line 48, in restore_networks
    network.restore(*modules)
  File "C:\Users\Princess\projects\Radiata\modules\diffusion\networks\lora.py", line 269, in restore
    for child in module.modules():
AttributeError: 'UNet2DConditionModelEngine' object has no attribute 'modules'
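
For context, the crash seems to come from the LoRA restore code calling module.modules() on the TensorRT UNet wrapper (UNet2DConditionModelEngine), which is not a torch.nn.Module. Below is a minimal sketch of a guard that would avoid the AttributeError; the function name and loop structure are assumptions based on the traceback, not the actual Radiata source:

```python
import torch

def restore_networks(*modules):
    """Hypothetical sketch: only walk objects that really are torch.nn.Module
    instances; the TensorRT engine wrapper is a plain Python object."""
    for module in modules:
        if not isinstance(module, torch.nn.Module):
            # e.g. UNet2DConditionModelEngine has no .modules(); skip it here
            # (LoRA weights would have to be merged before the engine is
            # built, rather than restored at runtime).
            continue
        for child in module.modules():
            # restore the original (non-LoRA) weights on each child layer
            ...
```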

Reproduction

Fresh clean install; then generate an image with the TensorRT backend enabled and a LoRA selected.

Expected behavior

LoRA works without errors with the TensorRT backend.

System Info

  • Windows 10 22H2
  • RTX 4090
  • 32 GB RAM
  • Python 3.10.8
  • TensorRT 8.6.1.6
  • cuDNN v8.9 (v8.8 also tried)
  • NVIDIA driver 531.41
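
For reference, the versions above can be double-checked with a short snippet (this assumes the standard torch and tensorrt Python packages are importable in the venv):

```python
import sys
import torch
import tensorrt  # TensorRT Python bindings

print("Python  :", sys.version.split()[0])
print("PyTorch :", torch.__version__, "| CUDA:", torch.version.cuda)
print("TensorRT:", tensorrt.__version__)
print("cuDNN   :", torch.backends.cudnn.version())
print("GPU     :", torch.cuda.get_device_name(0))
```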

Additional context

No response

Validations

  • Read the docs.
  • Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.