Conv2d.forward() takes 2 positional arguments but 3 were given #1037

Closed

hoang1007 opened this issue Oct 19, 2023 · 9 comments

Comments

@hoang1007
hoang1007 commented Oct 19, 2023

System Info

peft==0.6.0dev
diffusers==0.21.4

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder
  • My own task or dataset (give details below)

Reproduction

Sorry, but I don't know how to provide a simple code snippet, because it happens while I'm training a diffusion model and reproducing it requires several steps (preparing the data, the training code, and so on). The relevant configuration is:

from peft import LoraConfig, get_peft_model

LORA_TARGET_MODULES = [
    "to_q",
    "to_k",
    "to_v",
    "proj",
    "proj_in",
    "proj_out",
    "conv",
    "conv1",
    "conv2",
    "conv_shortcut",
    "to_out.0",
    "time_emb_proj",
    "ff.net.2",
]

unet = get_peft_model(
    model=unet,
    peft_config=LoraConfig(
        target_modules=LORA_TARGET_MODULES,
    ),
)

Expected behavior

I think the reason is:

  • The ResnetBlock2D used in the UNet module has a conv1 layer of type LoRACompatibleConv, whose forward takes two arguments (the hidden states plus a scale).
    [Screenshot from 2023-10-19 10-22-13]
  • But after I pass the unet into get_peft_model(), the conv1 layer is replaced with peft.tuners.lora.layer.Conv2d, whose forward takes only one argument, so the extra scale argument raises the error (see the sketch below).
    [Screenshot from 2023-10-19 10-25-52]
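
To make the mismatch concrete, here is a minimal sketch (simplified stand-ins, not the actual diffusers/peft classes) of how a two-argument forward wrapped by a one-argument forward produces exactly this TypeError:

import torch
import torch.nn as nn

class LoRACompatibleConvSketch(nn.Conv2d):
    # diffusers>=0.21.0-style conv: forward accepts an extra `scale` argument
    def forward(self, hidden_states, scale: float = 1.0):
        return super().forward(hidden_states)

class PeftConv2dSketch(nn.Module):
    # peft-style wrapper whose forward only accepts the input tensor
    def __init__(self, base_layer):
        super().__init__()
        self.base_layer = base_layer

    def forward(self, x):
        return self.base_layer(x)

conv1 = PeftConv2dSketch(LoRACompatibleConvSketch(4, 4, 3, padding=1))
hidden_states = torch.randn(1, 4, 8, 8)

conv1(hidden_states)       # fine: one positional argument
conv1(hidden_states, 1.0)  # TypeError: forward() takes 2 positional arguments but 3 were given

In other words, once get_peft_model() swaps conv1 for the peft wrapper, the scale argument that the surrounding diffusers code passes along no longer has anywhere to go.
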
@kovalexal
Contributor

@hoang1007 hi!

I faced the same issue and it is caused by the LoRACompatibleConv layer that was introduced in diffusers>=0.21.0.

See #916 #879 #937 (comment) for more detail.

If you downgrade your diffusers version to diffusers<0.21.0, everything should work fine.
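
For what it's worth, a small sanity-check sketch (standard library only, assuming a regular pip install) to confirm the active diffusers version after downgrading:

# Confirm the active diffusers version is below 0.21.0, the release that
# introduced the LoRACompatibleConv forward-signature change.
from importlib.metadata import version

diffusers_version = version("diffusers")
major, minor = (int(part) for part in diffusers_version.split(".")[:2])
assert (major, minor) < (0, 21), f"diffusers {diffusers_version} is >= 0.21.0"
print(f"diffusers {diffusers_version} should avoid the Conv2d.forward() error")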

@hoang1007
Author

Thank you. I downgraded diffusers to 0.20.2 and it's working now. Does peft plan to fix this?

@kovalexal
Contributor

As far as I can tell, the fix is on their to-do list #937 (comment) #936

@hoang1007
Author

Okay, thank you.

@Xynonners

As far as I can tell, the fix is on their to-do list #937 (comment) #936

Hi, after installing diffusers from upstream the mentioned issue was fixed, but now I seem to be getting a different issue, also related to diffusers>0.21.0:

  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return self._call_impl(*args, **kwargs)
           ^^^    return forward_call(*args, **kwargs)
^^^ ^   ^  ^   ^  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/peft/peft_model.py", line 488, in forward
^^^^^^^^^
  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return self.get_base_model()(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^    ^^return forward_call(*args, **kwargs)^^^
^^^
  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/peft/peft_model.py", line 488, in forward
return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return self.get_base_model()(*args, **kwargs)
           ^^^^    return forward_call(*args, **kwargs)^
^^ ^  ^  ^  ^   ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/diffusers/models/unet_2d_condition.py", line 1158, in forward
^^^^^^^^^^^^^^^
  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    unscale_lora_layers(self, lora_scale)
    return self._call_impl(*args, **kwargs)
  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/diffusers/utils/peft_utils.py", line 112, in unscale_lora_layers
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vol/zraid1/Projects/AI/Self/mofusp/mofusp/new/UpDraft/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
          module.unscale_layer(weight)
     TypeError: ^LoHaLayer.unscale_layer() takes 1 positional argument but 2 were given^
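
A minimal sketch (hypothetical stand-in, not the actual peft class) of the signature mismatch behind this error: unscale_lora_layers passes a scale value, while the layer's unscale_layer apparently accepts none.

class OldLoHaLayerSketch:
    # stand-in for a LoHa layer whose unscale_layer takes no scale parameter
    def unscale_layer(self):
        pass

layer = OldLoHaLayerSketch()
layer.unscale_layer(2.0)  # TypeError: unscale_layer() takes 1 positional argument but 2 were given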

@kovalexal
Contributor

@Xynonners hi! Thanks for bringing this up!

As far as I can tell, this issue may be related to the recent changes in diffusers+peft compatibility (#1029, #1034).

I have a PR for the LoKr adapter #978 (but it is not in the upstream right now) and it also should fix the LoHa adapter with my recent changes #978 (comment) (as they share a lot of common code).

While we are almost at the finish line with this PR, could you please try out the version from PR #978?

@Xynonners

Xynonners commented Oct 28, 2023

@Xynonners hi! Thanks for bringing this up!

As far as I can tell, this issue may be related to the recent changes in diffusers+peft compatibility (#1029, #1034).

I have a PR for the LoKr adapter #978 (but it is not in the upstream right now) and it also should fix the LoHa adapter with my recent changes #978 (comment) (as they share a lot of common code).

While we are almost at the finish line with this PR, could you please try out the version from PR #978?

Alrighty, tested and it seems to work. Waiting for merge now😁

EDIT: @kovalexal oops, I forgot to update my diffusers, false alarm.
TypeError: LoHaLayer.unscale_layer() takes 1 positional argument but 2 were given
still seems to happen.

@kovalexal
Contributor

Hmmm, that seems strange.

Are you still using the latest version from #978 PR?

According to that code, unscale_layer should accept 2 arguments - the first one is self and the second one is scale.

Here is also a small code snippet that demonstrates that we are able to pass a scale value to unscale_layer in the #978 branch:

import torch
from peft.tuners.loha.model import Linear

lin = Linear(
    in_features=10, out_features=10, bias=False,
    device="cpu", dtype=torch.float32, r=4, alpha=4, adapter_name="default"
)


lin.scale_layer(2.0)
lin.unscale_layer(2.0)

I am not sure; maybe there are some missing parts in the diffusers + peft integration for LoHa adapters. Do you happen to have a code snippet and the library versions you are using, so I can investigate what is happening here?

@Xynonners

Hmmm, that seems strange.

Are you still using the latest version from #978 PR?

According to that code, unscale_layer should accept 2 arguments - the first one is self and the second one is scale.

Here is also a small code snippet that demonstrates that we are able to pass a scale value to unscale_layer in the #978 branch:

import torch
from peft.tuners.loha.model import Linear

lin = Linear(
    in_features=10, out_features=10, bias=False,
    device="cpu", dtype=torch.float32, r=4, alpha=4, adapter_name="default"
)


lin.scale_layer(2.0)
lin.unscale_layer(2.0)

I am not sure; maybe there are some missing parts in the diffusers + peft integration for LoHa adapters. Do you happen to have a code snippet and the library versions you are using, so I can investigate what is happening here?

Ah sorry, it seems like the pip cache was interfering with the install from git+; I checked site-packages and it was missing files.

updated correctly this time and it works 👍
