
Sending peft back to original (issue with kwargs in PeftConfig) and lora.py changes #2629

Closed
FartyPants wants to merge 3 commits

Conversation

FartyPants
Contributor

See peft_model.py line 169 (# load the config): the new peft repo passes kwargs through to PeftConfig, and that breaks ooba. PeftConfig doesn't accept 'dtype' or 'device_map' (see the sketch after this list).

  1. Check whether a newer peft fixes it.
  2. Main should stay on the old peft until this is fixed; otherwise loading a LoRA doesn't work.
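
A minimal reproduction, as a sketch: this assumes the affected peft commit is installed. PeftConfig is a dataclass with a fixed set of fields, so unexpected keyword arguments fail immediately.

```python
# Sketch of the failure, assuming the affected peft commit is installed.
# PeftConfig is a dataclass with no 'dtype' or 'device_map' fields, so
# forwarding the model-loading kwargs into its constructor raises.
from peft import PeftConfig

PeftConfig(dtype="float16")     # raises: unexpected keyword argument 'dtype'
PeftConfig(device_map={"": 0})  # likewise for 'device_map' (if reached)
```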

@FartyPants
Contributor Author

#2623

@ashleykleynhans
Contributor

Doesn't work for me on Linux, still getting an error:

INFO:Applying the following LoRAs to PygmalionAI_pygmalion-6b: snappic
Traceback (most recent call last):
  File "/workspace/text-generation-webui/server.py", line 1081, in <module>
    add_lora_to_model(shared.args.lora)
  File "/workspace/text-generation-webui/modules/LoRA.py", line 80, in add_lora_to_model
    shared.model = PeftModel.from_pretrained(shared.model, Path(f"{shared.args.lora_dir}/{lora_names[0]}"),adapter_name=lora_names[0], **params)
  File "/workspace/venv/lib/python3.10/site-packages/peft/peft_model.py", line 169, in from_pretrained
    PeftConfig.from_pretrained(model_id, subfolder=kwargs.get("subfolder", None), **kwargs).peft_type
  File "/workspace/venv/lib/python3.10/site-packages/peft/utils/config.py", line 114, in from_pretrained
    config = cls(**kwargs)
TypeError: PeftConfig.__init__() got an unexpected keyword argument 'dtype'
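
The traceback shows the failing path: PeftModel.from_pretrained forwards its **kwargs into PeftConfig.from_pretrained, which constructs the config via cls(**kwargs). A hypothetical guard (an illustrative sketch, not the verbatim peft code) would split the kwargs into the config dataclass's declared fields and everything else:

```python
import dataclasses

def split_config_kwargs(config_cls, kwargs):
    """Illustrative helper, not the verbatim peft implementation: keep only
    the kwargs the config dataclass declares as fields, and return the
    leftovers (e.g. 'dtype', 'device_map') separately."""
    field_names = {f.name for f in dataclasses.fields(config_cls)}
    config_kwargs = {k: v for k, v in kwargs.items() if k in field_names}
    other_kwargs = {k: v for k, v in kwargs.items() if k not in field_names}
    return config_kwargs, other_kwargs
```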

@oobabooga
Owner

What command are you using exactly @FartyPants and @ashleykleynhans? Is this monkeypatch + gptq-for-llama, 16bit, autogptq?

@ashleykleynhans
Contributor

Never mind, my bad. I deleted my venv and recreated it, and I can confirm that this fix resolves the issue. Thank you, nice work!

@ashleykleynhans
Contributor

What command are you using exactly @FartyPants and @ashleykleynhans? Is this monkeypatch + gptq-for-llama, 16bit, autogptq?

It happens when applying a LoRA, and also when using the --lora command-line argument at startup, but this PR fixes it.

@FartyPants
Contributor Author

FartyPants commented Jun 11, 2023

What command are you using exactly @FartyPants and @ashleykleynhans? Is this monkeypatch + gptq-for-llama, 16bit, autogptq?

It only affects loading a LoRA (using gptq_for_llama). The new peft commit added kwargs in PEFT_TYPE_TO_CONFIG_MAPPING. (I opened an issue with peft; they clearly didn't think it through.)

As for the other edit in lora.py: it avoids having 'default' as the adapter_name and instead uses the actual LoRA name. This will make life easier later when you want to switch between adapters, as I do here: https://github.com/FartyPants/Loraswitch
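
A short sketch of that workflow, assuming the standard peft APIs (PeftModel.from_pretrained, load_adapter, set_adapter); the model id and adapter paths here are hypothetical:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical base model and adapter paths, for illustration only.
base_model = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b")

# Load the first LoRA under its real name instead of the implicit "default":
model = PeftModel.from_pretrained(base_model, "loras/snappic", adapter_name="snappic")

# Further adapters can then be attached and switched by name:
model.load_adapter("loras/other_lora", adapter_name="other_lora")
model.set_adapter("other_lora")
```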

@FartyPants
Contributor Author

FartyPants commented Jun 13, 2023

Got info back from the PEFT people; this is fixed in:
huggingface/peft#561
I'm closing this PR as redundant. Wait for the merge in PEFT, then bump the version in the webui.

@FartyPants FartyPants closed this Jun 13, 2023
@oobabooga
Owner

Thanks for the update @FartyPants

@younesbelkada

Should be fixed in huggingface/peft#575

@Nixellion

Nixellion commented Jun 18, 2023

But not yet fixed in webui, right?

We have multiple closed issues about this problem, but it doesn't seem to actually be fixed yet. That's misleading, imo.

@oobabooga
Owner

Should be fixed now 490a179

@93041025

93041025 commented Jun 19, 2023

Should be fixed now 490a179

I'm still encountering an error. I ran "update_wsl" and started with "start_wsl". When the model is loaded onto the CPU, it works fine; however, I'm still experiencing issues when loading it onto the GPU. git+https://github.com/huggingface/peft@03eb378eb914fbee709ff7c86ba5b1d033b89524 is included in my requirements.txt, but I am still encountering errors. Anyway, thank you!

(UPDATE) After manually reinstalling the package, it now functions properly. However, I'm unclear what the issue might be with the one-click install package (WSL).
