Sending peft back to original (issue with kwargs in PeftConfig) and lora.py changes #2629
Conversation
Doesn't work for me on Linux, still getting an error:
What command are you using exactly @FartyPants and @ashleykleynhans? Is this monkeypatch + gptq-for-llama, 16bit, autogptq?
Never mind, my bad. I deleted my venv and recreated it, and can confirm that this fix resolves the issue. Thank you, nice work!
It happens when applying a LoRA, and also when using the
It only affects loading a LoRA (using gptq_for_llama); the new peft commit added kwargs in PEFT_TYPE_TO_CONFIG_MAPPING. (I opened an issue with peft — they clearly didn't think it through.) As for the other edit in lora.py, it is there to avoid having 'default' as the adapter_name and to use the actual LoRA name instead. This makes life easier later when you want to switch between adapters, as I do: https://github.com/FartyPants/Loraswitch
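The adapter-naming point can be illustrated with a minimal sketch (the class and names below are purely illustrative, not the actual lora.py code): keying each adapter by its real LoRA name, rather than a shared 'default', makes later switching unambiguous.

```python
# Illustrative sketch only (not the actual lora.py code): register each
# adapter under its real LoRA name rather than the generic 'default',
# so switching can refer to a specific adapter by name.

class AdapterRegistry:
    def __init__(self):
        self._adapters = {}
        self.active = None

    def register(self, lora_name, adapter):
        # Keyed by the actual LoRA name, never the shared name 'default'.
        self._adapters[lora_name] = adapter

    def switch(self, lora_name):
        # Look up and activate an adapter by its own name.
        self.active = lora_name
        return self._adapters[lora_name]

registry = AdapterRegistry()
registry.register("my-style-lora", object())
registry.register("my-code-lora", object())
registry.switch("my-code-lora")
print(registry.active)  # my-code-lora
```

With a single adapter always named 'default', the `switch` call above would have nothing meaningful to select between.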
Got info back from the PEFT people; this is fixed in:
Thanks for the update @FartyPants |
Should be fixed in huggingface/peft#575 |
But it's not yet fixed in the webui, right? We have multiple closed issues about this problem, but the problem itself doesn't seem to be fixed yet. That's misleading, imo.
Should be fixed now 490a179 |
I'm still encountering an error. I ran "update_wsl" and started with "start_wsl". When the model is loaded onto the CPU, it works fine; however, I'm still experiencing issues when loading it onto the GPU. git+https://github.com/huggingface/peft@03eb378eb914fbee709ff7c86ba5b1d033b89524 is included in my requirements.txt, but I am still encountering errors. Anyway, thank you! (UPDATE) After manually reinstalling the package, it now functions properly. However, I'm unclear as to what the issue might be with the one-click install package (WSL).
See peft_model.py line 169 (the `# load the config` section): the new repo passes kwargs through to PeftConfig, which will break ooba. PeftConfig doesn't accept 'dtype' or 'device_map'.
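A rough sketch of the kind of workaround this implies (the function and constant names are illustrative, not the actual webui or peft code): strip the keys PeftConfig rejects and forward only the remainder when building the config.

```python
# Illustrative workaround sketch (not the actual webui/peft patch): drop
# kwargs such as 'dtype' and 'device_map' that PeftConfig rejects, and
# pass only the remaining kwargs on to the config constructor.

UNSUPPORTED_BY_PEFT_CONFIG = {"dtype", "device_map"}

def split_config_kwargs(kwargs):
    """Split kwargs into (accepted, dropped) dicts for the config call."""
    accepted = {k: v for k, v in kwargs.items()
                if k not in UNSUPPORTED_BY_PEFT_CONFIG}
    dropped = {k: v for k, v in kwargs.items()
               if k in UNSUPPORTED_BY_PEFT_CONFIG}
    return accepted, dropped

ok, removed = split_config_kwargs(
    {"dtype": "float16", "device_map": "auto", "inference_mode": True}
)
print(ok)       # {'inference_mode': True}
print(removed)  # {'dtype': 'float16', 'device_map': 'auto'}
```

The dropped keys (here `dtype` and `device_map`) would still be used for model loading; they just must not reach the PeftConfig constructor.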