Loading ANY previously working LoRA on a 4-bit 128g GPTQ model fails
Relevant logs:
File "H:\oobabooga_windows\text-generation-webui\modules\LoRA.py", line 80, in add_lora_to_model
shared.model = PeftModel.from_pretrained(shared.model, Path(f"{shared.args.lora_dir}/{lora_names[0]}"), **params)
File "H:\oobabooga_windows\installer_files\env\lib\site-packages\peft\peft_model.py", line 169, in from_pretrained
PeftConfig.from_pretrained(model_id, subfolder=kwargs.get("subfolder", None), **kwargs).peft_type
File "H:\oobabooga_windows\installer_files\env\lib\site-packages\peft\utils\config.py", line 114, in from_pretrained
config = cls(**kwargs)
TypeError: PeftConfig.__init__() got an unexpected keyword argument 'dtype'
Is there an existing issue for this?
I have searched the existing issues
Reproduction
Load TheBloke_Wizard-Vicuna-13B-Uncensored-GPTQ,
then try to load any LoRA that worked previously; it always results in the same error.
Screenshot
No response
Logs
The code in LoRA.py adds params['dtype'] and params['device_map']:

if not shared.args.cpu:
    params['dtype'] = shared.model.dtype
    if hasattr(shared.model, "hf_device_map"):
        params['device_map'] = {"base_model.model." + k: v for k, v in shared.model.hf_device_map.items()}
    elif shared.args.load_in_8bit:
        params['device_map'] = {'': 0}
This will ultimately result in the "unexpected keyword argument 'dtype'" error,
because kwargs are now passed through (unlike in the previous pull)
down in peft's config.py:
config = cls(**kwargs)
System Info
Windows
Describe the bug
I FOUND THE CULPRIT:
Two days ago, peft was bumped from git+https://github.com/huggingface/peft@3714aa2fff158fdfa637b2b65952580801d890b2
to git+https://github.com/huggingface/peft@e45529b149c7f91ec1d4d82a5a152ef56c56cb94.
When moving back to git+https://github.com/huggingface/peft@3714aa2fff158fdfa637b2b65952580801d890b2,
LoRAs load again, because the old version of peft didn't push kwargs into PeftConfig.
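Until this is fixed upstream (or peft is pinned back to the older commit), a hypothetical workaround on the webui side would be to split out the runtime-only keys before calling PeftModel.from_pretrained. The helper below is a sketch under that assumption; split_peft_params and RUNTIME_ONLY_KEYS are invented names, not part of either codebase.

```python
# Hypothetical workaround: separate config-safe kwargs from the
# runtime-only keys ('dtype', 'device_map') that the newer peft now
# forwards into PeftConfig.__init__.
RUNTIME_ONLY_KEYS = {"dtype", "device_map"}

def split_peft_params(params):
    """Return (config_safe, runtime_only) dicts from a params dict."""
    config_safe = {k: v for k, v in params.items() if k not in RUNTIME_ONLY_KEYS}
    runtime_only = {k: v for k, v in params.items() if k in RUNTIME_ONLY_KEYS}
    return config_safe, runtime_only

safe, runtime = split_peft_params(
    {"dtype": "float16", "device_map": {"": 0}, "subfolder": None}
)
print(sorted(safe))     # ['subfolder']
print(sorted(runtime))  # ['device_map', 'dtype']
```

Only the config-safe dict would then be forwarded to from_pretrained; the runtime-only values would have to be applied separately, which is why pinning peft back is the simpler interim fix.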