delete "init(model)" because it is not working and replace AutoModelForCausalLM by AutoAdapterModel
trainer.train
Environment info
adapters version: 0.1.1

Information
Model I am using: ai-forever/rugpt3large_based_on_gpt2
Language I am using the model on: Russian
Adapter setup I am using (if any): BnConfig, SeqBnConfig, DoubleSeqBnConfig, PrefixTuningConfig, LoRAConfig, IA3Config, PromptTuningConfig, MAMConfig, UniPELTConfig
The problem arises when using this Colab notebook: https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing&pli=1#scrollTo=Ybeyl20n3dYH
The task I am working on is: fine-tuning the quantized model with an adapter
To reproduce
Steps to reproduce the behavior:
1. In the notebook, delete the `init(model)` call (it is not working) and replace `AutoModelForCausalLM` with `AutoAdapterModel`.
2. Run `trainer.train()` (a sketch of the full setup is below).
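A minimal sketch of the setup — the 4-bit bitsandbytes config, the SeqBnConfig choice, the adapter/head names, and the toy dataset are assumptions for illustration, not the notebook's exact code:

```python
# Minimal reproduction sketch. Assumptions: 4-bit bitsandbytes quantization,
# a SeqBnConfig bottleneck adapter, and a toy in-memory dataset.
from adapters import AdapterTrainer, AutoAdapterModel, SeqBnConfig
from datasets import Dataset
from transformers import AutoTokenizer, BitsAndBytesConfig, TrainingArguments

model_name = "ai-forever/rugpt3large_based_on_gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# AutoAdapterModel replaces AutoModelForCausalLM, so no separate adapters.init(model) call.
model = AutoAdapterModel.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Attach a non-LoRA bottleneck adapter plus a causal LM head; train only the adapter.
model.add_adapter("bottleneck", config=SeqBnConfig())
model.add_causal_lm_head("bottleneck")
model.train_adapter("bottleneck")

# Toy dataset: identical short sentences, with labels = input_ids for causal LM.
def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=64)
    out["labels"] = out["input_ids"].copy()
    return out

train_dataset = Dataset.from_dict({"text": ["пример текста для обучения"] * 8}).map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = AdapterTrainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()  # raises the ValueError quoted under "Real behavior" below
```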
Expected behavior
Fine-tuning the model succeeds (the adapter is trained on top of the quantized base model).
Real behavior
I'm trying to add an adapter to a quantized model. I would like to use not only the LoRA adapter available in PEFT, but also the other adapter types. However, as soon as I start training, the following error appears:
ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft for more details
I tried to use PEFT features like prepare_model_for_kbit_training, but that way I could not add a non-PEFT adapter.
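Roughly, that PEFT attempt looks like this (a sketch; the LoraConfig and 4-bit settings are assumptions for illustration):

```python
# Sketch of the attempted PEFT workaround. Assumptions: same base model and
# 4-bit config as in the reproduction above.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "ai-forever/rugpt3large_based_on_gpt2",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # casts norm layers to fp32, enables input grads

# This path works, but only for PEFT's own adapter types such as LoRA:
peft_model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))

# There is no analogous way to attach the adapters-library configs listed above
# (SeqBnConfig, PrefixTuningConfig, ...) to the prepared quantized model, so with
# those adapter types training still fails with the ValueError quoted above.
```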
Hey @mkgs210, training on quantized models in the style of e.g. QLoRA is not currently supported by the released version of adapters. There's a WIP pull request for adding this support here though: #663.