
Fix NotImplementedError for no bias. #946

Merged
merged 2 commits into huggingface:main on Sep 20, 2023

Conversation

@Datta0 (Contributor) commented on Sep 19, 2023

I tried creating a LoRA adapter for the LLaMA-2 model with the following config, but it fails with the error shown below. I took a look at the implementation, thought this change might fix it, and it indeed does.

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-hf',
    device_map='auto',
    torch_dtype=torch.bfloat16,
)

peft_config = LoraConfig(
    r=16,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=['q_proj', 'v_proj'],
    bias=None,  # passing None instead of the string "none" triggers the error
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(
    model,
    peft_config,
)

With the default active_adapter, the config's bias here is None, but the code only checks it against the string "none" (and the other string options), so it falls through to the bare raise NotImplementedError.
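The relevant branch looks roughly like this (a simplified, paraphrased sketch, not the exact upstream code):

# Simplified sketch of LoraModel._mark_only_adapters_as_trainable (paraphrased).
# With bias=None from the config above, none of the string comparisons match,
# so control falls through to the bare raise at the end.
def mark_only_adapters_as_trainable(model, bias):
    for name, param in model.named_parameters():
        if "lora_" not in name:
            param.requires_grad = False

    if bias == "none":
        return
    elif bias == "all":
        for name, param in model.named_parameters():
            if "bias" in name:
                param.requires_grad = True
    elif bias == "lora_only":
        for module in model.modules():
            # hypothetical stand-in for the isinstance(module, LoraLayer) check
            if hasattr(module, "lora_A") and getattr(module, "bias", None) is not None:
                module.bias.requires_grad = True
    else:
        # bias=None (or any other unexpected value) lands here
        raise NotImplementedError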

Here's the full error.

	"name": "NotImplementedError",
	"message": "",
	"stack": "---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
/home/datta0/temp.ipynb Cell 4 line 1
----> 1 peft_model = get_peft_model(
      2     model,
      3     peft_config
      4 )

File ~/.py3nv/lib/python3.10/site-packages/peft/mapping.py:106, in get_peft_model(model, peft_config, adapter_name)
    104 if peft_config.is_prompt_learning:
    105     peft_config = _prepare_prompt_learning_config(peft_config, model_config)
--> 106 return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)

File ~/.py3nv/lib/python3.10/site-packages/peft/peft_model.py:889, in PeftModelForCausalLM.__init__(self, model, peft_config, adapter_name)
    888 def __init__(self, model, peft_config: PeftConfig, adapter_name="default"):
--> 889     super().__init__(model, peft_config, adapter_name)
    890     self.base_model_prepare_inputs_for_generation = self.base_model.prepare_inputs_for_generation

File ~/.py3nv/lib/python3.10/site-packages/peft/peft_model.py:111, in PeftModel.__init__(self, model, peft_config, adapter_name)
    109 if not peft_config.is_prompt_learning:
    110     self.peft_config[adapter_name] = peft_config
--> 111     self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](
    112         self.base_model, self.peft_config, adapter_name
    113     )
    114     self.set_additional_trainable_modules(peft_config, adapter_name)
    115 else:

File ~/.py3nv/lib/python3.10/site-packages/peft/tuners/lora.py:274, in LoraModel.__init__(self, model, config, adapter_name)
    273 def __init__(self, model, config, adapter_name) -> None:
--> 274     super().__init__(model, config, adapter_name)

File ~/.py3nv/lib/python3.10/site-packages/peft/tuners/tuners_utils.py:88, in BaseTuner.__init__(self, model, peft_config, adapter_name)
     85 if not hasattr(self, "config"):
     86     self.config = {"model_type": "custom"}
---> 88 self.inject_adapter(self.model, adapter_name)
     90 # Copy the peft_config in the injected model.
     91 self.model.peft_config = self.peft_config

File ~/.py3nv/lib/python3.10/site-packages/peft/tuners/tuners_utils.py:227, in BaseTuner.inject_adapter(self, model, adapter_name)
    221 if not is_target_modules_in_base_model:
    222     raise ValueError(
    223         f"Target modules {peft_config.target_modules} not found in the base model. "
    224         f"Please check the target modules and try again."
    225     )
--> 227 self._mark_only_adapters_as_trainable()
    229 if self.peft_config[adapter_name].inference_mode:
    230     for n, p in self.model.named_parameters():

File ~/.py3nv/lib/python3.10/site-packages/peft/tuners/lora.py:412, in LoraModel._mark_only_adapters_as_trainable(self)
    410             m.bias.requires_grad = True
    411 else:
--> 412     raise NotImplementedError

NotImplementedError: 

@BenjaminBossan (Member)

Thanks for bringing up this issue. I think it was quite intentional not to allow None for the bias parameter in the config, as this parameter is only supposed to be a string. Therefore, I wouldn't change the code to allow None. However, it could be useful to augment the error message similarly to what you did, showing the user the value they passed and which values are actually allowed.

Also, technically, NotImplementedError is incorrect here; it should be a ValueError. I'm not sure whether we want to change that or keep it for consistency with the rest of the code base, which also uses it incorrectly.

@Datta0 (Contributor, Author) commented on Sep 19, 2023

I too think ValueError would be appropriate here. I can make that change and remove the check for None. How does that sound?

@BenjaminBossan (Member)

Let's keep the NotImplementedError for now, as there are multiple instances and maybe we do want to keep it for consistency (not only within PEFT, but also within the larger HF ecosystem).
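For illustration, the descriptive version could look roughly like this (a sketch only; the exact wording in the merged commit may differ):

# Sketch only: surface the offending value and the allowed options instead of
# raising a bare NotImplementedError. Exact wording in the merged commit may differ.
def check_bias_value(bias):
    if bias not in ("none", "all", "lora_only"):
        raise NotImplementedError(f"Requested bias: {bias}, is not implemented.")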

The previous commit makes sure the error is descriptive
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@BenjaminBossan (Member) left a comment


Thanks for adding the error message.

@BenjaminBossan merged commit ba0477f into huggingface:main on Sep 20, 2023
11 checks passed
@Datta0 deleted the peft_None_bias branch on Sep 20, 2023 at 09:33