Add new feature of SafeLoRA #2201

Open · wants to merge 29 commits into base: main

Commits (29):
842a424  change variablle names and modify the class of _SafetensorLoader (chiayi-hsu, Nov 6, 2024)
7610aa1  modify safelora.py (chiayi-hsu, Nov 6, 2024)
00dac0b  docs, refactor: Add the config and function description./ Modify the … (chiayi-hsu, Nov 6, 2024)
d962bdf  fix: Adding the dtype argument that users can select. (chiayi-hsu, Nov 6, 2024)
609891e  style: Adding the annotation of SafeLoraConfig. (chiayi-hsu, Nov 6, 2024)
65ad744  docs: Adding an example of safelora. (chiayi-hsu, Nov 6, 2024)
c682be3  Style: Change READEME of safelora. (chiayi-hsu, Nov 6, 2024)
8d7ea67  Update examples/safelora/README.md (chiayi-hsu, Nov 8, 2024)
9be2429  Update examples/safelora/README.md (chiayi-hsu, Nov 8, 2024)
5ee3c83  Update src/peft/utils/safelora.py (chiayi-hsu, Nov 12, 2024)
e8ab799  Merge remote-tracking branch 'upstream/main' into main (chiayi-hsu, Nov 14, 2024)
b27c9e2  docs/refactors: Add more steps of the inference example./ Modify safe… (chiayi-hsu, Nov 14, 2024)
71e9467  docs: Change README.md (chiayi-hsu, Nov 14, 2024)
9b0b06e  Update examples/safelora/README.md (chiayi-hsu, Nov 25, 2024)
4e1e702  Update examples/safelora/README.md (chiayi-hsu, Nov 25, 2024)
fdb7af5  Update src/peft/utils/safelora.py (chiayi-hsu, Nov 25, 2024)
7ec02e7  Update src/peft/utils/safelora.py (chiayi-hsu, Nov 25, 2024)
6bda1ba  Update src/peft/utils/safelora.py (chiayi-hsu, Dec 3, 2024)
d23a548  Update src/peft/utils/safelora.py (chiayi-hsu, Dec 3, 2024)
095c1a5  Update src/peft/utils/safelora.py (chiayi-hsu, Dec 3, 2024)
1350bde  Update src/peft/utils/safelora.py (chiayi-hsu, Dec 3, 2024)
dd39269  Update src/peft/utils/safelora.py (chiayi-hsu, Dec 3, 2024)
dacbd90  Update src/peft/utils/safelora.py (chiayi-hsu, Dec 3, 2024)
c1026a1  Merge remote-tracking branch 'upstream/main' into main (chiayi-hsu, Dec 3, 2024)
20fbc5a  Merge remote-tracking branch 'upstream/main' into main (chiayi-hsu, Jan 2, 2025)
ffa3f8d  feat: update argument defaults and rewrite documentation (chiayi-hsu, Jan 9, 2025)
d21bb8e  docs: update docstring and style (chiayi-hsu, Jan 9, 2025)
603b67b  Update src/peft/utils/safelora.py (chiayi-hsu, Jan 15, 2025)
4ac0d29  Update src/peft/utils/safelora.py (chiayi-hsu, Jan 15, 2025)
examples/safelora/README.md (46 additions, 0 deletions)
@@ -0,0 +1,46 @@
# Safe LoRA

The official code of Safe LoRA: The Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models
Member:

Let's add a sentence or two about what Safe LoRA does and when it could be interesting for users to use it, similar to the beginning of the docstring of apply_safelora.


## Quick Start

### Get Weights with SafeLoRA
Please import the `SafeLoraConfig` and `apply_safelora` first.
Then, fill in the paths for the base, aligned, and PEFT models according to your needs. There are two types of `select_layers_type`: `threshold` and `number`. The `threshold` type determines how many layers are projected based on the cosine-similarity threshold you set, while the `number` type directly specifies the number of projected layers (both variants are shown below). Setting `save_weights=True` saves the projected weights back to the PEFT model path, replacing the original weights.

```python

from peft.utils.safelora import SafeLoraConfig, apply_safelora

peft_path = "../finetuneLLM/finetuned_models/samsumBad-7b-fp16-peft-seed-42"
Member:

Let's also put a placeholder for this path. In this case, it's the same as <SafeLoRA-path> below, right?

config = SafeLoraConfig(
    base_model_path="meta-llama/Llama-2-7b-hf",
    aligned_model_path="TheBloke/Llama-2-7B-Chat-fp16",
Comment on lines +17 to +18 (Member):

Here, we use concrete examples like Llama 2 7B. Below, we use a placeholder for the model id: <base-model-id>, which should correspond to the base_model_path. Let's make this consistent: either use placeholders here (which I prefer) or use the real model path below.

    peft_model_path=peft_path,
    device="cuda",
    select_layers_type="threshold",
    save_weights=True,
)
final_lora_weight = apply_safelora(config)

Member:

Can we add a bit more to the example? For instance, how to save and load these weights?

Author:

I have added more descriptions to the example.
If you feel there are still any missing parts, please let me know.

```
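If you would rather pin an exact number of projected layers instead of tuning a similarity threshold, the same call accepts `select_layers_type="number"`. A minimal sketch, reusing the placeholder paths from above:

```python
config = SafeLoraConfig(
    base_model_path="meta-llama/Llama-2-7b-hf",
    aligned_model_path="TheBloke/Llama-2-7B-Chat-fp16",
    peft_model_path=peft_path,
    device="cuda",
    select_layers_type="number",  # project exactly num_proj_layers layers
    num_proj_layers=10,  # the config's default; adjust as needed
    save_weights=True,
)
final_lora_weight = apply_safelora(config)
```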
### Save SafeLoRA's Weights
If you set `save_weights=False` but still want to save the weights, you can use the following code.

```python
import os

from safetensors.torch import save_file

path = ... # your PEFT model path
save_file(final_lora_weight, os.path.join(path, "adapter_model.safetensors"))
Member:

Here, path would be the same as peft_path above, right? If so, let's use the same name.

```

### Use SafeLoRA Model
Next, you can load the base model together with the PEFT model itself to obtain a model that has both downstream-task utility and alignment.

```python
from transformers import AutoModelForCausalLM
from peft import PeftConfig, PeftModel

model = AutoModelForCausalLM.from_pretrained(<base-model-id>)
model = PeftModel.from_pretrained(model, <SafeLoRA-path>)
```
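From here the result behaves like any other causal language model. A quick generation smoke test could look like the following sketch (the tokenizer id is assumed to match the base model):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(<base-model-id>)
inputs = tokenizer("Summarize the following dialogue: ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```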
Member:

There is no mention of the safelora_inference.py script in this README. Let's mention this script with a short explanation of what it does.

examples/safelora/safelora_inference.py (28 additions, 0 deletions)
@@ -0,0 +1,28 @@
import os
Member:

Let's add the copyright notice.


from safetensors.torch import save_file
from transformers import AutoModelForCausalLM

from peft import PeftModel
from peft.utils.safelora import SafeLoraConfig, apply_safelora


peft_path = "../finetuneLLM/finetuned_models/samsumBad-7b-fp16-peft-seed-42"
config = SafeLoraConfig(
    base_model_path="meta-llama/Llama-2-7b-hf",
    aligned_model_path="TheBloke/Llama-2-7B-Chat-fp16",
    peft_model_path=peft_path,
    device="cuda",
    select_layers_type="threshold",
    save_weights=True,
)

final_lora_weight = apply_safelora(config)

save_file(
    final_lora_weight,
    f"{os.path.join('../finetuneLLM/finetuned_models/samsumBad-7b-fp16-peft-seed-42', 'adapter_model.safetensors')}",
)
Comment on lines +22 to +25 (Member):
Can this be removed since we have already set save_weights=True above?


model = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-Chat-fp16")
model = PeftModel.from_pretrained(model, peft_path)
src/peft/utils/loftq_utils.py (19 additions, 14 deletions)
@@ -23,7 +23,7 @@

import torch
from huggingface_hub import snapshot_download
-from huggingface_hub.errors import HFValidationError, LocalEntryNotFoundError
+from huggingface_hub.errors import HFValidationError
from safetensors import SafetensorError, safe_open
from transformers.utils import cached_file
from transformers.utils.hub import get_checkpoint_shard_files
@@ -267,26 +267,31 @@ class _SafetensorLoader:

"""

-    def __init__(self, peft_model, model_path):
+    def __init__(self, peft_model_or_model_id, model_path=None, local_files_only=True):
         if model_path is None:
-            try:
-                model_path = snapshot_download(peft_model.base_model.config._name_or_path, local_files_only=True)
-            except (AttributeError, HFValidationError) as exc:
-                raise ValueError(
-                    "The provided model does not appear to be a transformers model or is a local model. In this case, "
-                    "you must pass the model_path argument that points to the safetensors file."
-                ) from exc
-            except LocalEntryNotFoundError as exc:
-                raise ValueError(
-                    "The model.safetensors file must be present on disk, but it could not be found."
-                ) from exc
+            if isinstance(peft_model_or_model_id, str):
+                name_or_path = peft_model_or_model_id
+                base_model_prefix = None
+            else:
+                name_or_path = peft_model_or_model_id
+                base_model_prefix = getattr(peft_model_or_model_id.get_base_model(), "base_model_prefix", None)
+            if os.path.exists(name_or_path):
+                model_path = name_or_path
+            else:
+                try:
+                    model_path = snapshot_download(name_or_path, local_files_only=local_files_only)
+                except (AttributeError, HFValidationError):
+                    raise ValueError(
+                        "The provided model does not appear to be a transformers model or is a local model. In this case, "
+                        "you must pass the model_path argument that points to the safetensors file."
+                    )

         suffix = "model.safetensors"
         if not model_path.endswith(suffix):
             model_path = os.path.join(model_path, suffix)

         self.model_path = model_path
-        self.base_model_prefix = getattr(peft_model.get_base_model(), "base_model_prefix", None)
+        self.base_model_prefix = base_model_prefix
         self.prefix = "base_model.model."
         self.is_sharded = False
         self.weight_map = None
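For context, the effect of this refactor is that `_SafetensorLoader` can now be constructed from a plain model id string as well as from a PeftModel. A hypothetical usage sketch (not part of the diff):

```python
from peft.utils.loftq_utils import _SafetensorLoader

# Hypothetical: resolve a Hub model id to its local snapshot and read one tensor.
# local_files_only=False permits a download if the snapshot is not cached.
loader = _SafetensorLoader("meta-llama/Llama-2-7b-hf", local_files_only=False)
weight = loader.get_tensor("model.embed_tokens.weight")
print(weight.shape)
```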
src/peft/utils/safelora.py (256 additions, 0 deletions)
@@ -0,0 +1,256 @@
# Copyright 2024-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Reference paper: https://arxiv.org/abs/2405.16833


import copy
import os
from dataclasses import dataclass, field
from typing import Literal

import torch
from safetensors import safe_open
from safetensors.torch import save_file

import peft
from peft import PeftConfig

from .loftq_utils import _SafetensorLoader as SafetensorLoader


@dataclass
class SafeLoraConfig:
"""
This is the configuration class to store the configuration of a [`safeLora`].
Member:

Regarding [safeLora]: There is no object of this name, right? Therefore, this syntax will not work. We could rephrase this to:

This is the configuration class to set the parameters for SafeLoRA.

Then add a link to the paper.


    Args:
        base_model_path (`str`):
            The path of the base model for obtaining the aligned matrix. The base model should be one that has not
            undergone training techniques such as RLHF or SFT, often referred to as an uncensored model. The path can
            be either a local path or a Hugging Face model ID.

        aligned_model_path (`str`):
            The path of the aligned model for obtaining the aligned matrix. The aligned model should be one that has
            been trained using techniques such as RLHF or SFT. The path can be either a local path or a Hugging Face
            model ID.

        peft_model_path (`str`):
            The path of the LoRA weights and config.

        select_layers_type (`Literal["threshold", "number"]`):
            Specifies the method for selecting projection layers. The value must be a string and can only be either
            "threshold" or "number". Use "threshold" to set a cosine similarity threshold or "number" to specify the
            number of layers directly.

        threshold (`float`):
            The threshold of cosine similarity for selecting projected layers.

        num_proj_layers (`int`):
            The number of projected layers.

        device (`str`):
            Device that is used for SafeLoRA (cuda or cpu).

        save_weights (`bool`):
            Whether to override the original LoRA file with SafeLoRA weights.

        local_files_only (`bool`):
            Set this value to True to work only with local files (no downloads).

        dtype (`torch.dtype`):
            Data type for model weights, e.g., torch.float32 or torch.bfloat16.
    """

    base_model_path: str = field(
        metadata={"help": "The path of the base model for obtaining the aligned matrix."},
    )

    aligned_model_path: str = field(
        metadata={"help": "The path of the aligned model for obtaining the aligned matrix."},
    )

    peft_model_path: str = field(
        metadata={"help": "The path of the LoRA wieghts and configs."},
Member (suggested change):

-        metadata={"help": "The path of the LoRA wieghts and configs."},
+        metadata={"help": "The path of the LoRA weights and config."},

Member:

Typo is still there.

    )

    select_layers_type: Literal["threshold", "number"] = field(
        default="number",
        metadata={"help": "How to select projection layers? options: [threshold, number]."},
    )

    threshold: float = field(
        default=0.5,
        metadata={"help": "The threshold of cosine similarity for selecting projected layers."},
    )

    num_proj_layers: int = field(
        default=10,
        metadata={"help": "The number of projected layers."},
    )

    device: str = field(
        default="cuda",
        metadata={"help": "Device that is used for SafeLoRA (cuda or cpu)."},
    )

    save_weights: bool = field(
        default=True,
        metadata={"help": "Whether to override the original LoRA file with SafeLoRA weights."},
    )
    local_files_only: bool = field(
        default=False,
        metadata={"help": "Set this value to True to work only with local files (no downloads)."},
    )

    dtype: torch.dtype = field(
        default=torch.bfloat16,
        metadata={
            "help": "Data type for model weights, e.g., torch.float32 or torch.bfloat16. If your device is CPU, you should use torch.float32."
        },
    )

    def __post_init__(self):
        if self.base_model_path is None:
            raise ValueError("base_model_path cannot be None.")
        if self.aligned_model_path is None:
            raise ValueError("aligned_model_path cannot be None.")
        if self.peft_model_path is None:
            raise ValueError("peft_model_path cannot be None.")
Comment on lines +125 to +130 (Member):
Since there are no longer any default values for these fields, I don't believe we need to perform these checks anymore.



def get_aligned_matrix(base_model_path, aligned_model_path, peft_config, safelora_config):
    """
    Get projected matrix by following the config (target_modules) from the peft model. The dimensions between the base
    model's weights and the aligned model's weights should be the same.
    """
    sl_align = SafetensorLoader(aligned_model_path, local_files_only=safelora_config.local_files_only)
    sl_base = SafetensorLoader(base_model_path, local_files_only=safelora_config.local_files_only)

    base_model_parameters = [
        name for name in sl_base.weight_map.keys() if any(v in name for v in list(peft_config.target_modules))
Member:

This check for v in list(peft_config.target_modules) is not always valid. E.g., target_modules can be a str in which case a regex match is executed. There can be other edge cases too. I think what should work is to use LoraModel._check_target_module_exists(peft_config, name) (here).

Author:

I have tried to use LoraModel._check_target_module_exists(peft_config, name).
However, since the model here is not a PEFT model but a complete model, using its name to find the target module results in the module not being found.

Member:

Hmm, this should work for non-PEFT models. It's a bit hard to check this, as we don't have unit tests yet, so let's keep this discussion open and revisit it later.
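For reference, the reviewer's suggestion would look roughly like this untested sketch, using the static matcher that LoraModel inherits from BaseTuner:

```python
from peft.tuners.lora import LoraModel

# Sketch: match weight names the same way LoraModel resolves target_modules,
# which also covers the case where target_modules is a str treated as a regex.
base_model_parameters = [
    name for name in sl_base.weight_map.keys() if LoraModel._check_target_module_exists(peft_config, name)
]
```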

    ]

    align_model_parameters = [
        name for name in sl_align.weight_map.keys() if any(v in name for v in list(peft_config.target_modules))
    ]
Member:

Should we also check that base_model_parameters and align_model_parameters are the same?

Author:

I have added a check to verify if the model weights are the same.

+ if (sl_base.get_tensor(name_base) == sl_align.get_tensor(name_align)).all():
+     raise ValueError("The weights of the base Model and the aligned Model should be different.")

Member:

I meant something else. Would we expect that base_model_parameters == align_model_parameters? If not, under what circumstances would they differ?

Member:

Still open.
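If such a check were added, it could be as small as this sketch (illustrative only, not part of this PR):

+ if base_model_parameters != align_model_parameters:
+     raise ValueError("The base and aligned models should expose the same target-module weight names.")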

    safety_vector = []
    for name_base, name_align in zip(base_model_parameters, align_model_parameters):
        if sl_base.get_tensor(name_base).shape != sl_align.get_tensor(name_align).shape:
            raise ValueError(
                "The dimensions of the base model's weight should be the same with the aligned model's weight."
            )
        if (sl_base.get_tensor(name_base) == sl_align.get_tensor(name_align)).all():
            raise ValueError("The weights of the base Model and the aligned Model should be different.")
Member:

I can see this argument if all the weights were identical. But should we really expect that each one is different? Maybe the aligned model froze some layers and only changed a few? I don't think we should raise an error in that case.

Author:

In our setting, the base model refers to a model that has not undergone training with techniques such as RLHF or SFT, while the aligned model refers to one that has been trained using these techniques. Currently, open-source models such as LLaMA 2 or 3, Mistral, and Gemma have released both base models and chat/instruct (aligned) models. Therefore, we expect the weights of the base model and the aligned model to differ.

Member:

I understand the difference between aligned model and base model. However, it should be possible to align a model without changing each and every parameter, right? E.g., in the future we could have a new model where the aligned model only changes half of the weights; I don't see why that couldn't be possible. Does the SafeLoRA algorithm really require each parameter to be different? Can we not skip the layers if the parameters are identical?
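The skip the reviewer describes could look like this sketch (illustrative only, replacing the ValueError above):

+ if (sl_base.get_tensor(name_base) == sl_align.get_tensor(name_align)).all():
+     continue  # layer unchanged by alignment; nothing to project for it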

        vec = sl_base.get_tensor(name_base) - sl_align.get_tensor(name_align)
        vec = vec.to(dtype=safelora_config.dtype, device=safelora_config.device)
        vec = torch.mm(vec, vec.t()) / torch.norm(vec)
        safety_vector.append(vec.detach().cpu())
    return safety_vector


def project_weights(safelora_config, peft_weights, v):
    ori_peft_weights = copy.deepcopy(peft_weights)
    vars_names_LoRA_A = [name for name in peft_weights.keys() if "lora_A" in name]
    vars_names_LoRA_B = [name for name in peft_weights.keys() if "lora_B" in name]
    num_projected_layers = 0
    dis = []
    cos_total = []
    for idx, (name_A, name_B) in enumerate(zip(vars_names_LoRA_A, vars_names_LoRA_B)):
        A = ori_peft_weights[name_A]
        P = v[idx].to(safelora_config.dtype).to(safelora_config.device)
        W = torch.mm(P, ori_peft_weights[name_B])
        fW = torch.mm(W, A)
        ori = torch.mm(ori_peft_weights[name_B], A)
        cos = torch.nn.functional.cosine_similarity(fW.reshape(1, -1), ori.reshape(1, -1))
        cos_total.append(cos.item())
        if cos <= safelora_config.threshold:
            num_projected_layers += 1
            peft_weights[name_B] = W
        else:
            peft_weights[name_B] = ori_peft_weights[name_B]

        dist = 1 / (1 + torch.norm(peft_weights[name_B].reshape(1, -1) - W.reshape(1, -1)))

        dis.append(dist.item())
    return peft_weights, cos_total


def apply_safelora(safelora_config: SafeLoraConfig):
"""

Member (suggested change):

The official code of Safe LoRA: The Silver Lining of Reducing Safety Risks when Finetuning Large Language Models:
https://arxiv.org/abs/2405.16833

After fine-tuning large language models (LLMs) using LoRA, the alignment of the resulting models may decrease.
Therefore, applying `apply_safelora()` is intended to help preserve the alignment of the final models.

It is important to note that the model weights of the aligned model and the base model must be of the same size.
Member:
Let's also mention that right now, only safetensors format is supported.

Additionally, for SafeLoRA, only the safetensors format is supported.

    Args:
        safelora_config: The config of SafeLora.

    Returns:
        `torch.nn.Module`: The Lora model is applied SafeLoRA.
Member:

This is incorrect, the return value is a state_dict containing the PEFT weights.



    Example:
Member:

Let's use the docstring style for examples used elsewhere in PEFT, e.g. here:


    ```py
    >>> from peft.utils.safelora import SafeLoraConfig, apply_safelora

    >>> config = SafeLoraConfig(
    ...     base_model_path="meta-llama/Llama-2-7b-hf",
    ...     aligned_model_path="TheBloke/Llama-2-7B-Chat-fp16",
    ...     peft_model_path="LisaSchunke/llama-2-7b-peft-finetuned-20000-dataset",
    ...     device="cuda",
    ...     select_layers_type="threshold",
    ...     save_weights=True,
    ... )

    >>> final_lora_weight = apply_safelora(config)
    ```

"""

peft_config = PeftConfig.from_pretrained(safelora_config.peft_model_path)
    if peft_config.use_dora:
        raise ValueError("SafeLoRA do not support DoRA.")

    projected_matrix = get_aligned_matrix(
        safelora_config.base_model_path, safelora_config.aligned_model_path, peft_config, safelora_config
    )

    with safe_open(
        f"{os.path.join(safelora_config.peft_model_path, peft.utils.constants.SAFETENSORS_WEIGHTS_NAME)}",
        framework="pt",
        device=safelora_config.device,
    ) as f:
        peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
    if safelora_config.select_layers_type == "threshold":
        final_weights, _ = project_weights(safelora_config, peft_weights, projected_matrix)
    elif safelora_config.select_layers_type == "number":
        _, cos = project_weights(safelora_config, peft_weights, projected_matrix)
        thrs = torch.sort(torch.Tensor(cos))[0][: safelora_config.num_proj_layers][-1]
        safelora_config.threshold = thrs
        final_weights, _ = project_weights(safelora_config, peft_weights, projected_matrix)

    if safelora_config.save_weights:
        save_file(
            final_weights,
            f"{os.path.join(safelora_config.peft_model_path, peft.utils.constants.SAFETENSORS_WEIGHTS_NAME)}",
        )

    return final_weights