
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization #609

Open
mj2688 opened this issue Nov 27, 2023 · 15 comments

Comments

@mj2688

mj2688 commented Nov 27, 2023

When I fine-tune llama2-7B with LoRA, the following error occurs:
Traceback (most recent call last):
File "/home/ubuntu/lora/alpaca-lora-main/finetune.py", line 290, in
fire.Fire(train)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/ubuntu/lora/alpaca-lora-main/finetune.py", line 280, in train
trainer.train()
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 1965, in _inner_training_loop
self._load_best_model()
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/transformers/trainer.py", line 2184, in _load_best_model
model.load_adapter(self.state.best_model_checkpoint, model.active_adapter)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/peft/peft_model.py", line 629, in load_adapter
adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/peft/utils/save_and_load.py", line 222, in load_peft_weights
adapters_weights = safe_load_file(filename, device=device)
File "/home/ubuntu/anaconda3/envs/loar/lib/python3.9/site-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization


In checkpoint-1000, the adapter is saved as adapter_model.safetensors, but the official fine-tuned weights I checked are in the adapter_model.bin format. Why is that?
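For anyone debugging this: the error means the safetensors library could not parse the JSON header at the start of adapter_model.safetensors, which usually indicates the file was written empty or truncated. Below is a minimal inspection sketch (not from this thread; the checkpoint path is illustrative) that reads the header the way the format defines it, an 8-byte little-endian length followed by a JSON blob:

import json
import struct
from pathlib import Path

def inspect_safetensors_header(path):
    """Report whether the safetensors header at `path` can be parsed."""
    data = Path(path).read_bytes()
    if len(data) < 8:
        print(f"{path}: only {len(data)} bytes, no room for a header")
        return
    # First 8 bytes: little-endian u64 giving the length of the JSON header.
    (header_len,) = struct.unpack("<Q", data[:8])
    if 8 + header_len > len(data):
        print(f"{path}: header claims {header_len} bytes, file has {len(data)}")
        return
    try:
        header = json.loads(data[8:8 + header_len])
    except json.JSONDecodeError as err:
        print(f"{path}: header is not valid JSON ({err})")
        return
    print(f"{path}: OK, {len(header)} header entries, e.g. {list(header)[:3]}")

# Hypothetical checkpoint path, following the traceback above.
inspect_safetensors_header("checkpoint-1000/adapter_model.safetensors")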

@RiccardoIzzo

I'm currently having the same problem. Are you using a well-known dataset (such as Alpaca) or a custom one? @mj2688 By the way, I noticed that with only a few epochs this doesn't happen.

@mj2688
Author

mj2688 commented Nov 29, 2023

I'm currently having the same problem. Are you using a well-known dataset (such as Alpaca) or a custom one? @mj2688 By the way, I noticed that with only a few epochs this doesn't happen.

I use the same dataset as in the example code (finetune.py):

data_path: str = "yahma/alpaca-cleaned"

I referred to this tutorial and deleted the torch.compile call in finetune.py, but it still doesn't work:
huggingface/transformers#27397

@RiccardoIzzo

You can find the fix reported in this issue. This solved the InvalidHeaderDeserialization error for me.

@Concyclics

Have you fixed this problem? I'm currently facing the same issue.

@RiccardoIzzo

Yes, I solved it. You have to comment out these lines in finetune.py.
The reason is that there is currently an incompatibility between PyTorch and the PEFT library, as reported here.

@mj2688
Author

mj2688 commented Dec 8, 2023

Yes, I solved it. You have to comment out these lines in finetune.py. The reason is that there is currently an incompatibility between PyTorch and the PEFT library, as reported here.

Thanks, I also solved it!

@mj2688
Author

mj2688 commented Dec 8, 2023

Have you fixed this problem? I'm currently facing the same issue.

Delete this code in finetune.py:
old_state_dict = model.state_dict
model.state_dict = (
    lambda self, *_, **__: get_peft_model_state_dict(
        self, old_state_dict()
    )
).__get__(model, type(model))

if torch.__version__ >= "2" and sys.platform != "win32":
    model = torch.compile(model)
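A note on why removing these lines is thought to help: the explanation given in the linked transformers issue is that newer PEFT versions already extract the LoRA weights on save, so the extra state_dict patch ends up producing an empty or mismatched state dict, and the Trainer then writes an adapter_model.safetensors whose header cannot be deserialized. A minimal sketch of how the end of finetune.py reads once the patch and torch.compile are gone, assuming the trainer, resume_from_checkpoint, and output_dir names from the original script:

# With the state_dict patch and torch.compile removed, PEFT's own save
# path writes the adapter weights, so checkpoints should no longer
# contain a safetensors file with an unreadable header.
trainer.train(resume_from_checkpoint=resume_from_checkpoint)

model.save_pretrained(output_dir)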

@nihaowz

nihaowz commented Dec 25, 2023

Delete this code in finetune.py: [the state_dict patch and torch.compile block quoted above]

But even if I delete the above, I still get safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization.

@MING8276

MING8276 commented Jan 2, 2024

But even if I delete the above, I still get safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization.

Hi. I think the .safetensors file is not compatible with PEFT, so I deleted the xx.safetensors file and it worked.
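If you would rather avoid the .safetensors format through configuration instead of deleting files by hand, recent transformers and PEFT releases expose flags for falling back to the .bin format. A minimal sketch, assuming versions that support save_safetensors on TrainingArguments and safe_serialization on save_pretrained; the output directory is illustrative and model is the PeftModel from finetune.py:

from transformers import TrainingArguments

# Ask the Trainer to write checkpoint weights as .bin, so intermediate
# checkpoints contain adapter_model.bin instead of adapter_model.safetensors.
training_args = TrainingArguments(
    output_dir="./lora-alpaca",
    save_safetensors=False,
)

# Likewise, save the final adapter in .bin format.
model.save_pretrained("./lora-alpaca", safe_serialization=False)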

@nihaowz

nihaowz commented Jan 8, 2024

Hi. I think the .safetensors file is not compatible with PEFT, so I deleted the xx.safetensors file and it worked.

Do you mean deleting the file after fine-tuning and then running it again? Is this file adapter_model.safetensors?

@mj2688
Author

mj2688 commented Jan 8, 2024

Do you mean deleting the file after fine-tuning and then running it again? Is this file adapter_model.safetensors?

Before fine-tuning, I deleted this, and then it worked:

old_state_dict = model.state_dict
model.state_dict = (
    lambda self, *_, **__: get_peft_model_state_dict(
        self, old_state_dict()
    )
).__get__(model, type(model))

if torch.__version__ >= "2" and sys.platform != "win32":
    model = torch.compile(model)

@nihaowz

nihaowz commented Jan 8, 2024


I've tried that before, but it still doesn't work.

@tamanna-mostafa

I'm having this same issue (details here: huggingface/transformers#28742). Could anyone please help?

@tamanna-mostafa

Hi. I think the .safetensors file is not compatible with PEFT, so I deleted the xx.safetensors file and it worked.

@MING8276 Would you mind telling me what files you deleted?

@SHUSHENGQIGUI

But even if I delete the above, I still get safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization.

Did you solve this problem? I'm facing the same issue.
