DeepSpeed ZeRO Stage 3 not compatible with bnb quantization, leads to shape error #29266
Comments
Hi @Qizhang-Feng, yes, DeepSpeed and bitsandbytes aren't compatible with each other.
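A minimal sketch of one way to guard against the combination, assuming the goal is a single script that runs both with and without ZeRO-3 (the model id and 4-bit settings are illustrative assumptions, not taken from this issue):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from transformers.integrations import is_deepspeed_zero3_enabled

# Only build a bnb quantization config when ZeRO-3 is NOT active; under
# ZeRO-3, load the model unquantized and let DeepSpeed partition the weights.
quantization_config = None
if not is_deepspeed_zero3_enabled():
    quantization_config = BitsAndBytesConfig(load_in_4bit=True)  # illustrative

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed model id
    quantization_config=quantization_config,
)
```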
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hi!
System Info
- transformers version: 4.38.1
- distributed_type: DEEPSPEED
- mixed_precision: fp16
- use_cpu: False
- debug: True
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
Who can help?
@pacman100 @SunMarc @younesbelkada
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
deepspeed config yaml:
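(The yaml itself was not captured here; the following is a reconstruction from the accelerate settings in the System Info section above, so the exact field layout is an assumption:)

```yaml
distributed_type: DEEPSPEED
mixed_precision: fp16
debug: true
num_processes: 8
num_machines: 1
machine_rank: 0
rdzv_backend: static
same_network: true
main_training_function: main
downcast_bf16: 'no'
deepspeed_config:
  gradient_accumulation_steps: 1
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
```

A minimal sketch of the kind of loading call that produces the trace below when launched with that config (the model id, file name, and 4-bit settings are illustrative assumptions, not taken from this report):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Launched with: accelerate launch --config_file zero3_config.yaml repro.py
# (zero3_config.yaml is the file sketched above; the name is hypothetical.)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                   # illustrative bnb settings
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # assumed LLaMA 2 7B checkpoint
    quantization_config=bnb_config,
)
```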
return model_class.from_pretrained(
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3502, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3926, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/modeling_utils.py", line 805, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 345, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([32000, 4096]) in "weight" (which has shape torch.Size([0])), this look incorrect.
Expected behavior
Hello,
I encountered an issue while attempting to load the LLaMA 2 7B model with a bitsandbytes (bnb) quantization configuration under DeepSpeed ZeRO Stage 3: the load fails with the shape error above. I suspect it is related to the meta-device placeholders that ZeRO-3 creates for partitioned parameters, but I am uncertain whether this behavior is a bug or intended.
Could you please clarify whether this is a known issue, and whether there are any suggested workarounds? Thank you!
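For context on the placeholder suspicion: under ZeRO-3 with zero3_init_flag: true, DeepSpeed partitions each parameter as it is created, so on any single rank the parameter's visible shape is torch.Size([0]) until it is explicitly gathered, which matches the empty "weight" in the error. A hedged illustration (must run under a distributed launcher such as deepspeed or accelerate launch; the embedding dimensions mirror the error message):

```python
import torch
import deepspeed

# Inside zero.Init, parameters are partitioned across ranks at construction,
# leaving a size-0 placeholder tensor on each rank.
with deepspeed.zero.Init():
    emb = torch.nn.Embedding(32000, 4096)

print(emb.weight.shape)  # torch.Size([0]) -- the placeholder the loader hits

# The full tensor only materializes inside an explicit gather.
with deepspeed.zero.GatheredParameters(emb.weight):
    print(emb.weight.shape)  # torch.Size([32000, 4096])
```

The bnb loading path copies full-shape tensors into modules via accelerate's set_module_tensor_to_device, which appears to have no notion of these partitioned placeholders, hence the shape mismatch in the traceback.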