All layer weights not getting saved when using accelerate + FSDP for OPT 2.7B #1054

Closed
2 of 4 tasks
lvnair3 opened this issue Feb 9, 2023 · 6 comments
Labels
solved The bug or feature request has been solved, but the issue is still open

Comments


lvnair3 commented Feb 9, 2023

System Info

- `Accelerate` version: 0.16.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Numpy version: 1.24.2
- PyTorch version (GPU?): 1.13.0+cu116 (True)

`Accelerate` default config:
        - compute_environment: LOCAL_MACHINE
        - distributed_type: FSDP
        - mixed_precision: no
        - use_cpu: False
        - dynamo_backend: NO
        - num_processes: 2
        - machine_rank: 0
        - num_machines: 1
        - rdzv_backend: static
        - same_network: True
        - main_training_function: main
        - deepspeed_config: {}
        - fsdp_config: {'fsdp_auto_wrap_policy': 'SIZE_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'BACKWARD_PRE', 'fsdp_min_num_params': 1000000, 'fsdp_offload_params': True, 'fsdp_sharding_strategy': 1, 'fsdp_state_dict_type': 'FULL_STATE_DICT'}
        - megatron_lm_config: {}
        - downcast_bf16: no
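One detail in the config worth noting: with `fsdp_auto_wrap_policy: SIZE_BASED_WRAP`, only modules whose parameter count reaches `fsdp_min_num_params` get their own FSDP unit. The layer norms in OPT-2.7B are far below that threshold, which lines up with the missing keys reported later being exactly the `*_layer_norm` weights and biases. This is a hypothesis about the cause, not a confirmed diagnosis; the arithmetic (assuming hidden size 2560 for facebook/opt-2.7b) is just:

```python
# Hedged sketch: compare a LayerNorm's parameter count against the
# SIZE_BASED_WRAP threshold from the fsdp_config above.
HIDDEN_SIZE = 2560               # assumed hidden dimension of opt-2.7b
FSDP_MIN_NUM_PARAMS = 1_000_000  # fsdp_min_num_params from the config

# A LayerNorm holds one weight vector and one bias vector of HIDDEN_SIZE each.
layer_norm_params = 2 * HIDDEN_SIZE
print(layer_norm_params)                          # 5120
print(layer_norm_params >= FSDP_MIN_NUM_PARAMS)   # False: far below threshold
```

So every `self_attn_layer_norm` and `final_layer_norm` sits outside the size-wrapped FSDP units, which may matter for how their tensors are gathered at save time.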

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • One of the scripts in the examples/ folder of Accelerate or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)
  • My own task or dataset (give details below)

Reproduction

SETUP

Task: fine-tuning facebook/opt-2.7b on Wikitext-2

Script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py

Dataset: Wikitext-2

The command used to run this is as follows:

accelerate launch run_clm_no_trainer.py \
    --model_name_or_path facebook/opt-2.7b \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 8 \
    --output_dir ./opt-2.7B-FP32 \
    --checkpointing_steps epoch \
    --max_train_steps 3
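For reference, the FSDP-aware saving pattern suggested in the Accelerate docs looks roughly like the sketch below (assumed API for Accelerate 0.16-era; `run_clm_no_trainer.py` may implement this differently). With `FULL_STATE_DICT`, `accelerator.get_state_dict()` is what gathers the sharded parameters onto the main process before writing:

```python
def save_full_checkpoint(accelerator, model, output_dir):
    """Sketch of FSDP-aware saving with Accelerate (assumed API, v0.16-era).

    get_state_dict() consolidates the sharded FSDP parameters so that
    save_pretrained writes a complete checkpoint, not just local shards.
    """
    state_dict = accelerator.get_state_dict(model)
    unwrapped_model = accelerator.unwrap_model(model)
    unwrapped_model.save_pretrained(
        output_dir,
        is_main_process=accelerator.is_main_process,
        save_function=accelerator.save,
        state_dict=state_dict,
    )
```

If the checkpoint above was written without gathering the full state dict first, that could explain shards that are missing some tensors.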

ERROR
The error occurs when loading the resulting checkpoint into the evaluation script here: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py

The command used to perform evaluation is:

CUDA_VISIBLE_DEVICES=0 python -u run_clm.py \
    --model_name_or_path ./opt-2.7B-FP32 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 2 \
    --output_dir ./test-output \
    --do_eval

The error message reports that certain weights were not initialized from the checkpoint:

>> Some weights of OPTForCausalLM were not initialized from the model checkpoint at ./opt-2.7B-FP32 and are newly initialized: ['model.decoder.layers.22.final_layer_norm.bias', 'model.decoder.layers.23.self_attn_layer_norm.bias', 'model.decoder.layers.3.final_layer_norm.weight', 'model.decoder.layers.27.self_attn_layer_norm.weight', 'model.decoder.layers.21.final_layer_norm.weight', 'model.decoder.layers.24.final_layer_norm.weight', 'model.decoder.layers.29.self_attn_layer_norm.weight', 'model.decoder.layers.13.final_layer_norm.bias', 'model.decoder.layers.11.final_layer_norm.bias', 'model.decoder.layers.14.self_attn_layer_norm.bias', 'model.decoder.layers.0.self_attn_layer_norm.weight', 'model.decoder.layers.18.final_layer_norm.bias', 'model.decoder.layers.2.self_attn_layer_norm.weight', 'model.decoder.layers.14.self_attn_layer_norm.weight', 'model.decoder.layers.19.self_attn_layer_norm.bias', 'model.decoder.layers.28.self_attn_layer_norm.weight', 'model.decoder.layers.22.self_attn_layer_norm.bias', 'model.decoder.layers.2.final_layer_norm.bias', 'model.decoder.layers.8.self_attn_layer_norm.weight', 'model.decoder.layers.16.self_attn_layer_norm.bias', 'model.decoder.layers.22.self_attn_layer_norm.weight', 'model.decoder.layers.1.final_layer_norm.weight', 'model.decoder.layers.7.final_layer_norm.weight', 'model.decoder.layers.1.self_attn_layer_norm.weight', 'model.decoder.layers.19.final_layer_norm.weight', 'model.decoder.layers.13.self_attn_layer_norm.bias', 'model.decoder.layers.18.self_attn_layer_norm.weight', 'model.decoder.layers.31.final_layer_norm.weight', 'model.decoder.layers.29.self_attn_layer_norm.bias', 'model.decoder.layers.11.self_attn_layer_norm.bias', 'model.decoder.layers.1.self_attn_layer_norm.bias', 'model.decoder.layers.3.self_attn_layer_norm.bias', 'model.decoder.layers.20.final_layer_norm.bias', 'model.decoder.layers.7.self_attn_layer_norm.bias', 'model.decoder.layers.0.final_layer_norm.bias', 'model.decoder.layers.13.final_layer_norm.weight', 
'model.decoder.layers.31.self_attn_layer_norm.bias', 'model.decoder.layers.4.final_layer_norm.weight', 'model.decoder.layers.17.final_layer_norm.weight', 'model.decoder.layers.30.self_attn_layer_norm.weight', 'model.decoder.layers.18.self_attn_layer_norm.bias', 'model.decoder.layers.25.self_attn_layer_norm.bias', 'model.decoder.layers.26.self_attn_layer_norm.bias', 'model.decoder.final_layer_norm.weight', 'model.decoder.layers.26.final_layer_norm.weight', 'model.decoder.layers.25.self_attn_layer_norm.weight', 'model.decoder.layers.31.final_layer_norm.bias', 'model.decoder.layers.3.final_layer_norm.bias', 'model.decoder.layers.8.self_attn_layer_norm.bias', 'model.decoder.layers.4.self_attn_layer_norm.bias', 'model.decoder.layers.29.final_layer_norm.weight', 'model.decoder.layers.16.final_layer_norm.weight', 'model.decoder.layers.28.final_layer_norm.weight', 'model.decoder.layers.11.final_layer_norm.weight', 'model.decoder.layers.20.final_layer_norm.weight', 'model.decoder.layers.3.self_attn_layer_norm.weight', 'model.decoder.layers.15.final_layer_norm.weight', 'model.decoder.layers.7.final_layer_norm.bias', 'model.decoder.layers.16.self_attn_layer_norm.weight', 'model.decoder.layers.11.self_attn_layer_norm.weight', 'model.decoder.final_layer_norm.bias', 'model.decoder.layers.6.self_attn_layer_norm.bias', 'model.decoder.layers.6.final_layer_norm.weight', 'model.decoder.layers.30.final_layer_norm.bias', 'model.decoder.layers.29.final_layer_norm.bias', 'model.decoder.layers.25.final_layer_norm.bias', 'model.decoder.layers.19.final_layer_norm.bias', 'model.decoder.layers.15.self_attn_layer_norm.bias', 'model.decoder.layers.9.self_attn_layer_norm.bias', 'model.decoder.layers.12.final_layer_norm.bias', 'model.decoder.layers.24.self_attn_layer_norm.bias', 'model.decoder.layers.5.self_attn_layer_norm.bias', 'model.decoder.layers.15.final_layer_norm.bias', 'model.decoder.layers.27.self_attn_layer_norm.bias', 'model.decoder.layers.20.self_attn_layer_norm.bias', 
'model.decoder.layers.8.final_layer_norm.weight', 'model.decoder.layers.10.final_layer_norm.bias', 'model.decoder.layers.10.final_layer_norm.weight', 'model.decoder.layers.15.self_attn_layer_norm.weight', 'model.decoder.layers.2.self_attn_layer_norm.bias', 'model.decoder.layers.21.final_layer_norm.bias', 'model.decoder.layers.23.final_layer_norm.bias', 'model.decoder.layers.12.self_attn_layer_norm.weight', 'model.decoder.layers.25.final_layer_norm.weight', 'model.decoder.layers.17.final_layer_norm.bias', 'model.decoder.layers.12.self_attn_layer_norm.bias', 'model.decoder.layers.21.self_attn_layer_norm.bias', 'model.decoder.layers.5.self_attn_layer_norm.weight', 'model.decoder.layers.8.final_layer_norm.bias', 'model.decoder.layers.21.self_attn_layer_norm.weight', 'model.decoder.layers.28.self_attn_layer_norm.bias', 'model.decoder.layers.24.self_attn_layer_norm.weight', 'model.decoder.layers.4.self_attn_layer_norm.weight', 'model.decoder.layers.0.final_layer_norm.weight', 'model.decoder.layers.6.final_layer_norm.bias', 'model.decoder.layers.5.final_layer_norm.weight', 'model.decoder.layers.10.self_attn_layer_norm.weight', 'model.decoder.layers.24.final_layer_norm.bias', 'model.decoder.layers.27.final_layer_norm.bias', 'model.decoder.layers.5.final_layer_norm.bias', 'model.decoder.layers.31.self_attn_layer_norm.weight', 'model.decoder.layers.30.self_attn_layer_norm.bias', 'model.decoder.layers.26.final_layer_norm.bias', 'model.decoder.layers.9.final_layer_norm.bias', 'model.decoder.layers.14.final_layer_norm.bias', 'model.decoder.layers.28.final_layer_norm.bias', 'model.decoder.layers.26.self_attn_layer_norm.weight', 'model.decoder.layers.17.self_attn_layer_norm.weight', 'model.decoder.layers.23.final_layer_norm.weight', 'model.decoder.layers.27.final_layer_norm.weight', 'model.decoder.layers.17.self_attn_layer_norm.bias', 'model.decoder.layers.19.self_attn_layer_norm.weight', 'model.decoder.layers.4.final_layer_norm.bias', 
'model.decoder.layers.1.final_layer_norm.bias', 'model.decoder.layers.12.final_layer_norm.weight', 'model.decoder.layers.2.final_layer_norm.weight', 'model.decoder.layers.6.self_attn_layer_norm.weight', 'model.decoder.layers.13.self_attn_layer_norm.weight', 'model.decoder.layers.14.final_layer_norm.weight', 'model.decoder.layers.9.self_attn_layer_norm.weight', 'model.decoder.layers.0.self_attn_layer_norm.bias', 'model.decoder.layers.16.final_layer_norm.bias', 'model.decoder.layers.18.final_layer_norm.weight', 'model.decoder.layers.23.self_attn_layer_norm.weight', 'model.decoder.layers.9.final_layer_norm.weight', 'model.decoder.layers.7.self_attn_layer_norm.weight', 'model.decoder.layers.22.final_layer_norm.weight', 'model.decoder.layers.20.self_attn_layer_norm.weight', 'model.decoder.layers.30.final_layer_norm.weight', 'model.decoder.layers.10.self_attn_layer_norm.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
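One way to confirm which tensors never made it into the shards is to diff the checkpoint's `weight_map` against the parameter names the model expects. A minimal helper (hypothetical, plain Python over the parsed `pytorch_model.bin.index.json`):

```python
def missing_from_checkpoint(expected_keys, index_json):
    """Diff expected parameter names against a sharded checkpoint index.

    index_json is the parsed pytorch_model.bin.index.json; returns the
    names that have no entry in its weight_map, i.e. were never written
    to any shard.
    """
    saved = set(index_json["weight_map"])
    return sorted(set(expected_keys) - saved)

# Toy index mirroring the shape of the file below:
index_json = {"weight_map": {
    "model.decoder.layers.0.fc1.weight": "pytorch_model-00001-of-00002.bin",
}}
expected = [
    "model.decoder.layers.0.fc1.weight",
    "model.decoder.layers.0.final_layer_norm.weight",
]
missing = missing_from_checkpoint(expected, index_json)
# missing == ["model.decoder.layers.0.final_layer_norm.weight"]
```

Running this with `model.state_dict().keys()` from a freshly instantiated OPTForCausalLM as `expected_keys` should reproduce the missing-key list in the warning above.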

The sharded model checkpoint folder has the following pytorch_model.bin.index.json file:

{
  "metadata": {
    "total_size": 11119841280
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.embed_positions.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.19.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.20.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.22.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.23.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.24.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.25.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.26.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.27.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.28.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.29.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.30.fc1.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.fc1.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.fc2.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.fc2.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.fc1.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.fc1.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.fc2.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.fc2.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.31.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.decoder.layers.4.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.fc1.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.fc1.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.fc2.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.fc2.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.self_attn.k_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.self_attn.out_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.self_attn.out_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.self_attn.q_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.self_attn.v_proj.bias": "pytorch_model-00001-of-00002.bin",
    "model.decoder.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin"
  }
}

Expected behavior

The fine-tuned OPT-2.7B checkpoint should save and then load all model weights correctly; instead, several decoder layers are missing from the saved weight map.
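One quick way to confirm whether a sharded checkpoint is complete is to diff its `pytorch_model.bin.index.json` weight map against the parameter names the model expects. A minimal sketch using only the standard library (the function name and argument layout are illustrative, not part of the scripts above):

```python
import json


def keys_missing_from_index(index_path, expected_keys):
    """Return the expected parameter names that are absent from a sharded
    checkpoint's index file (pytorch_model.bin.index.json)."""
    with open(index_path) as f:
        # The index maps each parameter name to the shard file holding it.
        weight_map = json.load(f)["weight_map"]
    return sorted(set(expected_keys) - set(weight_map))
```

For OPT-2.7B, `expected_keys` could come from `model.state_dict().keys()` on a freshly initialized `facebook/opt-2.7b` model; a non-empty result means some layers were dropped during saving.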
@sgugger
Collaborator

sgugger commented Feb 10, 2023

cc @pacman100

@pacman100
Contributor

Hello @lvnair3, could you make the following change in run_clm_no_trainer.py and let us know if that fixes the issue:

  unwrapped_model.save_pretrained(
      args.output_dir,
      is_main_process=accelerator.is_main_process,
      save_function=accelerator.save,
+     state_dict=accelerator.get_state_dict(model),
  )
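The full saving block with this fix applied could be wrapped up as below. This is a sketch: `save_full_checkpoint` is a hypothetical helper, not part of the script; it assumes the standard Accelerate API (`wait_for_everyone`, `unwrap_model`, `get_state_dict`, `save`):

```python
def save_full_checkpoint(accelerator, model, output_dir):
    """Save a full (unsharded) checkpoint from a model prepared with
    accelerate + FSDP, gathering all shards onto the main process first."""
    accelerator.wait_for_everyone()
    # get_state_dict() consolidates the FULL_STATE_DICT across ranks;
    # without it, save_pretrained() sees only the local shard, so some
    # layers are silently dropped from the saved checkpoint.
    full_state_dict = accelerator.get_state_dict(model)
    unwrapped = accelerator.unwrap_model(model)
    unwrapped.save_pretrained(
        output_dir,
        is_main_process=accelerator.is_main_process,
        save_function=accelerator.save,
        state_dict=full_state_dict,
    )
```

The design point is that consolidation happens before `save_pretrained` is called, so the main process writes one complete set of weights rather than each rank writing its shard.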

@lvnair3
Author

lvnair3 commented Feb 10, 2023

@pacman100 That worked! Thank you!

@pacman100
Contributor

Great! I will document this in the Accelerate docs and in the interactive code explorer tool here https://huggingface.co/docs/accelerate/usage_guides/explore to prevent others from running into this issue.

Feel free to close this issue 😄

@pacman100 pacman100 added the solved The bug or feature request has been solved, but the issue is still opened label Feb 10, 2023
@lvnair3 lvnair3 closed this as completed Feb 10, 2023
@valentas-kurauskas

Saving in other scripts is broken too, and the proposed change fixes them as well, for example run_summarization_no_trainer.py. I could make a PR, but I don't have permission.

@muellerzr
Collaborator

@valentas-kurauskas you need to fork the repo and open a PR to do so. (And would be welcomed!)
