RM model tokenizer loading resource contention during single-machine multi-GPU ChatGLM3 PPO #1570

Closed
1 task done
Maxpa1n opened this issue Nov 20, 2023 · 3 comments
Labels
duplicate This issue or pull request already exists

Comments

@Maxpa1n

Maxpa1n commented Nov 20, 2023

Reminder

  • I have read the README and searched the existing issues.

Reproduction

During single-machine multi-GPU ChatGLM3 PPO training, loading the RM model tokenizer fails with an error (suspected resource contention):

11/20/2023 14:42:03 - INFO - llmtuner.model.adapter - Loaded fine-tuned model from checkpoint(s): /home/user123/model/chatglm3-alpaca-exp
Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
11/20/2023 14:42:03 - WARNING - llmtuner.model.utils - Provided path (/home/user123/model/chatglm3-alpaca-exp) does not contain valuehead weights.
11/20/2023 14:42:03 - INFO - llmtuner.model.loader - trainable params: 0 || all params: 6243588097 || trainable%: 0.0000
11/20/2023 14:42:03 - INFO - llmtuner.model.loader - This IS expected that the trainable params is 0 if you are using model for inference only.
11/20/2023 14:42:03 - INFO - llmtuner.train.utils - Created reference model from the model itself.
****0********
[INFO|tokenization_utils_base.py:2013] 2023-11-20 14:42:03,043 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:2013] 2023-11-20 14:42:03,043 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2013] 2023-11-20 14:42:03,043 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2013] 2023-11-20 14:42:03,043 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2013] 2023-11-20 14:42:03,043 >> loading file tokenizer.json
Traceback (most recent call last):
  File "/home/user123/project/LLaMA-Factory-main/src/train_bash.py", line 14, in <module>
    main()
  File "/home/user123/project/LLaMA-Factory-main/src/train_bash.py", line 5, in main
    run_exp()
  File "/home/user123/project/LLaMA-Factory-main/src/llmtuner/train/tuner.py", line 30, in run_exp
    run_ppo(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/home/user123/project/LLaMA-Factory-main/src/llmtuner/train/ppo/workflow.py", line 39, in run_ppo
    reward_model = create_reward_model(model, model_args, finetuning_args)
  File "/home/user123/project/LLaMA-Factory-main/src/llmtuner/train/utils.py", line 77, in create_reward_model
    reward_model, _ = load_model_and_tokenizer(reward_model_args, reward_finetuning_args, is_trainable=False, stage="ppo")
  File "/home/user123/project/LLaMA-Factory-main/src/llmtuner/model/loader.py", line 75, in load_model_and_tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 738, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2017, in from_pretrained
    return cls._from_pretrained(
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2249, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user123/.cache/huggingface/modules/transformers_modules/chatglm3-rm/tokenization_chatglm.py", line 93, in __init__
    super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 363, in __init__
    super().__init__(**kwargs)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1604, in __init__
    super().__init__(**kwargs)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 861, in __init__
    setattr(self, key, value)
AttributeError: can't set attribute 'eos_token'
Running tokenizer on dataset:  16%|██████████████▉                                                                            | 8000/48818 [00:06<00:31, 1308.07 examples/s][2023-11-20 14:42:05,081] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 57651
[2023-11-20 14:42:05,082] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 57652
Running tokenizer on dataset:  18%|████████████████▊                                                                          | 9000/48818 [00:06<00:28, 1390.41 examples/s][2023-11-20 14:42:05,382] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 57653
[2023-11-20 14:42:05,621] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 57654
Running tokenizer on dataset:  18%|████████████████▊                                                                          | 9000/48818 [00:06<00:29, 1344.26 examples/s][2023-11-20 14:42:05,899] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 57655
Running tokenizer on dataset:  20%|██████████████████▍                                                                       | 10000/48818 [00:07<00:27, 1392.86 examples/s][2023-11-20 14:42:06,176] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 57656
Running tokenizer on dataset:  20%|██████████████████▍                                                                       | 10000/48818 [00:07<00:28, 1371.33 examples/s][2023-11-20 14:42:06,454] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 57657
Running tokenizer on dataset:  23%|████████████████████▎                                                                     | 11000/48818 [00:07<00:27, 1396.45 examples/s][2023-11-20 14:42:06,732] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 57658
[2023-11-20 14:42:07,009] [ERROR] [launch.py:321:sigkill_handler] ['/home/user123/miniconda3/envs/llm/bin/python', '-u', 'src/train_bash.py', '--local_rank=7', '--deepspeed', 'ds/ds_config.json', '--stage', 'ppo', '--model_name_or_path', '/home/user123/model/chatglm3-6b-base', '--do_train', '--dataset', 'alpaca_gpt4_zh', '--template', 'chatglm3', '--resume_lora_training', 'False', '--finetuning_type', 'full', '--reward_model_type', 'full', '--checkpoint_dir', '/home/user123/model/chatglm3-alpaca-exp', '--reward_model', '/home/user123/model/chatglm3-rm', '--output_dir', '/home/user123/model/chatglm3-ppo/', '--overwrite_cache', '--per_device_train_batch_size', '2', '--gradient_accumulation_steps', '4', '--lr_scheduler_type', 'cosine', '--logging_steps', '10', '--save_steps', '1000', '--learning_rate', '1e-5', '--num_train_epochs', '1.0', '--plot_loss', '--fp16'] exits with return code = 1

RM model directory:

-rw-rw-r--. 1 user123 user123         166 Nov 20 11:52 all_results.json
-rw-rw-r--. 1 user123 user123        1471 Nov 20 11:52 config.json
-rw-rw-r--. 1 user123 user123        2332 Nov 20 11:52 configuration_chatglm.py
-rw-rw-r--. 1 user123 user123         111 Nov 20 11:52 generation_config.json
-rw-rw-r--. 1 user123 user123 13019923846 Nov 20 11:52 pytorch_model.bin
-rw-rw-r--. 1 user123 user123        1248 Nov 20 11:52 README.md
-rw-rw-r--. 1 user123 user123         331 Nov 20 11:52 special_tokens_map.json
-rw-rw-r--. 1 user123 user123       11279 Nov 20 11:52 tokenization_chatglm.py
-rw-rw-r--. 1 user123 user123         893 Nov 20 11:52 tokenizer_config.json
-rw-rw-r--. 1 user123 user123     1018370 Nov 20 11:52 tokenizer.model
-rw-rw-r--. 1 user123 user123       14085 Nov 20 11:52 trainer_log.jsonl
-rw-rw-r--. 1 user123 user123        7343 Nov 20 11:52 trainer_state.json
-rw-rw-r--. 1 user123 user123        5880 Nov 20 11:52 training_args.bin
-rw-rw-r--. 1 user123 user123       34661 Nov 20 11:52 training_loss.png
-rw-rw-r--. 1 user123 user123         166 Nov 20 11:52 train_results.json

Training arguments:
deepspeed --num_gpus 8 src/train_bash.py \
    --deepspeed ds/ds_config.json \
    --stage ppo \
    --model_name_or_path /home/user123/model/chatglm3-6b-base \
    --do_train \
    --dataset alpaca_gpt4_zh \
    --template chatglm3 \
    --resume_lora_training False \
    --finetuning_type full \
    --reward_model_type full \
    --checkpoint_dir /home/user123/model/chatglm3-alpaca-exp \
    --reward_model /home/user123/model/chatglm3-rm \
    --output_dir /home/user123/model/chatglm3-ppo/ \
    --overwrite_cache \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16

Expected behavior

No response

System Info

No response

Others

No response

@Maxpa1n
Author

Maxpa1n commented Nov 20, 2023

The same error occurs on a single GPU as well:

CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage ppo \
    --model_name_or_path /home/user123/model/chatglm3-6b-base \
    --do_train \
    --dataset alpaca_gpt4_zh \
    --template chatglm3 \
    --finetuning_type full \
    --reward_model_type full \
    --checkpoint_dir /home/user123/model/chatglm3-alpaca-exp \
    --reward_model /home/user123/model/chatglm3-rm \
    --output_dir /home/user123/model/chatglm3-ppo \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16

@Maxpa1n
Author

Maxpa1n commented Nov 20, 2023

This minimal demo fails with the same error:

from transformers import AutoModel, AutoTokenizer

# Loading the tokenizer alone is enough to reproduce the error (model loading commented out)
#model = AutoModel.from_pretrained("/home/user123/model/chatglm3-rm", device_map="auto", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("/home/user123/model/chatglm3-rm", trust_remote_code=True)

#print(model)
print(tokenizer)

Error:

Traceback (most recent call last):
  File "/home/user123/project/LLaMA-Factory-main/pred.py", line 5, in <module>
    tokenizer = AutoTokenizer.from_pretrained("/home/user123/model/chatglm3-rm", trust_remote_code=True)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 738, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2017, in from_pretrained
    return cls._from_pretrained(
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2249, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user123/.cache/huggingface/modules/transformers_modules/chatglm3-rm/tokenization_chatglm.py", line 93, in __init__
    super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 363, in __init__
    super().__init__(**kwargs)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1604, in __init__
    super().__init__(**kwargs)
  File "/home/user123/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 861, in __init__
    setattr(self, key, value)
AttributeError: can't set attribute 'eos_token'
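
For context, the last frame of the traceback is transformers' SpecialTokensMixin assigning special tokens from the saved tokenizer files with a plain setattr. That call fails if the custom tokenizer class exposes eos_token as a read-only property, which is the assumed situation in the checkpoint's bundled tokenization_chatglm.py. A minimal sketch of the same failure mode (DemoTokenizer is a hypothetical stand-in, not the real tokenizer):

class DemoTokenizer:
    # Mimics a tokenizer that exposes eos_token as a property without a setter.
    @property
    def eos_token(self):
        return "</s>"

tok = DemoTokenizer()
# SpecialTokensMixin.__init__ effectively does this for every special token it
# finds in tokenizer_config.json / special_tokens_map.json:
setattr(tok, "eos_token", "</s>")  # AttributeError: can't set attribute 'eos_token' (Python 3.10)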

@hiyouga
Owner

hiyouga commented Nov 20, 2023

#1307 (comment)
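
(Not a quote of #1307; a hedged sketch of one possible mitigation, assuming the failure comes from setattr hitting a read-only eos_token property in the checkpoint's tokenization_chatglm.py. The idea is to strip the special-token entries that were written into the reward-model checkpoint's tokenizer config files, so from_pretrained never tries to assign them. Paths are the ones from this issue.)

import json
from pathlib import Path

# Hypothetical workaround sketch: remove the special-token entries saved into the
# RM checkpoint so that AutoTokenizer.from_pretrained never calls
# setattr(tokenizer, "eos_token", ...) on the custom ChatGLM3 tokenizer.
rm_dir = Path("/home/user123/model/chatglm3-rm")
for name in ("tokenizer_config.json", "special_tokens_map.json"):
    path = rm_dir / name
    if not path.exists():
        continue
    data = json.loads(path.read_text(encoding="utf-8"))
    for key in ("eos_token", "pad_token", "unk_token", "bos_token"):
        data.pop(key, None)
    path.write_text(json.dumps(data, ensure_ascii=False, indent=2), encoding="utf-8")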

@hiyouga hiyouga added the duplicate This issue or pull request already exists label Nov 20, 2023
@hiyouga hiyouga closed this as completed Nov 20, 2023