Loading a QLoRA fine-tuned ChatGLM3 model via ADAPTER_MODEL_PATH fails #200

Closed

Yuanye-F opened this issue Dec 13, 2023 · 2 comments

Yuanye-F commented Dec 13, 2023

The following items must be checked before submission

  • Make sure you are using the latest code from the repository (git pull); some issues have already been addressed and fixed.
  • I have read the FAQ section of the project documentation and searched the existing issues/discussions; I did not find a similar problem or solution.

Type of problem

Model inference and deployment

Operating system

Linux

Detailed description of the problem

The .env file is as follows:

# Server port
PORT=8051

# Model name
MODEL_NAME=chatglm3
# Set MODEL_PATH to the directory containing our chatglm3 weights
MODEL_PATH=/Algorithm/LLM/ChatGLM3/weights/chatglm3-6b
ADAPTER_MODEL_PATH=/Algorithm/LLM/LLaMA-Factory/saves/ChatGLM3-6B-Chat/lora/self_cognition
# PROMPT_NAME=chatglm3

# device related
# GPU parallelization strategy
# DEVICE_MAP=auto
# Number of GPUs
NUM_GPUs=1
# GPU index
GPUS='1'

# vllm related
# Enable half precision to speed up inference and reduce GPU memory usage
DTYPE=half

# api related
# API prefix
API_PREFIX=/v1

# API_KEY; any string works here
OPENAI_API_KEY='EMPTY'
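
For reference, what these settings ask the server to do is roughly: load the base checkpoint from MODEL_PATH, then attach the LoRA adapter from ADAPTER_MODEL_PATH via peft. A minimal sketch of that flow, assuming standard transformers/peft APIs rather than api-for-open-llm's actual loader code:

# Minimal sketch (assumed, not copied from api-for-open-llm): load the base
# ChatGLM3 checkpoint and attach the LoRA adapter exported by LLaMA-Factory.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_path = "/Algorithm/LLM/ChatGLM3/weights/chatglm3-6b"
adapter_path = "/Algorithm/LLM/LLaMA-Factory/saves/ChatGLM3-6B-Chat/lora/self_cognition"

# ChatGLM3 ships custom modeling/tokenization code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)
model = AutoModel.from_pretrained(base_path, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_path).eval()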

Dependencies

peft                          0.6.2
sentence-transformers         2.2.2
torch                         2.0.1
torchvision                   0.15.2
transformers                  4.33.2
transformers-stream-generator 0.0.4

Runtime logs or screenshots

Traceback (most recent call last):
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/server.py", line 2, in <module>
    from api.models import app, EMBEDDED_MODEL, GENERATE_ENGINE
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/api/models.py", line 142, in <module>
    GENERATE_ENGINE = create_generate_model()
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/api/models.py", line 48, in create_generate_model
    model, tokenizer = load_model(
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/api/adapter/model.py", line 316, in load_model
    model, tokenizer = adapter.load_model(
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/api/adapter/model.py", line 69, in load_model
    tokenizer = self.tokenizer_class.from_pretrained(
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 723, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1854, in from_pretrained
    return cls._from_pretrained(
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2017, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/zp/.cache/huggingface/modules/transformers_modules/self_cognition_gy_train_2023-12-13-10-24-44/tokenization_chatglm.py", line 93, in __init__
    super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs)
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 347, in __init__
    super().__init__(**kwargs)
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1561, in __init__
    super().__init__(**kwargs)
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 847, in __init__
    setattr(self, key, value)
AttributeError: can't set attribute 'eos_token'
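
The traceback shows the failure happens while instantiating the tokenizer from the custom tokenization_chatglm.py cached from the adapter export (the module name self_cognition_gy_train_... matches ADAPTER_MODEL_PATH), before any weights are loaded. A small debugging sketch, independent of the API server, to check whether it is the adapter's tokenizer files or the base checkpoint's that break under the installed transformers version (paths taken from the .env above):

# Debugging sketch: try the tokenizer from the adapter directory and from the
# base checkpoint separately to see which tokenization_chatglm.py fails.
from transformers import AutoTokenizer

for path in (
    "/Algorithm/LLM/LLaMA-Factory/saves/ChatGLM3-6B-Chat/lora/self_cognition",
    "/Algorithm/LLM/ChatGLM3/weights/chatglm3-6b",
):
    try:
        AutoTokenizer.from_pretrained(path, trust_remote_code=True)
        print("OK:", path)
    except Exception as exc:
        print("FAILED:", path, "->", repr(exc))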
xusenlinzy (Owner) commented
This is probably a transformers version issue.
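
One hedged thing to check along these lines: this exact "can't set attribute 'eos_token'" error usually means the tokenization_chatglm.py saved with the checkpoint defines eos_token/pad_token as read-only properties, while the accompanying tokenizer_config.json still lists them, so transformers tries to set them during __init__ (the setattr call in the traceback) and fails. If that is the case here, replacing the tokenizer files in the adapter directory with the ones from an up-to-date chatglm3-6b checkpoint, or aligning the transformers version with the one the adapter was exported under, may resolve it. A sketch of the file swap, purely as an assumption to try and not confirmed in this thread:

# Assumption to try: copy the base checkpoint's tokenizer files over the ones
# exported with the adapter, so both use the same tokenization_chatglm.py.
# Consider backing up the adapter's originals before running this.
import shutil
from pathlib import Path

base = Path("/Algorithm/LLM/ChatGLM3/weights/chatglm3-6b")
adapter = Path("/Algorithm/LLM/LLaMA-Factory/saves/ChatGLM3-6B-Chat/lora/self_cognition")

for name in ("tokenization_chatglm.py", "tokenizer_config.json", "tokenizer.model"):
    src = base / name
    if src.exists():
        shutil.copy2(src, adapter / name)  # overwrite the adapter's copy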
