ChatGLM3 fine-tuning completes and the model exports successfully, but the exported model cannot be loaded #1307
Comments
What does the exported directory look like?
Looks like a Windows path-recognition issue.
Is there any way to fix it?
Anyone around? I can only load the model on Windows; I have no other options.
We recommend using WSL.
Still happens, even after installing Ubuntu on my PC.
I hit the same problem under WSL.
+1, same problem here. After exporting, the model cannot be loaded on Ubuntu 22.04:
FlashAttention-2 is not installed, ignore this if you are not using FlashAttention.
10/31/2023 14:53:23 - WARNING - llmtuner.tuner.core.loader - Checkpoint is not found at evaluation, load the original model.
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/doucai/.vscode-server/extensions/ms-python.python-2023.18.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/doucai/.vscode-server/extensions/ms-python.python-2023.18.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/doucai/.vscode-server/extensions/ms-python.python-2023.18.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/doucai/.vscode-server/extensions/ms-python.python-2023.18.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/doucai/.vscode-server/extensions/ms-python.python-2023.18.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/doucai/.vscode-server/extensions/ms-python.python-2023.18.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/doucai/LLaMA-Factory/src/api_demo.py", line 14, in <module>
main()
File "/home/doucai/LLaMA-Factory/src/api_demo.py", line 7, in main
chat_model = ChatModel()
File "/home/doucai/LLaMA-Factory/src/llmtuner/chat/stream_chat.py", line 15, in __init__
self.model, self.tokenizer = load_model_and_tokenizer(model_args, finetuning_args)
File "/home/doucai/LLaMA-Factory/src/llmtuner/tuner/core/loader.py", line 71, in load_model_and_tokenizer
tokenizer = AutoTokenizer.from_pretrained(
File "/home/doucai/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 738, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/doucai/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2017, in from_pretrained
return cls._from_pretrained(
File "/home/doucai/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2249, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/doucai/.cache/huggingface/modules/transformers_modules/xhs_merge/tokenization_chatglm.py", line 93, in __init__
super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs)
File "/home/doucai/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 363, in __init__
super().__init__(**kwargs)
File "/home/doucai/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1604, in __init__
super().__init__(**kwargs)
File "/home/doucai/LLaMA-Factory/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 861, in __init__
setattr(self, key, value)
AttributeError: can't set attribute 'eos_token'
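For context on why the traceback ends this way: the exported tokenizer_config.json contains an `eos_token` entry, which transformers passes back into the tokenizer's `__init__` as a keyword argument and assigns via `setattr`. If the tokenizer class defines `eos_token` as a read-only property, that assignment fails. The sketch below reproduces only the failure mode with a hypothetical class, not the real ChatGLMTokenizer:

```python
# Minimal reproduction of the failure mode. TokenizerSketch is a hypothetical
# stand-in: a read-only property cannot be assigned via setattr, which is what
# the transformers base __init__ does with leftover init kwargs.
class TokenizerSketch:
    @property
    def eos_token(self):  # property with no setter -> read-only
        return "</s>"

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)  # mirrors the assignment in the traceback

try:
    # The kwarg comes from the exported tokenizer_config.json in the real case.
    TokenizerSketch(eos_token="</s>")
except AttributeError as e:
    print(f"AttributeError: {e}")
```

This is why copying the original model's tokenizer_config.json (which lacks the offending entry) over the exported one makes the error disappear.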
Copy every file from the source model directory, except the .bin weight files and pytorch_model.bin.index.json, into the export directory, overwriting the existing files.
That leads to this error: THUDM/ChatGLM3#152 (comment)
@ganting #1307 (comment) How did you solve this problem?
@nansanhao |
@ghx2757 Copy the original model's tokenizer_config.json into the new model directory, overwriting the one there.
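The suggested fix can be scripted as below. The directory names are placeholders for your own base-model and export paths, not paths the project defines:

```python
# Sketch of the fix: restore the base model's tokenizer_config.json into the
# export directory. Paths are hypothetical; adjust to your setup.
import shutil
from pathlib import Path

def restore_tokenizer_config(base_model_dir: str, export_dir: str) -> None:
    """Overwrite the exported tokenizer_config.json with the original one."""
    src = Path(base_model_dir) / "tokenizer_config.json"
    dst = Path(export_dir) / "tokenizer_config.json"
    shutil.copyfile(src, dst)

# Example (placeholder paths):
# restore_tokenizer_config("chatglm3-6b", "exported_model")
```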
I tried it the way you described
I did full-parameter fine-tuning.
OK, thanks, I'll keep looking into it.
Hi, did you ever solve this? I also fixed the loading problem with the author's method, but the fine-tuning effect is lost during inference, which differs from the web demo: in this project's web demo, inference after loading does show the fine-tuning effect.
@migrant620 #1307 (comment) Not yet... still working on it.
@migrant620 You need to add the system prompt. LLaMA-Factory/src/llmtuner/data/template.py Lines 364 to 367 in 3378337
Newbie question: to add the system prompt, do I edit that code directly, or add a system-prompt field to the dataset?
@hiyouga Thanks! So that means every query needs a role=system instruction, right? What I actually want is to change the model's self-identity through fine-tuning rather than via a system prompt; that was possible with ChatGLM2.
@migrant620 What you changed is still fine-tuning; I'm only saying you should include the original system prompt at inference time.
What I changed is the fine-tuning, e.g. my fine-tuning corpus is meant to change the model's self-identity. If the system prompt isn't included at inference, the fine-tuning appears to have no effect.
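The advice above is to prepend at inference the same system prompt the template used during fine-tuning. A hedged sketch of what that looks like, assuming the ChatGLM3 chat format with `<|system|>` / `<|user|>` / `<|assistant|>` role tokens (the system text below is illustrative, not the project's exact default):

```python
# Sketch: build a ChatGLM3-style prompt that carries the training-time system
# prompt into inference. Role tokens and system text are assumptions.
def build_chatglm3_prompt(query: str, system: str) -> str:
    return f"<|system|>\n{system}\n<|user|>\n{query}\n<|assistant|>\n"

prompt = build_chatglm3_prompt(
    query="Who are you?",
    system="You are ChatGLM3, a large language model trained by Zhipu.AI.",
)
```

The point is that if fine-tuning samples were serialized with a system segment, inference without that segment presents the model with a distribution it was never trained on, so the fine-tuned behavior may not surface.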
Hi, did you solve it? I'm running into this problem too.
Same problem here.
I'm also fine-tuning.
I get a NotImplementedError.
Hi, has this problem been resolved?
The error is gone, but the fine-tuned self-identity no longer shows any effect.
@hiyouga Hi, overwriting with those files makes the fine-tuning lose its effect. How should I solve this?
Same problem: property 'eos_token' of 'ChatGLMTokenizer'