Problem running the model #23
Comments
Did you manage to get it running? The requirements say the GPU needs more than 15 GB of memory; is that a hard requirement? Does that mean I can't run it on a thin-and-light laptop?
In that case, what hardware is typically used to train and run large models like this?
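For rough intuition on why a 15 GB+ GPU is listed (this is my own back-of-the-envelope arithmetic, not a figure from the maintainers): just storing the weights of a 7B-parameter model takes about 2 bytes per parameter in half precision, which already lands close to the stated bar before activations and the KV cache are counted.

```python
# Weight-only VRAM estimate for a model with n_params parameters.
# Actual usage is higher: activations, KV cache, and framework overhead
# are not included in this sketch.
def estimate_vram_gb(n_params: float, bytes_per_param: int) -> float:
    """Return the weight-only memory footprint in GiB."""
    return n_params * bytes_per_param / (1024 ** 3)

fp16_gb = estimate_vram_gb(7e9, 2)   # half precision: 2 bytes per weight
fp32_gb = estimate_vram_gb(7e9, 4)   # full precision: 4 bytes per weight
print(f"fp16 weights: ~{fp16_gb:.1f} GiB")   # ~13.0 GiB
print(f"fp32 weights: ~{fp32_gb:.1f} GiB")   # ~26.1 GiB
```

By this estimate, a typical thin-and-light laptop GPU (often 4-8 GB of VRAM) cannot hold the fp16 weights, while a 24 GB card like a 3090 can.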
Hello, could you provide the traceback?
Here is a screenshot of the terminal run.
Also, if I drop all the flags and just run python -m fastchat.serve.cli, it does download a model (about 12 GB). Is that your MING model? And how do you launch the demo shown on your GitHub page?
P.S. Hardware: one RTX 3090 discrete GPU with 24 GB of VRAM.
On 2024-05-07 14:03:42, "BlueZeros" ***@***.***> wrote:
Hello, could you provide the traceback?
Hi, I can't see the screenshot you pasted. Could you take a look at the issue I created and see whether it's the same problem as mine? Thanks.
MING 7B:
# --model-path: path to the model checkpoint
# --conv_template: prompt template
# --max-new-token: maximum output length
# --beam-size: beam search width
# --temperature: sampling temperature
python -m fastchat.serve.cli \
    --model-path {path_to_checkpoint} \
    --conv_template bloom \
    --max-new-token 512 \
    --beam-size 3 \
    --temperature 1.2
The terminal reports an error here.
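One possible source of the error (an assumption on my part, not confirmed by the traceback): in a POSIX shell, a `#` comment cannot follow a line-continuation backslash. Writing `--model-path ckpt \ # model path` makes the backslash escape the following space instead of the newline, so the command ends on that line and the remaining flags are never passed. A minimal illustration, using `echo` in place of the real command:

```shell
# Working form: the backslash is the last character on each continued line,
# so all arguments reach the command; comments go on their own lines.
result="$(echo building command \
    --model-path ckpt \
    --conv_template bloom)"
echo "$result"
```

If the flags were copied with trailing `\ # ...` comments, removing the inline comments (or moving them above the command) may be enough to fix the invocation.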