If I want to evaluate a llama-7b model trained on my own data, is the command line:

torchrun --nproc_per_node 8 code/evaluator_series/eval_llama.py \
    --ckpt_dir [PATH TO CKPT] \
    --param_size 7 \
    --few_shot \
    --cot \
    --ntrain 5 \
    --subject [SUBJECT NAME]

It then reports an error. Do I need to add local_rank, world_size and similar arguments to the command line?
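As a point of clarification on the question: torchrun does not require local_rank or world_size to be passed as command-line arguments; it exports them as environment variables to each worker process. A minimal sketch of how a script launched under torchrun would typically read them (the function name here is illustrative, not from the eval_llama.py source):

```python
import os

def get_dist_info():
    """Read the distributed-launch variables that torchrun exports.

    When a script is started with `torchrun --nproc_per_node 8 ...`,
    each worker process receives LOCAL_RANK, RANK and WORLD_SIZE as
    environment variables, so they normally do not appear in argv.
    Defaults here cover single-process (non-torchrun) runs.
    """
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    return local_rank, rank, world_size
```

If the evaluation script instead expects these as CLI flags, that would be specific to the script itself, not to torchrun.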
Is there a complete example or tutorial for this?
Have you found a solution?
You should use this library for evaluation: https://github.com/EleutherAI/lm-evaluation-harness
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness/
pip install -e .
Evaluation script:

python main.py --model hf-causal-experimental \
    --model_args pretrained=/path/to/model \
    --tasks Ceval-valid-computer_network \
    --device cuda:0
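For reference, the harness's --tasks flag accepts a comma-separated list, so several C-Eval subjects can be run in one invocation; a sketch (the second task name and the --num_fewshot value are assumptions based on the harness's C-Eval task naming, not taken from this thread):

```shell
# Assumed variant: evaluate two C-Eval subjects in one run, 5-shot.
# Task names follow the Ceval-valid-<subject> pattern shown above.
python main.py --model hf-causal-experimental \
    --model_args pretrained=/path/to/model \
    --tasks Ceval-valid-computer_network,Ceval-valid-operating_system \
    --num_fewshot 5 \
    --device cuda:0
```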