Can't figure out how to use this. Is eval_llama.py meant for LLaMA-based models? I'm getting a lot of errors and don't know how to fix them #49

Open
starevelyn opened this issue Aug 11, 2023 · 3 comments


@starevelyn

If I want to evaluate a llama-7b model trained on my own data, is the command line:
torchrun --nproc_per_node 8 \
    code/evaluator_series/eval_llama.py \
    --ckpt_dir [PATH TO CKPT] \
    --param_size 7 \
    --few_shot \
    --cot \
    --ntrain 5 \
    --subject [SUBJECT NAME]

Then I get an error:
[error screenshot]
Do I need to add local_rank, world_size and so on to the command-line arguments?
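
(For reference, torchrun hands local_rank and world_size to each worker through environment variables rather than command-line flags; below is a minimal sketch, not taken from the C-Eval repo, of how a launched script typically reads them, assuming it sets up the process group itself.)

import os

import torch
import torch.distributed as dist

# torchrun exports LOCAL_RANK, RANK, WORLD_SIZE (plus MASTER_ADDR/MASTER_PORT)
# to every worker process it spawns.
local_rank = int(os.environ.get("LOCAL_RANK", 0))   # per-node process index
world_size = int(os.environ.get("WORLD_SIZE", 1))   # total number of processes

dist.init_process_group(backend="nccl")  # env:// init picks up the torchrun variables
torch.cuda.set_device(local_rank)        # bind this process to its GPU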

@starevelyn
Author

Is there a complete example or tutorial for this?

@yuanzhiyong1999

Have you found a solution yet?

@entropy2333

You should use this library for evaluation: https://github.com/EleutherAI/lm-evaluation-harness

git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness/
pip install -e .

Evaluation script:

python main.py --model hf-causal-experimental \
    --model_args pretrained=/path/to/model \
    --tasks Ceval-valid-computer_network \
    --device cuda:0
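
Note: on the harness version current at the time, --tasks accepts a comma-separated list (e.g. several Ceval-valid-* subjects) and --num_fewshot 5 gives the usual five-shot C-Eval setting; run python main.py --help on your checkout to confirm the exact flags.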
