# RegMix Evaluation

## Using lm-eval-harness

First, install the `lm-eval` package from its GitHub repository:

```shell
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

Then use `eval_lm_eval_harness.sh` to evaluate your model. Remember to modify `model_args` in the script so it points to your own checkpoint.
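For reference, a typical `lm-eval` invocation inside such a script looks like the sketch below. The model path and task list are placeholders you should replace with your own; the flags shown (`--model`, `--model_args`, `--tasks`, `--batch_size`) are standard `lm_eval` CLI options.

```shell
# Evaluate a local Hugging Face checkpoint with lm-eval-harness.
# Replace "path/to/your-regmix-model" and the task list with your own values.
lm_eval --model hf \
    --model_args pretrained=path/to/your-regmix-model \
    --tasks lambada_openai,hellaswag \
    --batch_size 8
```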