Commit bc50a9a: add a note for missing dependencies (EleutherAI#2336)

Authored by eldarkurtic on Sep 24, 2024 · 1 parent d7734d1

1 changed file with 9 additions and 0 deletions: `lm_eval/tasks/leaderboard/README.md`
@@ -13,6 +13,15 @@

As we want to evaluate models across capabilities, the list currently contains:

Details on the choice of these evals can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/blog)!

## Install
To install the `lm-eval` package with support for leaderboard evaluations, run:

```bash
git clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e ".[math,ifeval,sentencepiece]"
```
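Once the extras are installed, the harness can be pointed at the leaderboard tasks from the command line. A minimal sketch, with assumptions flagged: the model name below is a placeholder (substitute any Hugging Face model), and this assumes the `leaderboard` task group registered by this directory:

```shell
# Evaluate a Hugging Face model on the leaderboard task group.
# "HuggingFaceTB/SmolLM-135M" is a placeholder model; swap in your own.
lm_eval --model hf \
    --model_args pretrained=HuggingFaceTB/SmolLM-135M \
    --tasks leaderboard \
    --batch_size auto \
    --output_path results/
```

Without the `math`, `ifeval`, and `sentencepiece` extras from the install step above, some leaderboard subtasks will fail at load time with missing-dependency errors, which is what this commit documents.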

## BigBenchHard (BBH)

A suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH).
