Commit
add a note for missing dependencies (EleutherAI#2336)
eldarkurtic authored and jmercat committed Sep 25, 2024
1 parent 82c81e3 commit 5085bdb
Showing 1 changed file with 9 additions and 0 deletions.
9 changes: 9 additions & 0 deletions lm_eval/tasks/leaderboard/README.md
@@ -13,6 +13,15 @@ As we want to evaluate models across capabilities, the list currently contains:

Details on the choice of those evals can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/blog)!

## Install
To install the `lm-eval` package with support for leaderboard evaluations, run:

```bash
git clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e ".[math,ifeval,sentencepiece]"
```
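
Once installed, the harness is driven from the command line. The snippet below is a minimal usage sketch, assuming the `leaderboard` task group name from this tasks directory and a placeholder Hugging Face model identifier; adjust the model and output path for your setup.

```bash
# Sketch: run the full leaderboard task group on a Hugging Face model.
# The model name and output path are placeholders; the `leaderboard`
# group name is assumed from this tasks directory.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Meta-Llama-3-8B-Instruct \
    --tasks leaderboard \
    --batch_size auto \
    --output_path results
```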

## BigBenchHard (BBH)

A suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH).
