OLMo-Eval is a repository for evaluating open language models.
NOTE: This repository has been superseded by the OLMES repository, available at https://github.com/allenai/olmes (Open Language Model Evaluation System).
The olmo_eval framework is a way to run evaluation pipelines for language models on NLP tasks. The codebase is extensible and contains task_sets and example configurations, which run a series of tango steps for computing the model outputs and metrics.
Using this pipeline, you can evaluate m models on t task_sets, where each task_set consists of one or more individual tasks.
Using task_sets allows you to compute aggregate metrics over multiple tasks. The optional google-sheet integration can be used for reporting.
The pipeline is built using ai2-tango and ai2-catwalk.
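The sketch below is purely conceptual (the model list, task names, and aggregation are illustrative, not the pipeline's actual code); it shows the m × t structure: every model is evaluated on every task_set, and each task_set's per-task metrics are reduced to an aggregate score.

```python
# Conceptual sketch of the m-models x t-task_sets evaluation matrix.
# Model names, task names, and the aggregation below are illustrative only;
# the real pipeline derives this structure from the jsonnet config and runs
# each piece as a tango step via catwalk.
models = ["EleutherAI/pythia-1b", "EleutherAI/pythia-160m"]   # m models
task_sets = {                                                 # t task_sets
    "gen_tasks": ["drop", "naturalqs"],
    "standard_benchmarks": ["arc_easy", "hellaswag"],
}

def evaluate(model: str, task: str) -> float:
    """Stand-in for evaluating one model on one task (returns a dummy metric)."""
    return 0.0

aggregates = {}
for model in models:
    for set_name, tasks in task_sets.items():
        per_task = {task: evaluate(model, task) for task in tasks}
        # Aggregate metric for the task_set: a simple mean over its tasks.
        aggregates[(model, set_name)] = sum(per_task.values()) / len(per_task)

print(aggregates)
```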
After cloning the repository, please run:

```bash
conda create -n eval-pipeline python=3.10
conda activate eval-pipeline
cd OLMo-Eval
pip install -e .
```
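If you want to confirm the editable install succeeded, a quick sanity check (assuming the package exposes the top-level olmo_eval module, as the framework name suggests) is:

```python
# Minimal post-install check: import the package installed by `pip install -e .`
# (assumes the top-level module is named `olmo_eval`).
import olmo_eval

print("olmo_eval imported from:", olmo_eval.__file__)
```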
The current task_sets can be found at configs/task_sets. In this example, we run gen_tasks on EleutherAI/pythia-1b. The example config is configs/example_config.jsonnet. The configuration can be run as follows:
```bash
tango --settings tango.yml run configs/example_config.jsonnet --workspace my-eval-workspace
```
This executes all the steps defined in the config and saves them in a local tango workspace called my-eval-workspace. If you add a new task_set or model to your config and run the same command again, it will reuse the previous outputs and only compute the new ones.
The output should look like this:
(Screenshot: output of the tango run, listing the executed steps.)
New models and datasets can be added by modifying the example configuration.
The results of any step can be loaded from the workspace for further analysis:

```python
from tango import Workspace

workspace = Workspace.from_url("local://my-eval-workspace")
result = workspace.step_result("combine-all-outputs")

# Load individual task results with per-instance outputs
result = workspace.step_result("outputs_pythia-1bstep140000_gen_tasks_drop")
```
The eval_table config evaluates falcon-7b, mpt-7b, llama2-7b, and llama2-13b on standard_benchmarks and MMLU. Run as follows:
```bash
tango --settings tango.yml run configs/eval_table.jsonnet --workspace my-eval-workspace
```
This repository was also used to run evaluations for the PALOMA paper. Details on running the evaluation on PALOMA can be found here.