TransformerRanker

Efficiently find the best-suited language model (LM) for your NLP task

A very simple library that helps you find the best-suited language model for your NLP task. Developed at Humboldt University of Berlin.



The problem: There are too many pre-trained language models (LMs) out there. But which one of them is best for your NLP classification task? Since fine-tuning LMs is costly, it is not possible to try them all!

The solution: Transferability estimation with TransformerRanker!


TransformerRanker is a library that

  • quickly finds the best-suited language model for a given NLP classification task. You only need to select a dataset and a list of pre-trained language models (LMs) from the 🤗 HuggingFace Hub. TransformerRanker will then use state-of-the-art methods for transferability estimation (Garbas et al., 2024) to quickly and effectively estimate which of these LMs will perform best on the given task!

  • efficiently performs layerwise analysis of LMs. Transformer LMs have many layers. In some use cases, you might want to know which intermediate layer is best-suited for a downstream task. TransformerRanker allows you to quickly perform a layerwise analysis to identify the best layers in a model (see the sketch at the end of the Quick Start below).


Quick Start

To install from PyPI, simply run:

pip install transformer-ranker
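
If you instead want the latest development version, installing directly from the GitHub repository with pip should also work:

pip install git+https://github.com/flairNLP/transformer-ranker.git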

Example 1: Find the best LM for Named Entity Recognition

We start with a simple example in which we want to find the best LM for English Named Entity Recognition (NER) on the popular CoNLL-03 dataset.

To keep this example simple, we use TransformerRanker to choose between two models: bert-base-cased and bert-base-uncased.

The full snippet to do so is as follows:

from datasets import load_dataset
from transformer_ranker import TransformerRanker

# Step 1: Load the CoNLL-03 dataset from HuggingFace
dataset = load_dataset('conll2003')

# Step 2: Define the LMs to choose from 
language_models = ["bert-base-cased", "bert-base-uncased"]

# Step 3: Initialize the ranker with the dataset 
ranker = TransformerRanker(dataset, dataset_downsample=0.2)

# ... and run the ranker to obtain the ranking
results = ranker.run(language_models, batch_size=64)

The first time you run this snippet, it will download the CoNLL-03 dataset and the two transformer LMs from HuggingFace. It will then run the estimation for both LMs. On a GPU-enabled Google Colab notebook, this should take only a minute or two.

Print the results with:

print(results)

This should print:

Rank 1. bert-base-uncased: 2.5935
Rank 2. bert-base-cased: 2.5137

This indicates that the uncased variant of BERT is likely to perform better on CoNLL-03!
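
To build some intuition for what the ranker just computed: transferability estimation scores how well a model's frozen embeddings separate the labels of your task, without any fine-tuning. The following is a minimal, self-contained sketch of that idea, simplified to sentence-level classification; the model choice, mean-pooling, and kNN estimator here are illustrative assumptions, not TransformerRanker's actual implementation.

import torch
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from transformers import AutoModel, AutoTokenizer

# Toy data: two topics, three sentences each
texts = [
    "the match ended in a draw",
    "the striker scored twice",
    "fans cheered the final goal",
    "shares fell sharply on Monday",
    "the central bank raised rates",
    "quarterly profits beat forecasts",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = sports, 1 = finance

# Embed the texts with a frozen LM (no fine-tuning)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    # Mean-pool the last hidden state into one vector per sentence
    embeddings = model(**batch).last_hidden_state.mean(dim=1)

# Higher cross-validated kNN accuracy means the frozen embeddings
# already separate the labels well, i.e. the model is likely a
# better starting point for fine-tuning on this task
knn = KNeighborsClassifier(n_neighbors=1)
score = cross_val_score(knn, embeddings.numpy(), labels, cv=3).mean()
print(f"Toy transferability score: {score:.3f}")

TransformerRanker does this kind of scoring with dedicated estimators and, for NER, at the level of individual tokens rather than whole sentences.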

Example 2: Really find the best LM (by analysing many LMs)

The first example only chooses between two LMs. But in practical use cases, you might want to choose between dozens of LMs.

To help you get started, we compiled two lists of popular LMs. (1) A 'base' list that contains 17 popular models of medium size. (2) A 'large' list that contains popular models of larger size. In our opinion, these lists contain good LMs to try.
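
For reference, you can inspect these lists directly. The sketch below assumes that prepare_popular_models() returns a plain list of HuggingFace model identifiers, as its usage in the next snippet suggests:

from transformer_ranker import prepare_popular_models

# Both lists are assumed to be plain lists of model name strings
base_models = prepare_popular_models('base')
large_models = prepare_popular_models('large')

print(len(base_models))  # should be 17 for the 'base' list
print(base_models[:3])   # peek at the first few model names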

To find the best LM for English NER among 17 base LMs, use the following snippet:

from datasets import load_dataset
from transformer_ranker import TransformerRanker

# Step 1: Load the CoNLL-03 dataset from HuggingFace
dataset = load_dataset('conll2003')

# Step 2: Use our list of 'base' LMs as candidates to rank 
from transformer_ranker import prepare_popular_models
language_models = prepare_popular_models('base')

# Step 3: Initialize the ranker with the dataset 
ranker = TransformerRanker(dataset, dataset_downsample=0.2)

# ... and run the ranker to obtain the ranking
results = ranker.run(language_models, batch_size=64)

# print the ranking
print(results)

Done! This will print:

Rank 1. microsoft/deberta-v3-base: 2.6739
Rank 2. google/electra-base-discriminator: 2.6115
Rank 3. microsoft/mdeberta-v3-base: 2.6099
Rank 4. roberta-base: 2.5919
Rank 5. typeform/distilroberta-base-v2: 2.5834
Rank 6. sentence-transformers/all-mpnet-base-v2: 2.5709
Rank 7. bert-base-cased: 2.5137
Rank 8. FacebookAI/xlm-roberta-base: 2.4894
Rank 9. Twitter/twhin-bert-base: 2.4261
Rank 10. german-nlp-group/electra-base-german-uncased: 2.2517
Rank 11. distilbert-base-cased: 2.1989
Rank 12. sentence-transformers/all-MiniLM-L12-v2: 2.1957
Rank 13. Lianglab/PharmBERT-cased: 2.1945
Rank 14. google/electra-small-discriminator: 1.945
Rank 15. KISTI-AI/scideberta: 1.9175
Rank 16. SpanBERT/spanbert-base-cased: 1.7301
Rank 17. dmis-lab/biobert-base-cased-v1.2: 1.5784

This ranking gives you an indication of which models might perform best on CoNLL-03. Accordingly, you can exclude the lower-ranked models and focus on the top-ranked ones.

Note: Estimating all 17 base models takes about 15 minutes on a GPU-enabled Colab notebook (most of that time is spent downloading models you don't already have locally).
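
Finally, here is a sketch of the layerwise analysis mentioned at the top. It assumes that ranker.run() accepts a layer_aggregator parameter with a 'bestlayer' option, as described in the project documentation; treat the parameter and option names as assumptions if your version differs.

from datasets import load_dataset
from transformer_ranker import TransformerRanker

# Load and downsample the dataset as before
dataset = load_dataset('conll2003')
ranker = TransformerRanker(dataset, dataset_downsample=0.2)

# 'bestlayer' (assumed option name) scores each transformer layer
# separately, so the result indicates which intermediate layer of
# the model is best-suited for the task
results = ranker.run(["bert-base-cased"], layer_aggregator='bestlayer')
print(results)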

License

MIT
