Exploring and Exploiting Latent Commonsense Knowledge in Pretrained Masked Language Models

Codebase for the paper "Exploring and Exploiting Latent Commonsense Knowledge in Pretrained Masked Language Models", published in Findings of NAACL 2022 as "Specializing Pre-trained Language Models for Better Relational Reasoning via Network Pruning".

Note: the repository is under maintenance and will be completed soon.

Currently supported models (example checkpoint names are sketched after this list):

  • DistilBERT-base
  • BERT (base, large, etc.)
  • RoBERTa (base, large, etc.)
  • MPNet
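
These architectures correspond to standard Hugging Face checkpoints such as the ones below. Whether the scripts accept exactly these identifier strings is an assumption (and MODEL is only an illustrative variable name); check probe.sh and glue.sh for the names actually used.

# example Hugging Face checkpoint names for the supported architectures
# (illustrative; see probe.sh / glue.sh for the identifiers actually used)
MODEL=distilbert-base-uncased   # or bert-base-uncased, bert-large-uncased,
                                # roberta-base, roberta-large, microsoft/mpnet-base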

Prepare the codebase

git clone https://github.com/DRSY/LAMP.git && cd LAMP
pip install -r requirements.txt
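
If you prefer an isolated environment (optional and not required by the repository), create and activate one before running the pip install step above:

# optional: create and activate a virtual environment first
python3 -m venv .venv
source .venv/bin/activate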

Run pruning and probing

Specify the parameters for the probing experiments in a separate params file, then run:

make -f Makefile probe

Detailed hyperparameters can be found in probe.sh.
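
The params file format is defined by the Makefile and probe.sh; the snippet below is only a hypothetical sketch, and the variable names (MODEL, PRUNE_RATIO, SEED) are illustrative assumptions rather than the script's actual parameters.

# hypothetical params sketch -- check probe.sh for the real variable names
MODEL=bert-base-uncased    # which supported pretrained model to probe
PRUNE_RATIO=0.5            # illustrative pruning ratio
SEED=42                    # illustrative random seed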

Run GLUE

Specify the parameters for the GLUE experiments in a separate params file, then run:

make -f Makefile glue
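
As with probing, the actual parameter names live in glue.sh; the following is only a hypothetical sketch, with TASK and MODEL as assumed, illustrative names.

# hypothetical params sketch -- check glue.sh for the real variable names
TASK=SST-2                 # GLUE task, e.g. SST-2, MNLI, QNLI, CoLA
MODEL=roberta-base         # which pretrained model to fine-tune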

Detailed hyperparameters can be found in glue.sh.

Clean the log files

make -f Makefile clean
