This repository contains code to run the Plug and Play Language Model (PPLM), as described in this blog post and arXiv paper. A demo and Colab notebook are also available.
PPLM is also integrated into the 🤗/Transformers repository.
Authors: Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu
PPLM allows a user to flexibly plug in one or more tiny attribute models representing the desired steering objective into a large, unconditional language model (LM). The method has the key property that it uses the LM as is—no training or fine-tuning is required—which enables researchers to leverage best-in-class LMs even if they do not have the extensive hardware required to train them.
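The core idea can be sketched in a few lines: at each decoding step, PPLM nudges the LM's latent representations along the gradient of a small attribute model's score, leaving the LM's weights untouched. Below is a minimal, self-contained NumPy sketch of that update, not the actual PPLM code: the hidden state, the linear "attribute model," and the closed-form gradient are all toy stand-ins chosen for illustration.

```python
import numpy as np

# Toy stand-ins: a pretend LM hidden state and a tiny linear attribute model.
# In PPLM the attribute model scores the LM's activations and the gradient
# comes from backprop through the LM; here everything is closed-form.
rng = np.random.default_rng(0)
hidden = rng.normal(size=8)   # pretend LM hidden state
w = rng.normal(size=8)        # tiny attribute model's weights

def attribute_score(h):
    # Higher means "more of the desired attribute" (sigmoid of a linear score).
    return 1.0 / (1.0 + np.exp(-w @ h))

def attribute_grad(h):
    # Gradient of the sigmoid score w.r.t. the hidden state (toy model only).
    s = attribute_score(h)
    return s * (1.0 - s) * w

# One PPLM-style update: shift the latent a small step up the gradient.
stepsize = 0.03
perturbed = hidden + stepsize * attribute_grad(hidden)

print(attribute_score(hidden), attribute_score(perturbed))
```

Because the update only touches activations at generation time, the same pretrained LM can be steered toward different attributes simply by swapping the attribute model.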
See also our arXiv paper, blog post, and try it out for yourself with no setup using the Colab notebook.
```bash
pip install -r requirements.txt
```
```bibtex
@article{dathathri2019plug,
  title={Plug and Play Language Models: a Simple Approach to Controlled Text Generation},
  author={Sumanth Dathathri and Andrea Madotto and Janice Lan and Jane Hung and Eric Frank and Piero Molino and Jason Yosinski and Rosanne Liu},
  journal={arXiv preprint arXiv:1912.02164},
  year={2019},
}
```
```bash
python run_pplm.py -B military --cond_text "The potato" --length 50 --gamma 1.5 --num_iterations 3 --num_samples 10 --stepsize 0.03 --window_length 5 --kl_scale 0.01 --gm_scale 0.99 --colorama --sample
```
- Increase `--stepsize` to intensify topic control, and decrease its value to soften the control. `--stepsize 0` recovers the original uncontrolled GPT-2 model.
- If the generated text is repetitive (e.g. "science science experiment experiment"), there are several options to consider:
  a) Reduce `--stepsize`.
  b) Increase `--kl_scale` (the KL-loss coefficient) or decrease `--gm_scale` (the geometric-mean fusion scaling term).
  c) Add `--grad-length xx`, where xx is an integer (<= length, e.g. `--grad-length 30`).
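To build intuition for what `--gm_scale` trades off: as described in the paper, the perturbed and unperturbed next-token distributions are fused by a weighted geometric mean (while `--kl_scale` weights a KL penalty that keeps the perturbed distribution close to the original). A small NumPy sketch with illustrative probability values:

```python
import numpy as np

def fuse(p_orig, p_pert, gm_scale):
    # Weighted geometric mean of the two distributions, renormalized.
    # gm_scale = 1.0 -> use only the perturbed distribution;
    # gm_scale = 0.0 -> fall back to the original LM distribution.
    fused = (p_pert ** gm_scale) * (p_orig ** (1.0 - gm_scale))
    return fused / fused.sum()

p_orig = np.array([0.5, 0.3, 0.2])  # unperturbed next-token probs (toy values)
p_pert = np.array([0.1, 0.2, 0.7])  # perturbed probs steered toward a topic

print(fuse(p_orig, p_pert, 0.99))  # almost entirely the perturbed distribution
print(fuse(p_orig, p_pert, 0.0))   # exactly the original distribution
```

Lowering `gm_scale` thus pulls sampling back toward the unmodified LM, which is why decreasing it helps when generation becomes repetitive or degenerate.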
```bash
python run_pplm.py -D sentiment --class_label 2 --cond_text "My dog died" --length 50 --gamma 1.0 --num_iterations 10 --num_samples 10 --stepsize 0.04 --kl_scale 0.01 --gm_scale 0.95 --sample
```
- Increase `--stepsize` to intensify attribute control, and decrease its value to soften the control. `--stepsize 0` recovers the original uncontrolled GPT-2 model.
- Use `--class_label 3` for negative sentiment, and `--class_label 2` for positive sentiment.
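The claim that `--stepsize 0` recovers the unmodified model follows directly from the update rule: PPLM shifts the latents by `stepsize * gradient`, so a zero stepsize leaves them, and hence the generated text, unchanged. A toy illustration (values and names are made up, not the actual PPLM update code):

```python
import numpy as np

latent = np.array([0.4, -1.2, 0.7])     # pretend LM latent
gradient = np.array([0.9, 0.1, -0.5])   # pretend attribute-model gradient

def perturb(h, g, stepsize):
    # PPLM-style shift: move the latent a small step along the gradient.
    return h + stepsize * g

print(perturb(latent, gradient, 0.0))   # identical to the original latent
print(perturb(latent, gradient, 0.04))  # nudged toward the attribute
```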