The experimental scripts used in our paper:
@inproceedings{kohita-etal-2020-ealm,
    title = "Q-learning with Language Model for Edit-based Unsupervised Summarization",
    author = "Kohita, Ryosuke and Wachi, Akifumi and Zhao, Yang and Tachibana, Ryuki",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    year = "2020"
}
pip install -r requirements.txt
sh run_train.sh
This will save a model to ./model after reaching 1000 updates.
sh run_predict.sh ./model data/sample.txt
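To run the trained model on your own text, the sketch below prepares a custom input file and passes it to the predict script. Note this is an assumption not stated above: the input format is taken to follow data/sample.txt (one source sentence per line), and my_input.txt is a hypothetical file name.

```shell
# Assumed input format: one source sentence per line, as in data/sample.txt.
printf 'the quick brown fox jumped over the lazy dog .\n' > my_input.txt

# The predict script is only present inside the cloned repo, so guard the call.
if [ -f run_predict.sh ]; then
    sh run_predict.sh ./model my_input.txt
fi
```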