Our paper, "Toward Unifying Text Segmentation and Long Document Summarization", was accepted at EMNLP 2022. If you find the code useful, please cite the following paper.
```
@inproceedings{cho-etal-2022-toward,
  title={Toward Unifying Text Segmentation and Long Document Summarization},
  author={Cho, Sangwoo and Song, Kaiqiang and Wang, Xiaoyang and Liu, Fei and Yu, Dong},
  booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
  year={2022}
}
```
We propose a method that learns robust sentence representations by performing summarization and segmentation simultaneously, further enhanced by an optimization-based regularizer that promotes selection of diverse summary sentences.
- **Lodoss-base**: a traditional extractive summarization model that predicts, for each sentence, the probability of being included in the summary.
- **Lodoss-joint**: jointly predicts summarization and segmentation at a higher level of context (paragraph, section, etc.).
- **Lodoss-full**: additionally regularized by the DPP loss to select better (more diverse) summary sentences.
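To illustrate why a DPP-style regularizer favors diverse selections, here is a minimal NumPy sketch (not the paper's implementation) of the unnormalized DPP log-probability of a sentence subset, `log det(L_S)`, under a common quality-times-similarity kernel construction; all names and values are illustrative.

```python
import numpy as np

def dpp_log_prob(L, subset):
    """Unnormalized DPP log-probability of a subset: log det(L_S)."""
    S = np.ix_(subset, subset)
    sign, logdet = np.linalg.slogdet(L[S])
    return logdet if sign > 0 else -np.inf

# Toy kernel: sentences 0 and 1 are near-duplicates; sentence 2 is distinct.
q = np.array([1.0, 1.0, 0.8])             # per-sentence quality scores
sim = np.array([[1.0, 0.95, 0.1],
                [0.95, 1.0, 0.1],
                [0.1, 0.1, 1.0]])         # pairwise similarity
L = np.outer(q, q) * sim                  # L[i, j] = q[i] * sim[i, j] * q[j]

# A diverse pair scores higher than a redundant one,
# even though sentence 2 has lower quality than sentence 1.
assert dpp_log_prob(L, [0, 2]) > dpp_log_prob(L, [0, 1])
```

Maximizing this score during training pushes the model away from picking redundant sentences, which is the intuition behind the `--is_dpp` option below.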
Create the environment with conda and pip.
```
conda env create -f environment.yml
conda activate lodoss
```
Our experiments were run with Python 3.9 and CUDA 11.3. (For other CUDA versions, please install the corresponding packages.)
We provide the processed data: PubMed, arXiv, and VT-SSum.
- The original datasets can be downloaded from PubMed, arXiv, or VT-SSum.
- The dataset format is based on Hugging Face Datasets.
- Keys for each data instance:
  - `article_id`: str
  - `abstract_list`: List[str] / reference summary
  - `section_list`: List[str] / document segmented by sections
  - `section_names`: List[str] / names for each section
  - `selected_ids`: List[int] / oracle summary ids
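The schema above can be sanity-checked on a toy instance; the values below are made up for illustration, and real instances come from the processed dataset files.

```python
# A toy instance following the key schema described above (values invented).
instance = {
    "article_id": "PMC123456",
    "abstract_list": ["This paper studies X.", "We find Y."],
    "section_list": ["Introduction ...", "Methods ...", "Results ..."],
    "section_names": ["Introduction", "Methods", "Results"],
    "selected_ids": [0, 2],
}

# Basic schema checks: field types and section/name alignment.
assert isinstance(instance["article_id"], str)
assert all(isinstance(s, str) for s in instance["abstract_list"])
assert len(instance["section_list"]) == len(instance["section_names"])
assert all(isinstance(i, int) for i in instance["selected_ids"])
```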
Please run the scripts under `/run`. For example:
```
cd Lodoss
bash run/train_pubmed_16K_base.sh
```
You need to update `--data_dir` with your data directory.
Some parameters to consider:
- `--max_input_len 16384`: change to `4096` to limit the input length
- `--is_longer_seq`: enable this for sequence lengths longer than 4096
- `--is_seg`: for the Lodoss-joint model
- `--is_dpp`: for the Lodoss-full model; set `--dpp_weight 0.1` as well
- `--fp16`: use this for the Lodoss-base and Lodoss-joint models; due to numerical stability issues when computing the DPP loss, it is not recommended for Lodoss-full (use fp32)
- `--num_sent_inf 7`: number of sentences to select at inference; for the best results use `7` for PubMed and `5` for arXiv (the average number of reference summary sentences for each dataset)
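As a sketch of what `--num_sent_inf` controls at inference, a typical extractive decoding step keeps the k sentences with the highest predicted probabilities and reports them in document order; the function and variable names below are illustrative, not the repo's actual API.

```python
# Illustrative sketch: keep the top-k sentences by predicted probability,
# then restore document order (names here are hypothetical).
def select_summary(probs, sentences, num_sent_inf=7):
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = sorted(ranked[:num_sent_inf])   # back to document order
    return [sentences[i] for i in chosen]

sents = ["s0", "s1", "s2", "s3", "s4"]
probs = [0.1, 0.9, 0.3, 0.8, 0.2]
print(select_summary(probs, sents, num_sent_inf=2))  # ['s1', 's3']
```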
If you'd like to download our trained models on PubMed and arXiv, they are available here:
Model | PubMed | arXiv |
---|---|---|
16K_Lodoss-base | ⬇️ | ⬇️ |
16K_Lodoss-joint | ⬇️ | ⬇️ |
16K_Lodoss-full | ⬇️ | ⬇️ |
16K_Lodoss-full-LG | ⬇️ | ⬇️ |
With your own trained model or our provided pretrained models above, you can run inference on PubMed or arXiv.
```
cd Lodoss
bash run/test_pubmed_16K_base.sh
```
Please refer to the scripts under `/run` for running inference with other models. You can also save the generated summaries by enabling `--save_summary`.
Copyright 2022 Tencent
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
This repo is only for research purpose. It is not an officially supported Tencent product.