This paper presents a comprehensive exploration of fine-tuning a RoBERTa encoder-decoder model for abstractive text summarization. The approach takes a RoBERTa model pre-trained on a large text corpus and fine-tunes the resulting encoder-decoder architecture on a summarization dataset. Extensive experiments on benchmark datasets show that the fine-tuned RoBERTa encoder-decoder achieves summarization quality comparable to existing methods. The study also examines the impact of dataset size, domain-specific fine-tuning, and transfer learning, highlighting the adaptability of RoBERTa-based models for generating coherent and informative summaries across diverse domains and contributing to abstractive summarization research.
The paper can be found as paper_abstractive_summarization.pdf.
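
As a rough illustration of the setup described above, the sketch below warm-starts an encoder-decoder from two pre-trained RoBERTa checkpoints with Hugging Face Transformers and runs a single fine-tuning step on a toy article/summary pair. This is a minimal sketch, not the paper's implementation: the checkpoint name (`roberta-base`), sequence lengths, and example texts are illustrative assumptions.

```python
# Minimal sketch (assumed configuration, not the paper's code): build a
# RoBERTa-to-RoBERTa encoder-decoder and run one summarization training step.
from transformers import EncoderDecoderModel, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Warm-start both encoder and decoder from the same pre-trained checkpoint;
# the decoder's cross-attention weights are randomly initialized.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "roberta-base", "roberta-base"
)

# Special tokens the sequence-to-sequence generation loop needs.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Toy article/summary pair standing in for a real summarization dataset.
article = "The quick brown fox jumped over the lazy dog near the river bank."
summary = "A fox jumped over a dog."

inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
labels = tokenizer(summary, truncation=True, max_length=64, return_tensors="pt").input_ids

# With labels provided, the model returns the cross-entropy loss directly.
outputs = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    labels=labels,
)
outputs.loss.backward()  # one gradient step; wrap in an optimizer loop in practice
```

In practice this single step would sit inside a standard training loop (or a utility such as Transformers' Seq2SeqTrainer) iterating over the full summarization dataset.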