This is the source code repository for the paper Dialogue Discourse Parsing as Generation: a Sequence-to-Sequence LLM-based Approach (SIGDial 2024).
We used the linguistic-only STAC corpus and followed the separation of train, dev, test in Shi and Huang, A Deep Sequential Model for Discourse Parsing on Multi-Party Dialogues. In AAAI, 2019.
The latest version of the corpus is available on the website here.
We share the dataset we used in `data/stac/`.
Download the dataset from here and place it in `data/molweni/`. We use the original separation of train, dev, and test.
Here is a step-by-step guide to fine-tuning a T5-family model for discourse parsing:
```shell
$ source virtualenvname/bin/activate
$ cd Seq2Seq-DDP/
$ pip install -r requirements.txt
```
In `dataprocess.py`: process the original STAC/Molweni dataset and convert the raw text to structured text. Choose the structure type from 'natural', 'augmented' (Seq2Seq-DDP) or 'focus', 'natural2' (Seq2Seq-DDP+transition). Examples of each structure type are given in `data/stac_{structure}_train.json`.
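To make the idea of structured target text concrete, here is a hypothetical sketch that linearizes dependency links into a "natural"-style target string. The templates below are illustrative assumptions only; the exact wording the repo uses is defined in `dataprocess.py` and shown in `data/stac_{structure}_train.json`.

```python
# Hypothetical sketch: turn (head, dependent, relation) links into a
# "natural"-style target string for a seq2seq model. The template here
# is an assumption, not the repo's actual format.
def linearize_natural(edus, links):
    """edus: list of utterance strings; links: (head, dep, relation) triples."""
    parts = []
    for head, dep, rel in links:
        # One clause per discourse link, indexed by utterance position.
        parts.append(f"{dep} is {rel} of {head}")
    return "; ".join(parts)

links = [(0, 1, "Question_answer_pair"), (1, 2, "Acknowledgement")]
print(linearize_natural(["A?", "B.", "ok"], links))
# → 1 is Question_answer_pair of 0; 2 is Acknowledgement of 1
```

The inverse mapping (parsing the generated string back into links) is what turns the model's free-text output into an evaluable discourse structure.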
In `train.py`: pass "do_train" as an argument. This script fine-tunes a T5-family model for discourse parsing.
- Seq2Seq-DDP prediction: in `train.py`, pass the argument "do_test" and choose a structure type from 'augmented', 'natural'. Make sure to first put the fine-tuned model checkpoint in `constant.py`. Results will be written to `generation/`.
- Seq2Seq-DDP+transition system prediction: in `transition_predict.py`, choose a structure type from 'focus', 'natural2'.
- `evaluate.py`: evaluates the predicted files in `generation/` and calculates scores.
- `constant.py`: stores paths, labels, etc.
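For reference, dialogue discourse parsing is typically scored with micro-F1 over attachment (unlabeled) and attachment-plus-relation (labeled) links. The snippet below is a minimal sketch of that metric under those standard definitions; it is not the repo's `evaluate.py`.

```python
# Hedged sketch: micro-F1 over predicted vs. gold discourse links.
# Links are (head, dependent, relation) triples; "labeled" also checks
# the relation type, "unlabeled" only the attachment.
def link_f1(gold, pred, labeled=True):
    g = {(h, d, r) if labeled else (h, d) for h, d, r in gold}
    p = {(h, d, r) if labeled else (h, d) for h, d, r in pred}
    tp = len(g & p)                       # correctly predicted links
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [(0, 1, "QAP"), (1, 2, "Ack")]
pred = [(0, 1, "QAP"), (0, 2, "Ack")]    # second link attached to the wrong head
print(link_f1(gold, pred, labeled=True))   # → 0.5
print(link_f1(gold, pred, labeled=False))  # → 0.5
```

Because both gold and prediction sets have two links and one matches, precision and recall are both 0.5 here.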
Coming soon.