This is the public repository for data related to the single-turn debate project. It contains the arguments and text snippets selected by writers, as well as the judgments provided by workers on MTurk.
Authors: Alicia Parrish, Harsh Trivedi, Ethan Perez, Angelica Chen, Nikita Nangia, Jason Phang, and Samuel R. Bowman
## argument_judging_data
- Description: human judgments associated with the arguments & text snippets
- Contents: 2 sub-folders, each with a csv with the following structure (a short loading sketch follows this list):
    - `passage_id` - Unique identifier for the passage; matches the passage id for the data from the QuALITY dataset
    - `question_id` - Unique identifier for the question; matches the `question_id` in the argument writing data but does not match the question ids from the QuALITY dataset. `question_id` values that start with `ct_` indicate catch trials
    - `mode` - The experimental condition the item was shown in, either `p` (passage-only), `ps` (passage+snippet), or `psa` (passage+snippet+argument)
    - `chosen_answer_text` - The text of the answer choice chosen by that worker
    - `choice_number` - Value of 1 or 2 indicating whether the chosen text corresponds to `ans1` or `ans2`
    - `ans1_text` & `ans2_text` - The text displayed for `ans1` and `ans2`
    - `ans1_snippets` & `ans2_snippets` - The text snippets displayed for `ans1` and `ans2`
    - `ans1_arg` & `ans2_arg` - TRUE/FALSE value indicating whether `ans1` or `ans2` was correct
    - `ans1` & `ans2` - The text of the answer option for `ans1` and `ans2`
    - `corr` - Value of 1 or 2 indicating whether `ans1` or `ans2` is the correct answer
    - `question_text` - The text of the question
    - `anonid` - Unique identifier for each MTurk worker
    - `hit_start_timestamp`, `timer_start_timestamp`, & `hit_end_timestamp` - Timestamps indicating when the worker started the HIT, when the worker started the timer within the HIT (revealing the passage and any additional information), and when the worker submitted the HIT
    - `round` - Which round of data collection the example was shown during
    - `part` - (pilot data only) Which half of the round the example was shown during
    - `time_limit` - (pilot data only) The maximum time allowed to the worker for that example. This value was always 90s in the main task, but varied between 60s, 90s, and 120s during the pilot
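
As a quick illustration of how these columns fit together, here is a minimal pandas sketch that drops the catch trials and scores the judgments per condition. The csv path is hypothetical (the sub-folder and file names are not listed above), and it assumes `choice_number` and `corr` are read as integers.

```python
import pandas as pd

# Hypothetical path: the actual sub-folder and csv file names may differ.
df = pd.read_csv("argument_judging_data/main/judgments.csv")

# Catch trials are flagged by question_id values starting with "ct_".
real = df[~df["question_id"].str.startswith("ct_")]

# A judgment is correct when the chosen option number (1 or 2) matches `corr`.
correct = real["choice_number"] == real["corr"]

# Accuracy per experimental condition: p, ps, psa.
print(correct.groupby(real["mode"]).mean())
```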
## argument_writing_data
- Description: text arguments and selected snippets
- Contents: 1 .jsonl file with the following structure (a short reading sketch follows this list):
    - `hit_id` & `assignment_id` - Unique identifiers for the writing assignment
    - `worker_id` - Unique identifier for each writer
    - `submit_timestamp` - Time the writing task was submitted
    - `output_data` - A dictionary with the following contents:
        - `passage_id` - Unique identifier for the passage; matches the passage id for the data from the QuALITY dataset (available at https://github.com/nyu-mll/quality)
        - `question_id` - Unique identifier for the question; does not match the question ids from the QuALITY dataset
        - `question_text` - The text of the question
        - `argue_for` - The answer option the writer was assigned to argue for
        - `argue_against` - The answer option the writer was assigned to argue against
        - `argue_for_id` & `argue_against_id` - 0 or 1, indicating the original order of the answer options
        - `selected_snippets` - A list of 1-3 snippets of text selected by the writer to support their argument
        - `argue_for_correct` - TRUE/FALSE value indicating whether the writer was assigned to write for the correct answer to this question
        - `argument` - The argument written in support of the assigned answer option
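
Since each line is one writing assignment with a nested `output_data` dictionary, the file can be read record by record with the standard library. The file name below is a placeholder; the actual .jsonl name is not given in this README.

```python
import json

# Placeholder file name: the actual .jsonl file name may differ.
with open("argument_writing_data/arguments.jsonl") as f:
    for line in f:
        record = json.loads(line)      # one writing assignment per line
        out = record["output_data"]    # nested dictionary described above
        print(out["question_text"])
        print("Argue for:", out["argue_for"],
              "| assigned the correct answer:", out["argue_for_correct"])
        for snippet in out["selected_snippets"]:  # 1-3 supporting snippets
            print(" -", snippet)
        print(out["argument"])
        break  # inspect just the first record
```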