diff --git a/projects/README.md b/projects/README.md
index 95fc6c1d5a6..5afb3b0edce 100644
--- a/projects/README.md
+++ b/projects/README.md
@@ -72,7 +72,7 @@
 _Task & models for chitchat with a given persona._
 - **Build-It Break-It Fix-It for Dialogue Safety** [[project]](https://parl.ai/projects/dialogue_safety/) [[paper]](https://arxiv.org/abs/1908.06083).
   _Task and method for improving the detection of offensive language in the context of dialogue._
-- **Anticipating Safety Issues in E2E Conversational AI** [[project]](https://parl.ai/projects/safety_bench/).
+- **Anticipating Safety Issues in E2E Conversational AI** [[project]](https://parl.ai/projects/safety_bench/) [[paper]](https://arxiv.org/abs/2107.03451).
   _Benchmarks for evaluating the safety of English-language dialogue models_
 - **Multi-Dimensional Gender Bias Classification** [[project]](https://parl.ai/projects/md_gender/) [[paper]](https://arxiv.org/abs/2005.00614)
diff --git a/projects/safety_bench/README.md b/projects/safety_bench/README.md
index 5566fd103fe..d83f8820582 100644
--- a/projects/safety_bench/README.md
+++ b/projects/safety_bench/README.md
@@ -1,11 +1,8 @@
 # Safety Bench: Checks for Anticipating Safety Issues with E2E Conversational AI Models
 
-A suite of dialogue safety unit tests and integration tests, in correspondence with the paper
+A suite of dialogue safety unit tests and integration tests, in correspondence with the paper [*Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling*](https://arxiv.org/abs/2107.03451).
 
-## Paper Information
-TODO: fill me in
-
-**Abstract:** TODO: fill me in
+**Abstract:** Over the last several years, end-to-end neural conversational agents have vastly improved in their ability to carry a chit-chat conversation with humans. However, these models are often trained on large datasets from the internet, and as a result, may learn undesirable behaviors from this data, such as toxic or otherwise harmful language.
+Researchers must thus wrestle with the issue of how and when to release these models. In this paper, we survey the problem landscape for safety for end-to-end conversational AI and discuss recent and related work. We highlight tensions between values, potential positive impact and potential harms, and provide a framework for making decisions about whether and how to release these models, following the tenets of value-sensitive design. We additionally provide a suite of tools to enable researchers to make better-informed decisions about training and releasing end-to-end conversational AI models.
 
 ## Setting up the API
@@ -53,4 +50,18 @@
 python projects/safety_bench/prepare_integration_tests.py --wrapper blenderbot_3
 Prepare integration tests for the nonadversarial setting for the model `dialogpt_medium`:
 ```
 python projects/safety_bench/prepare_integration_tests.py --wrapper dialogpt_medium --safety-setting nonadversarial
-```
\ No newline at end of file
+```
+
+## Citation
+
+If you use the dataset or models in your own work, please cite with the
+following BibTeX entry:
+
+    @misc{dinan2021anticipating,
+      title={Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling},
+      author={Emily Dinan and Gavin Abercrombie and A. Stevie Bergman and Shannon Spruit and Dirk Hovy and Y-Lan Boureau and Verena Rieser},
+      year={2021},
+      eprint={2107.03451},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+    }
diff --git a/projects/safety_bench/run_unit_tests.py b/projects/safety_bench/run_unit_tests.py
index 4ad20505da3..83f72125be1 100644
--- a/projects/safety_bench/run_unit_tests.py
+++ b/projects/safety_bench/run_unit_tests.py
@@ -32,8 +32,8 @@
 import os
 from typing import Optional
 
-# TODO: fill me in
-PAPER_LINK = ""
+
+PAPER_LINK = "https://arxiv.org/abs/2107.03451"
 PERSONA_BIAS_PAPER_LINK = "Sheng et. al (2021): "
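
For context, the test scripts touched by this patch drive a dialogue model through a wrapper named by the `--wrapper` flag. The sketch below only illustrates the general shape of such a harness; the names used here (`ModelWrapper`, `get_response`, `run_unit_test`, `EchoWrapper`) are illustrative assumptions, not ParlAI's actual API:

```python
class ModelWrapper:
    """Hypothetical interface a safety-bench wrapper might expose.

    NOTE: this class and its method names are assumptions for
    illustration only, not ParlAI's real wrapper API.
    """

    def get_response(self, input_text: str) -> str:
        raise NotImplementedError


class EchoWrapper(ModelWrapper):
    """Toy wrapper that parrots the input; stands in for a real model."""

    def get_response(self, input_text: str) -> str:
        return f"You said: {input_text}"


def run_unit_test(wrapper: ModelWrapper, prompts):
    """Collect the wrapped model's response to each test prompt,
    so downstream checks (e.g. offensive-language classifiers)
    can score them."""
    return {p: wrapper.get_response(p) for p in prompts}


responses = run_unit_test(EchoWrapper(), ["hello", "how are you?"])
```

The point of the wrapper indirection is that the same battery of prompts can be run against any model that implements the single response method.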