Design Choices for Crowdsourcing Implicit Discourse Relations: Revealing the Biases Introduced by Task Design

Valentina Pyatkin, Frances Yung, Merel C.J. Scholman, Reut Tsarfaty, Ido Dagan, Vera Demberg

Research output: Contribution to journal › Article › peer-review

Abstract

Disagreement in natural language annotation has mostly been studied from the perspective of biases introduced by the annotators and by the annotation frameworks. Here, we propose to analyze another source of bias: task design bias, which has a particularly strong impact on crowdsourced linguistic annotations, where natural language is used to elicit the interpretations of lay annotators. For this purpose, we look at implicit discourse relation annotation, a task that has repeatedly been shown to be difficult due to the relations' ambiguity. We compare the annotations of 1,200 discourse relations obtained using two distinct annotation tasks, and we quantify the biases of both methods across four different domains. Both methods are natural language annotation tasks designed for crowdsourcing. We show that the task design can push annotators towards certain relations, and that some discourse relation senses can be better elicited with one or the other annotation approach. We also conclude that this type of bias should be taken into account when training and testing models.

Original language: English
Pages (from-to): 1014-1032
Number of pages: 19
Journal: Transactions of the Association for Computational Linguistics
Volume: 11
DOIs
State: Published - 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2023 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.
