
Evaluating models’ local decision boundaries via contrast sets

  • Matt Gardner
  • Yoav Artzi
  • Victoria Basmova
  • Jonathan Berant
  • Ben Bogin
  • Sihao Chen
  • Pradeep Dasigi
  • Dheeru Dua
  • Yanai Elazar
  • Ananth Gottumukkala
  • Nitish Gupta
  • Hanna Hajishirzi
  • Gabriel Ilharco
  • Daniel Khashabi
  • Kevin Lin
  • Jiangming Liu
  • Nelson F. Liu
  • Phoebe Mulcaire
  • Qiang Ning
  • Sameer Singh
  • Noah A. Smith
  • Sanjay Subramanian
  • Reut Tsarfaty
  • Eric Wallace
  • Ally Zhang
  • Ben Zhou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture the abilities a dataset is intended to test. We propose a more rigorous annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model’s decision boundary, which can be used to more accurately evaluate a model’s true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, and IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets—up to 25% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes.
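The evaluation idea described above can be made concrete with a small sketch. The review texts, the keyword-rule `predict` stub, and the perturbations below are illustrative assumptions, not drawn from the paper's released contrast sets; the point is how a contrast set is scored as a unit (the model gets credit only if it is correct on every instance in the set) rather than instance by instance:

```python
def predict(text: str) -> str:
    """Stand-in sentiment model: a trivial keyword rule (assumption),
    playing the role of a model with a brittle decision boundary."""
    lowered = text.lower()
    return "negative" if "not" in lowered or "boring" in lowered else "positive"

# One original IMDb-style test instance plus manual perturbations that
# minimally edit the text and (typically) flip the gold label.
contrast_set = [
    ("The acting was superb and the plot gripping.", "positive"),    # original
    ("The acting was superb but the plot was boring.", "negative"),  # perturbed
    ("The acting was superb but the plot dragged.", "negative"),     # perturbed
]

# Standard accuracy scores each instance independently; contrast
# consistency credits the model only when the whole set is correct,
# probing the local decision boundary rather than a single point.
correct = [predict(text) == label for text, label in contrast_set]
accuracy = sum(correct) / len(correct)
consistency = all(correct)

print(f"accuracy={accuracy:.2f}, consistency={consistency}")
# The rule model handles the explicit cue words but misses "dragged",
# so it scores well on accuracy yet fails the contrast set as a whole.
```

Here the model is right on two of three instances (accuracy 0.67) but inconsistent on the set, mirroring the paper's finding that models can look strong on i.i.d. test sets while failing on small, meaning-changing perturbations.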

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics, Findings of ACL
Subtitle of host publication: EMNLP 2020
Publisher: Association for Computational Linguistics (ACL)
Pages: 1307-1323
Number of pages: 17
ISBN (electronic): 9781952148903
Publication status: Published - 2020
Published externally: Yes
Event: Findings of the Association for Computational Linguistics, ACL 2020: EMNLP 2020 - Virtual, Online
Duration: 16 Nov 2020 – 20 Nov 2020

Publication series

Name: Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2020

Conference

Conference: Findings of the Association for Computational Linguistics, ACL 2020: EMNLP 2020
City: Virtual, Online
Period: 16/11/20 – 20/11/20

Bibliographical note

Publisher Copyright:
© 2020 Association for Computational Linguistics

