Abstract
Transformer-based language models significantly advanced the state-of-the-art in many linguistic tasks. As this revolution continues, the ability to explain model predictions has become a major area of interest for the NLP community. In this work, we present Gradient Self-Attention Maps (Grad-SAM) - a novel gradient-based method that analyzes self-attention units and identifies the input elements that explain the model's prediction the best. Extensive evaluations on various benchmarks show that Grad-SAM obtains significant improvements over state-of-the-art alternatives.
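To make the idea concrete, below is a minimal sketch of a Grad-SAM-style computation, assuming a HuggingFace-style BERT classifier: each self-attention map is weighted by the ReLU of its gradient with respect to the predicted-class logit, then averaged over layers, heads, and query positions into per-token importance scores. The checkpoint name, the `grad_sam_scores` helper, and the exact aggregation order are illustrative assumptions, not the authors' released implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "textattack/bert-base-uncased-SST-2"  # illustrative checkpoint, not from the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def grad_sam_scores(text: str):
    """Return (token, importance) pairs for the model's predicted class."""
    enc = tokenizer(text, return_tensors="pt")
    out = model(**enc, output_attentions=True)
    pred = out.logits.argmax(dim=-1).item()
    # Gradients of the predicted-class logit w.r.t. every attention map
    # (one tensor of shape (1, heads, seq, seq) per layer).
    grads = torch.autograd.grad(out.logits[0, pred], out.attentions)
    scores = torch.zeros(enc["input_ids"].shape[1])
    for attn, grad in zip(out.attentions, grads):
        # Gradient-weighted attention; ReLU keeps only the entries whose
        # gradient points toward the predicted class.
        weighted = attn[0] * torch.relu(grad[0])   # (heads, seq, seq)
        scores += weighted.mean(dim=(0, 1))        # average heads + query positions
    scores /= len(out.attentions)                  # average across layers
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, scores.tolist()))

print(grad_sam_scores("a gripping and well acted thriller"))
```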
| Original language | English |
|---|---|
| Title of host publication | CIKM 2021 - Proceedings of the 30th ACM International Conference on Information and Knowledge Management |
| Publisher | Association for Computing Machinery |
| Pages | 2882-2887 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781450384469 |
| DOIs | |
| Publication status | Published - 26 October 2021 |
| Event | 30th ACM International Conference on Information and Knowledge Management, CIKM 2021 - Virtual, Online, Australia. Duration: 1 November 2021 → 5 November 2021 |
Publication series
| Name | International Conference on Information and Knowledge Management, Proceedings |
|---|---|
Conference
| Conference | 30th ACM International Conference on Information and Knowledge Management, CIKM 2021 |
|---|---|
| Country/Territory | Australia |
| City | Virtual, Online |
| Duration | 1/11/21 → 5/11/21 |
Bibliographical note
Publisher Copyright: © 2021 ACM.