Abstract
We present Learning Attributions (LA), a novel method for explaining language models. The core idea behind LA is to train a dedicated attribution model that functions as a surrogate explainer for the language model. This attribution model is designed to identify which tokens are most influential in driving the model's predictions. By optimizing the attribution model to mask the minimal amount of information necessary to induce substantial changes in the language model's output, LA provides a mechanism to understand which tokens in the input are critical for the model's decisions. We demonstrate the effectiveness of LA across several language models, highlighting its superiority over multiple state-of-the-art explanation methods across various datasets and evaluation metrics.
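The abstract's core objective — find a minimal set of input tokens whose masking substantially changes the model's output — can be illustrated with a simplified sketch. This is not the paper's method: the toy bag-of-words scorer, the greedy search, and the `threshold` parameter are all illustrative stand-ins for the trained attribution model described above.

```python
def toy_model(tokens):
    """Stand-in 'language model': a bag-of-words sentiment score (hypothetical)."""
    weights = {"great": 2.0, "good": 1.0, "bad": -1.0, "awful": -2.0}
    return sum(weights.get(t, 0.0) for t in tokens)

def minimal_mask(tokens, threshold=1.5):
    """Greedily mask tokens until the output shifts by at least `threshold`.

    Returns indices of the masked (i.e. most influential) tokens. In LA,
    a trained surrogate model would predict this mask instead of searching.
    """
    base = toy_model(tokens)
    masked = set()
    while abs(toy_model([t for i, t in enumerate(tokens) if i not in masked]) - base) < threshold:
        # Pick the single unmasked token whose removal moves the output most.
        best = max(
            (i for i in range(len(tokens)) if i not in masked),
            key=lambda i: abs(
                toy_model([t for j, t in enumerate(tokens) if j not in masked | {i}]) - base
            ),
        )
        masked.add(best)
    return sorted(masked)

tokens = "the movie was great but the ending was bad".split()
print(minimal_mask(tokens))  # → [3]: masking "great" shifts the score most
```

The greedy search here is only a conceptual proxy; the point of LA is that a dedicated attribution model learns to produce such minimal masks directly, without per-input search.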
Original language | English |
---|---|
Host publication title | CIKM 2024 - Proceedings of the 33rd ACM International Conference on Information and Knowledge Management |
Publisher | Association for Computing Machinery |
Pages | 98-108 |
Number of pages | 11 |
ISBN (Electronic) | 9798400704369 |
DOIs | |
Publication status | Published - 21 Oct 2024 |
Event | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States. Duration: 21 Oct 2024 → 25 Oct 2024 |
Publication series
Name | International Conference on Information and Knowledge Management, Proceedings |
---|---|
ISSN (Print) | 2155-0751 |
Conference | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 |
---|---|
Country/Territory | United States |
City | Boise |
Period | 21/10/24 → 25/10/24 |
Bibliographical note
Publisher Copyright: © 2024 ACM.