Abstract
We present Learning Attributions (LA), a novel method for explaining language models. The core idea behind LA is to train a dedicated attribution model that functions as a surrogate explainer for the language model. This attribution model is designed to identify which tokens are most influential in driving the model's predictions. By optimizing the attribution model to mask the minimal amount of information necessary to induce substantial changes in the language model's output, LA provides a mechanism to understand which tokens in the input are critical for the model's decisions. We demonstrate the effectiveness of LA across several language models, highlighting its superiority over multiple state-of-the-art explanation methods on various datasets and evaluation metrics.
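To make the stated objective concrete (mask as little as possible while changing the frozen language model's output as much as possible), the following is a minimal sketch of one training step for the surrogate attribution model. It is not the authors' released code: it assumes a HuggingFace-style frozen model `lm` that accepts `inputs_embeds`, and the names `attr_model`, `la_step`, `mask_token_id`, and the sparsity weight `lam` are all illustrative.

```python
# Hypothetical sketch of the masking objective described in the abstract,
# not the paper's actual implementation.
import torch
import torch.nn.functional as F


def la_step(lm, attr_model, input_ids, attention_mask, mask_token_id, lam=0.1):
    """One training step for the surrogate attribution model.

    The surrogate predicts a soft mask over tokens; masked tokens are blended
    toward the [MASK] embedding. The loss rewards a large shift in the frozen
    LM's output distribution (evidence the masked tokens were influential)
    while a sparsity term keeps the masked region minimal.
    """
    with torch.no_grad():
        ref_logits = lm(input_ids=input_ids, attention_mask=attention_mask).logits

    # Per-token masking probabilities in [0, 1], shape (batch, seq_len).
    mask_probs = torch.sigmoid(attr_model(input_ids, attention_mask))

    # Soft masking in embedding space keeps the objective differentiable:
    # each token embedding is interpolated toward the [MASK] embedding.
    embed = lm.get_input_embeddings()
    tok_emb = embed(input_ids)
    mask_emb = embed(torch.full_like(input_ids, mask_token_id))
    w = mask_probs.unsqueeze(-1)
    mixed_emb = (1.0 - w) * tok_emb + w * mask_emb

    masked_logits = lm(inputs_embeds=mixed_emb, attention_mask=attention_mask).logits

    # Divergence between the original and masked predictions; it is to be
    # maximized, so it enters the loss with a negative sign.
    change = F.kl_div(
        F.log_softmax(masked_logits, dim=-1),
        F.softmax(ref_logits, dim=-1),
        reduction="batchmean",
    )

    # Sparsity penalty: mask as little of the input as possible.
    sparsity = (mask_probs * attention_mask).sum() / attention_mask.sum()

    return -change + lam * sparsity
```

Soft masking in embedding space is one standard differentiable relaxation of token removal; the paper's exact mask parameterization and choice of divergence may differ.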
| Original language | English |
| --- | --- |
| Title of host publication | CIKM 2024 - Proceedings of the 33rd ACM International Conference on Information and Knowledge Management |
| Publisher | Association for Computing Machinery |
| Pages | 98-108 |
| Number of pages | 11 |
| ISBN (Electronic) | 9798400704369 |
| DOIs | |
| State | Published - 21 Oct 2024 |
| Event | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States. Duration: 21 Oct 2024 → 25 Oct 2024 |
Publication series
| Name | International Conference on Information and Knowledge Management, Proceedings |
| --- | --- |
| ISSN (Print) | 2155-0751 |
Conference
| Conference | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 |
| --- | --- |
| Country/Territory | United States |
| City | Boise |
| Period | 21/10/24 → 25/10/24 |
Bibliographical note
Publisher Copyright: © 2024 ACM.
Keywords
- deep learning
- explainable AI
- natural language processing