Abstract
In the field of recommender systems, explainability remains a pivotal yet challenging aspect. To address this, we introduce the Learning to eXplain Recommendations (LXR) framework, a post-hoc, model-agnostic approach designed to provide counterfactual explanations. LXR is compatible with any differentiable recommender algorithm and scores the relevance of a user's data with respect to a recommended item. A distinctive feature of LXR is its use of novel self-supervised counterfactual loss terms, which effectively highlight the user data most responsible for a specific recommended item. Additionally, we propose several innovative counterfactual evaluation metrics specifically tailored for assessing the quality of explanations in recommender systems. Our code is available on our GitHub repository: https://github.com/DeltaLabTLV/LXR.
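To make the counterfactual idea concrete, here is a minimal, self-contained sketch (not the paper's actual implementation): a toy recommender scores a target item against a user's history, each history item is attributed a relevance score via a simple leave-one-out drop (standing in for LXR's learned explainer), and a deletion-style counterfactual evaluation masks the top-attributed items to see how the recommendation score changes. All names (`recommender_score`, the embedding setup) are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 50, 8
item_emb = rng.normal(size=(n_items, dim))  # toy item embeddings

def recommender_score(history_mask, target):
    """Toy differentiable recommender: dot product between the mean
    embedding of the (masked) user history and the target item."""
    if history_mask.sum() == 0:
        return 0.0
    user_vec = (item_emb * history_mask[:, None]).sum(0) / history_mask.sum()
    return float(user_vec @ item_emb[target])

# User history: items 0..9; target: the item the toy model ranks highest.
history = np.zeros(n_items)
history[:10] = 1.0
target = int(np.argmax(item_emb @ (item_emb * history[:, None]).sum(0)))
base = recommender_score(history, target)

# Attribution: score each history item by the counterfactual drop caused
# by removing it alone (a leave-one-out proxy for a learned explainer).
scores = {}
for i in np.flatnonzero(history):
    masked = history.copy()
    masked[i] = 0.0
    scores[i] = base - recommender_score(masked, target)

# Deletion-style counterfactual evaluation: remove history items in
# decreasing attribution order and observe the score after masking.
order = sorted(scores, key=scores.get, reverse=True)
masked = history.copy()
for i in order[:3]:
    masked[i] = 0.0
print(base, recommender_score(masked, target))
```

A good counterfactual explanation concentrates attribution mass on a few items whose removal sharply degrades the recommendation, which is exactly what the deletion-style loop above probes.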
| Original language | English |
|---|---|
| Title of host publication | WWW 2024 - Proceedings of the ACM Web Conference |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 3723-3733 |
| Number of pages | 11 |
| ISBN (Electronic) | 9798400701719 |
| State | Published - 13 May 2024 |
| Event | 33rd ACM Web Conference, WWW 2024 - Singapore, Singapore; Duration: 13 May 2024 → 17 May 2024 |
Publication series
| Name | WWW 2024 - Proceedings of the ACM Web Conference |
|---|---|
Conference
| Conference | 33rd ACM Web Conference, WWW 2024 |
|---|---|
| Country/Territory | Singapore |
| City | Singapore |
| Period | 13/05/24 → 17/05/24 |
Bibliographical note
Publisher Copyright: © 2024 Owner/Author.
Keywords
- attributions
- counterfactual explanations
- explainable ai
- explanation evaluation
- interpretability
- recommender systems