FaiRecSys: mitigating algorithmic bias in recommender systems

Bora Edizel, Francesco Bonchi, Sara Hajian, André Panisson, Tamir Tassa

Research output: Contribution to journal › Article › peer-review

Abstract

Recommendation and personalization are useful technologies that increasingly influence our daily decisions. However, as we show empirically in this paper, the bias that exists in the real world and is reflected in the training data can be modeled and amplified by recommender systems, and ultimately returned to users as biased recommendations. This feedback process creates a self-perpetuating loop that progressively strengthens the filter bubbles we live in. Biased recommendations can also reinforce stereotypes, such as those based on gender or ethnicity, possibly resulting in disparate impact. In this paper we address the problem of algorithmic bias in recommender systems. In particular, we highlight the connection between the predictability of sensitive features and bias in the results of recommendations, and we then offer a theoretically founded bound on recommendation bias based on that connection. We then formalize a fairness constraint and the price that one has to pay, in terms of alterations in the recommendation matrix, in order to achieve fair recommendations. Finally, we propose FaiRecSys—an algorithm that mitigates algorithmic bias by post-processing the recommendation matrix with minimum impact on the utility of recommendations provided to the end-users.
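The abstract's central idea — that recommendations are biased when a sensitive attribute can be predicted from them — can be illustrated with a toy sketch. The following is not the authors' FaiRecSys algorithm; it is a minimal, hypothetical example using made-up data and a trivial single-item classifier, showing how predictability of a binary "group" attribute from recommendation sets can be measured and lowered by altering a single matrix entry:

```python
# Illustrative sketch only -- NOT the FaiRecSys algorithm from the paper.
# It demonstrates the core connection the abstract describes: if a sensitive
# attribute can be predicted from a user's recommendations, the
# recommendations expose (and can amplify) bias.

def predictability(recs, groups):
    """Accuracy of the best single-item rule predicting `groups` from `recs`.

    recs:   list of sets; recs[u] is the set of items recommended to user u
    groups: list of 0/1 sensitive-attribute values, one per user
    """
    n = len(recs)
    items = set().union(*recs)
    best = max(sum(groups), n - sum(groups)) / n  # majority-class baseline
    for item in items:
        # rule: predict group 1 iff `item` is recommended to the user
        acc = sum((item in recs[u]) == bool(groups[u]) for u in range(n)) / n
        best = max(best, acc, 1 - acc)
    return best

# Toy data: item "A" is recommended only to group-1 users -> fully predictive.
recs = [{"A", "B"}, {"A"}, {"B"}, {"C"}]
groups = [1, 1, 0, 0]
print(predictability(recs, groups))  # 1.0: the group is fully exposed

# Post-processing in the spirit of the paper: alter a minimal number of
# entries in the recommendation matrix to lower predictability.
recs[2].add("A")
print(predictability(recs, groups))  # 0.75: one altered entry lowers exposure
```

The paper's actual contribution is a principled version of this trade-off: a theoretical bound tying predictability to bias, and a post-processing step that minimizes the utility lost by such alterations.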

Original language: English
Pages (from-to): 197-213
Number of pages: 17
Journal: International Journal of Data Science and Analytics
Volume: 9
Issue number: 2
DOIs
State: Published - 1 Mar 2020

Bibliographical note

Publisher Copyright:
© 2019, Springer Nature Switzerland AG.

Keywords

  • Algorithmic bias
  • Fairness
  • Privacy
  • Recommender systems
