Interpreting BERT-based Text Similarity via Activation and Saliency Maps

Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful text embeddings, with applications such as text similarity. Despite significant progress in the field, explaining similarity predictions remains challenging, especially in unsupervised settings. In this work, we present an unsupervised technique for explaining paragraph similarities inferred by pre-trained BERT models. Given a pair of paragraphs, our technique identifies the important words that dictate each paragraph's semantics, matches words between the two paragraphs, and retrieves the most important pairs that explain the similarity between them. The method, assessed by extensive human evaluations and demonstrated on datasets comprising long and complex paragraphs, has shown great promise, providing accurate interpretations that correlate better with human perceptions.
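
The pipeline described in the abstract can be approximated with off-the-shelf tools. Below is a minimal Python sketch, not the authors' implementation: it scores token importance by the gradient of the paragraph-pair cosine similarity with respect to BERT's input embeddings, then matches the most salient tokens across the two paragraphs by contextual-embedding similarity. The model choice, mean pooling, `top_k`, the example sentences, and all function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_with_grad(text):
    # Tokenize and fetch the input embeddings explicitly so we can take
    # gradients with respect to them (the saliency signal).
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    embeds = model.get_input_embeddings()(batch["input_ids"])
    embeds = embeds.detach().requires_grad_(True)
    return batch, embeds

def explain_similarity(text_a, text_b, top_k=3):
    batch_a, emb_a = embed_with_grad(text_a)
    batch_b, emb_b = embed_with_grad(text_b)
    out_a = model(inputs_embeds=emb_a,
                  attention_mask=batch_a["attention_mask"]).last_hidden_state
    out_b = model(inputs_embeds=emb_b,
                  attention_mask=batch_b["attention_mask"]).last_hidden_state

    # The prediction being explained: cosine similarity of mean-pooled
    # paragraph embeddings.
    sim = F.cosine_similarity(out_a.mean(dim=1), out_b.mean(dim=1)).sum()
    sim.backward()

    # Token saliency: gradient magnitude at the input embeddings.
    # (A fuller version would also filter special tokens like [CLS]/[SEP].)
    sal_a = emb_a.grad.norm(dim=-1).squeeze(0)
    sal_b = emb_b.grad.norm(dim=-1).squeeze(0)
    idx_a = sal_a.topk(top_k).indices
    idx_b = sal_b.topk(top_k).indices

    # Match each salient token in A to its closest salient token in B
    # using the contextual embeddings, and keep the matched pairs.
    ctx_a, ctx_b = out_a.detach()[0], out_b.detach()[0]
    pairs = []
    for i in idx_a:
        sims = F.cosine_similarity(ctx_a[i].unsqueeze(0), ctx_b[idx_b])
        j = idx_b[sims.argmax()]
        pairs.append((
            tokenizer.convert_ids_to_tokens(int(batch_a["input_ids"][0, i])),
            tokenizer.convert_ids_to_tokens(int(batch_b["input_ids"][0, j])),
            float(sims.max()),
        ))
    return float(sim), pairs

score, pairs = explain_similarity(
    "The central bank raised interest rates to curb inflation.",
    "Monetary policy tightened as the Fed lifted its benchmark rate.",
)
print(f"similarity={score:.3f}")
for tok_a, tok_b, s in pairs:
    print(f"{tok_a} <-> {tok_b} ({s:.2f})")
```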

Original language: English
Title of host publication: WWW 2022 - Proceedings of the ACM Web Conference 2022
Publisher: Association for Computing Machinery, Inc
Pages: 3259-3268
Number of pages: 10
ISBN (Electronic): 9781450390965
DOIs
State: Published - 25 Apr 2022
Event: 31st ACM World Wide Web Conference, WWW 2022 - Virtual, Online, France
Duration: 25 Apr 2022 - 29 Apr 2022

Publication series

Name: WWW 2022 - Proceedings of the ACM Web Conference 2022

Conference

Conference: 31st ACM World Wide Web Conference, WWW 2022
Country/Territory: France
City: Virtual, Online
Period: 25/04/22 - 29/04/22

Bibliographical note

Funding Information:
The work is supported by the NSFC for Distinguished Young Scholar (61825602) and Tsinghua-Bosch Joint ML Center.

Publisher Copyright:
© 2022 ACM.

Keywords

  • Attention Models
  • Deep Learning
  • Explainable AI
  • Interpretability
  • Self-supervised
  • Transformers
