Abstract
We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the “traditional” masked-language task. We focus on downstream tasks of learning similarities for recommendations, where we show that MetricBERT outperforms state-of-the-art alternatives, sometimes by a substantial margin. We conduct extensive evaluations of our method and its different variants, showing that our training objective is highly beneficial over a traditional contrastive loss, a standard cosine similarity objective, and six other baselines. As an additional contribution, we publish a dataset of video game descriptions along with a test set of similarity annotations crafted by a domain expert.
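The abstract describes jointly optimizing a similarity-metric objective alongside the masked-language task. The paper's exact loss is not given here, so the following is only an illustrative sketch of one common way to combine an MLM loss with a margin-based metric loss over text embeddings; the function names, the triplet formulation, and the balancing weight `alpha` are all assumptions for illustration, not the authors' published method.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_metric_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss pushing the anchor closer (in cosine similarity)
    to the positive example than to the negative one."""
    return max(0.0, margin - cosine_sim(anchor, positive) + cosine_sim(anchor, negative))

def combined_loss(mlm_loss, anchor, positive, negative, alpha=0.5):
    """Weighted sum of a masked-language-model loss and the metric loss.
    `alpha` is a hypothetical balancing hyperparameter."""
    return alpha * mlm_loss + (1.0 - alpha) * triplet_metric_loss(anchor, positive, negative)
```

In practice the anchor/positive/negative vectors would be BERT sentence embeddings of, e.g., descriptions of similar and dissimilar items, and both loss terms would be backpropagated through the shared encoder; this sketch only shows the shape of such a combined objective.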
| Original language | English |
|---|---|
| Title of host publication | 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 8142-8146 |
| Number of pages | 5 |
| ISBN (Electronic) | 9781665405409 |
| State | Published - 2022 |
| Event | 2022 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022 - Hybrid, Singapore; Duration: 22 May 2022 → 27 May 2022 |
Publication series
| Name | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
|---|---|
| Volume | 2022-May |
| ISSN (Print) | 1520-6149 |
Conference
| Conference | 2022 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022 |
|---|---|
| Country/Territory | Singapore |
| City | Hybrid |
| Period | 22/05/22 → 27/05/22 |
Bibliographical note
Publisher Copyright: © 2022 IEEE