Abstract
We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the “traditional” masked-language task. We focus on downstream tasks of learning similarities for recommendations, where we show that MetricBERT outperforms state-of-the-art alternatives, sometimes by a substantial margin. We conduct extensive evaluations of our method and its different variants, showing that our training objective is highly beneficial over a traditional contrastive loss, a standard cosine similarity objective, and six other baselines. As an additional contribution, we publish a dataset of video game descriptions along with a test set of similarity annotations crafted by a domain expert.
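For context, the “traditional” contrastive loss that the abstract uses as a baseline (not MetricBERT's own objective, which the paper shows outperforms it) is typically a margin-based hinge over embedding similarities. A minimal pure-Python sketch, with illustrative function names and margin value that are assumptions, not taken from the paper:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def contrastive_hinge_loss(anchor, positive, negative, margin=0.5):
    """Margin-based contrastive baseline: the anchor embedding should be
    more similar to the positive text than to the negative text by at
    least `margin`; otherwise the shortfall is the loss."""
    return max(0.0, margin - cosine_sim(anchor, positive) + cosine_sim(anchor, negative))

# Toy 2-D embeddings: anchor equals the positive and is orthogonal to the negative,
# so the margin is already satisfied and the loss is zero.
anchor, pos, neg = [1.0, 0.0], [1.0, 0.0], [0.0, 1.0]
print(contrastive_hinge_loss(anchor, pos, neg))  # 0.0
```

In a real training setup the embeddings would come from the BERT encoder, and this similarity term would be combined with the masked-language-modeling loss; the sketch only illustrates the baseline similarity objective itself.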
Original language | English |
---|---|
Host publication title | 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 8142-8146 |
Number of pages | 5 |
ISBN (electronic) | 9781665405409 |
DOIs | |
Publication status | Published - 2022 |
Event | 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Virtual, Online, Singapore. Duration: 23 May 2022 → 27 May 2022 |
Publication series
Name | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
---|---|
Volume | 2022-May |
ISSN (print) | 1520-6149 |
Conference
Conference | 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 |
---|---|
Country/Territory | Singapore |
City | Virtual, Online |
Period | 23/05/22 → 27/05/22 |
Bibliographical note
Publisher Copyright: © 2022 IEEE