We study the problem of recognizing visual entities from the textual descriptions of their classes. Specifically, given images of birds with free-text descriptions of their species, we learn to classify images of previously unseen species based on species descriptions. This setup has been studied in the vision community under the name zero-shot learning from text, focusing on learning to transfer knowledge about the visual aspects of birds from seen classes to previously unseen ones. Here, we suggest focusing on the textual description and distilling from it the most relevant information, so as to effectively match visual features to the parts of the text that discuss them. Specifically, (1) we propose to leverage the similarity between species, as reflected in the similarity between their text descriptions; (2) we derive visual summaries of the texts, i.e., extractive summaries that focus on the visual features that tend to be reflected in images. We propose a simple attention-based model augmented with these similarity and visual-summary components. Our empirical results consistently and significantly outperform the state of the art on the largest benchmarks for text-based zero-shot learning, illustrating the critical importance of texts for zero-shot image recognition.
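The core matching step described above — scoring an image against each unseen class by attending over the sentences of that class's description — can be sketched in a minimal form. This is an illustrative sketch, not the paper's implementation: the function names (`class_score`, `zero_shot_classify`) and the use of plain dot-product attention over precomputed sentence embeddings are assumptions for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def class_score(img_feat, sent_embs):
    """Score one class description against an image feature.

    sent_embs: (num_sentences, dim) embeddings of the description's
    sentences; img_feat: (dim,) image feature. Attention weights each
    sentence by its affinity to the image, then the attended summary
    is compared back to the image (hypothetical scoring choice).
    """
    attn = softmax(sent_embs @ img_feat)   # one weight per sentence
    class_rep = attn @ sent_embs           # attended text summary
    return float(img_feat @ class_rep)

def zero_shot_classify(img_feat, class_descs):
    """Pick the unseen class whose description best matches the image."""
    scores = [class_score(img_feat, s) for s in class_descs]
    return int(np.argmax(scores))
```

In this reading, the paper's visual summaries would shrink `sent_embs` to the visually relevant sentences before scoring, and inter-class textual similarity would further regularize the scores.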
|Title of host publication||Findings of the Association for Computational Linguistics|
|Subtitle of host publication||EMNLP 2020|
|Publisher||Association for Computational Linguistics (ACL)|
|Number of pages||11|
|State||Published - 2020|
|Event||Findings of the Association for Computational Linguistics: EMNLP 2020 - Virtual, Online|
Duration: 16 Nov 2020 → 20 Nov 2020
|Name||Findings of the Association for Computational Linguistics: EMNLP 2020|
|Conference||Findings of the Association for Computational Linguistics: EMNLP 2020|
|Period||16/11/20 → 20/11/20|
Bibliographical note: Funding Information:
The research of the first and last author is funded by the European Research Council (ERC grant #677352) and the Israel Science Foundation (ISF grant #1739/26), and the research of the third author is also funded by the Israel Science Foundation (ISF grant #737/2018), for which we are grateful.
© 2020 Association for Computational Linguistics