The ability to identify and control the different kinds of linguistic information encoded in vector representations of words has many use cases, especially for explainability and bias removal. This is usually achieved via a set of simple classification tasks, termed probes, that evaluate the information encoded in the embedding space. However, the involvement of a trainable classifier entangles the probe's results with the nature of the classifier itself. As a result, contemporary work on probing includes tasks that do not involve training auxiliary models. In this work we introduce the term indicator tasks for non-trainable tasks that query embedding spaces for the existence of certain properties, and we claim that such tasks may point in the opposite direction from probes, a contradiction that complicates deciding whether a property exists in an embedding space. We demonstrate our claims with two test cases, one concerning gender debiasing and the other the erasure of morphological information from embedding spaces. We show that applying a suitable indicator provides a more accurate picture of the information captured and removed than probes do. We thus conclude that indicator tasks should be implemented and taken into consideration when eliciting information from embedded representations.
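To make the contrast concrete, the sketch below applies a non-trainable indicator of the kind the abstract describes: projecting embeddings onto a bias direction and measuring the magnitude, with no auxiliary classifier involved. This is a minimal illustration using synthetic random vectors and a hard-debias-style projection removal; the variable names and data are hypothetical stand-ins, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 50-dimensional word embeddings (random stand-ins for real vectors).
he = rng.normal(size=50)
she = rng.normal(size=50)
occupations = rng.normal(size=(10, 50))  # e.g. vectors for occupation words

# Bias direction from a definitional pair, normalized to unit length.
g = he - she
g /= np.linalg.norm(g)

def indicator(vecs, direction):
    """Non-trainable indicator: magnitude of each vector's projection onto the direction."""
    return np.abs(vecs @ direction)

before = indicator(occupations, g)

# Debias by removing each vector's component along the bias direction.
debiased = occupations - np.outer(occupations @ g, g)
after = indicator(debiased, g)

# The indicator reports nonzero bias before removal and ~0 after.
print(before.mean(), after.mean())
```

Because the indicator is a fixed geometric query, its verdict depends only on the embedding space itself, whereas a probe's verdict also reflects whatever the trained classifier managed to learn.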
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2023
Publisher: Association for Computational Linguistics (ACL)
Published: 2023
Event: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, Singapore, 6 Dec 2023 – 10 Dec 2023
Bibliographical note: Publisher Copyright © 2023 Association for Computational Linguistics.