LEEP: A New Measure to Evaluate Transferability of Learned Representations

  • Cuong V. Nguyen
  • Tal Hassner
  • Matthias Seeger
  • Cedric Archambeau

Research output: Contribution to journal › Conference article › peer-review

Abstract

We introduce a new measure to evaluate the transferability of representations learned by classifiers. Our measure, the Log Expected Empirical Prediction (LEEP), is simple and easy to compute: given a classifier trained on a source data set, it only requires running the target data set through this classifier once. We analyze the properties of LEEP theoretically and demonstrate its effectiveness empirically. Our analysis shows that LEEP can predict the performance and convergence speed of both transfer and meta-transfer learning methods, even for small or imbalanced data. Moreover, LEEP outperforms recently proposed transferability measures such as negative conditional entropy and H scores. Notably, when transferring from ImageNet to CIFAR100, LEEP achieves up to a 30% improvement over the best competing method in terms of correlation with actual transfer accuracy.
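Per the paper's definition, LEEP is the average log-likelihood of the "expected empirical predictor": the source model's predicted distribution over source labels is combined with the empirical conditional distribution of target labels given source labels, estimated from a single pass over the target set. The NumPy sketch below illustrates this computation; the function name `leep` and the inputs `source_probs` (the source model's softmax outputs on the target examples) and `target_labels` are illustrative names, not from the paper's code.

```python
import numpy as np

def leep(source_probs: np.ndarray, target_labels: np.ndarray) -> float:
    """Illustrative LEEP computation.

    source_probs: (n, |Z|) array; row i is the source model's predicted
        distribution theta(x_i) over source labels z for target example x_i.
    target_labels: (n,) array of integer target labels y_i in {0, ..., |Y|-1}.
    """
    n = source_probs.shape[0]
    num_target_classes = int(target_labels.max()) + 1

    # Empirical joint P_hat(y, z): average the source-label probabilities
    # over the target examples carrying label y.
    joint = np.zeros((num_target_classes, source_probs.shape[1]))
    for y in range(num_target_classes):
        joint[y] = source_probs[target_labels == y].sum(axis=0) / n

    # Empirical conditional P_hat(y | z) = P_hat(y, z) / P_hat(z).
    # Softmax outputs are strictly positive, so P_hat(z) > 0 here.
    marginal_z = joint.sum(axis=0, keepdims=True)
    conditional = joint / marginal_z

    # Expected empirical prediction for each example:
    # sum_z P_hat(y_i | z) * theta(x_i)_z; LEEP is the mean of its log.
    eep = (source_probs @ conditional.T)[np.arange(n), target_labels]
    return float(np.log(eep).mean())
```

A higher LEEP score indicates a source model whose representations are expected to transfer better to the target task.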

Original language: English
Pages (from-to): 7294-7305
Number of pages: 12
Journal: Proceedings of Machine Learning Research
Volume: 119
State: Published - 2020
Externally published: Yes
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: 13 Jul 2020 - 18 Jul 2020

Bibliographical note

Publisher Copyright:
© 2020 by the author(s).
