Abstract
We introduce a new measure to evaluate the transferability of representations learned by classifiers. Our measure, the Log Expected Empirical Prediction (LEEP), is simple and easy to compute: when given a classifier trained on a source data set, it only requires running the target data set through this classifier once. We analyze the properties of LEEP theoretically and demonstrate its effectiveness empirically. Our analysis shows that LEEP can predict the performance and convergence speed of both transfer and meta-transfer learning methods, even for small or imbalanced data. Moreover, LEEP outperforms recently proposed transferability measures such as negative conditional entropy and H scores. Notably, when transferring from ImageNet to CIFAR100, LEEP can achieve up to 30% improvement compared to the best competing method in terms of the correlations with actual transfer accuracy.
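The abstract notes that LEEP needs only a single forward pass of the target data through the source classifier. A minimal NumPy sketch of that computation, following the paper's definition of the Log Expected Empirical Prediction; the `leep` function name and the toy numbers below are illustrative, not from the paper:

```python
import numpy as np

def leep(source_probs, target_labels):
    """Compute the LEEP score.

    source_probs: (n, Z) array of source-classifier softmax outputs for the
                  n target examples (the single forward pass over target data).
    target_labels: (n,) int array of target labels in {0, ..., Y-1}.
    """
    n = source_probs.shape[0]
    # Empirical joint distribution P(y, z) over target label y, source label z.
    joint = np.zeros((target_labels.max() + 1, source_probs.shape[1]))
    for theta, y in zip(source_probs, target_labels):
        joint[y] += theta / n
    # Conditional P(y | z) = P(y, z) / P(z).
    cond = joint / joint.sum(axis=0, keepdims=True)
    # Empirical predictor: p(y | x) = sum_z P(y | z) * theta(x)_z, for all y.
    pred = source_probs @ cond.T                       # shape (n, Y)
    # LEEP = average log-likelihood of the true target label under it.
    return np.log(pred[np.arange(n), target_labels]).mean()

# Toy example (hypothetical numbers): 4 target examples, 2 source classes.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.8, 0.2], [0.1, 0.9]])
labels = np.array([0, 1, 0, 1])
score = leep(probs, labels)
```

The score is always non-positive (it is an average log-probability); higher values are read as better transferability of the source representation to the target task.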
Original language | English |
---|---|
Host publication title | 37th International Conference on Machine Learning, ICML 2020 |
Editors | Hal Daumé III, Aarti Singh |
Publisher | International Machine Learning Society (IMLS) |
Pages | 7250-7261 |
Number of pages | 12 |
ISBN (electronic) | 9781713821120 |
Publication status | Published - 2020 |
Externally published | Yes |
Event | 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online; Duration: 13 July 2020 → 18 July 2020 |
Publication series

Name | 37th International Conference on Machine Learning, ICML 2020 |
---|---|
Volume | PartF168147-10 |
Conference

Conference | 37th International Conference on Machine Learning, ICML 2020 |
---|---|
City | Virtual, Online |
Duration | 13/07/20 → 18/07/20 |
Bibliographical note
Publisher Copyright: © 2020 37th International Conference on Machine Learning, ICML 2020. All rights reserved.