Unifying distillation and privileged information

Abstract

We describe generalized distillation, a framework for learning from multiple representations in a semi-supervised fashion. We show that distillation (Hinton et al., 2015) and learning using privileged information (Vapnik & Izmailov, 2015) are particular instances of generalized distillation, provide insight into why and when generalized distillation works, and present numerical simulations to assess its effectiveness.
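As background for the two frameworks the abstract unifies, the distillation ingredient (Hinton et al., 2015) trains a student on a mix of hard labels and the teacher's temperature-softened outputs. The sketch below is an illustrative NumPy implementation of that standard loss, not code from this paper; the temperature `T` and mixing weight `alpha` are the usual hyperparameters.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-target
    cross-entropy against the teacher's temperature-softened outputs."""
    p_student = softmax(student_logits)        # T = 1 for the hard-label term
    p_student_T = softmax(student_logits, T)   # softened student predictions
    p_teacher_T = softmax(teacher_logits, T)   # softened teacher targets
    hard = -np.log(p_student[hard_label] + 1e-12)
    soft = -np.sum(p_teacher_T * np.log(p_student_T + 1e-12))
    return alpha * hard + (1.0 - alpha) * soft
```

In the privileged-information view, the teacher is trained on an extra representation available only at training time; generalized distillation covers both cases by letting the soft targets come from any such teacher.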
