Pathologies in information bottleneck for deterministic supervised learning
Information bottleneck (IB) is a method for extracting information from one random variable $X$ that is relevant for predicting another random variable $Y$. To do so, IB identifies an intermediate "bottleneck" variable $T$ that has low mutual information $I(X;T)$ with the input and high mutual information $I(Y;T)$ with the output. The "IB curve" characterizes the set of bottleneck variables that achieve maximal $I(Y;T)$ for a given $I(X;T)$, and is typically explored by optimizing the "IB Lagrangian", $\mathcal{L}_{\mathrm{IB}} = I(Y;T) - \beta I(X;T)$. Recently, there has been interest in applying IB to supervised learning, particularly for classification problems that use neural networks. In most classification problems, the output class $Y$ is a deterministic function of the input $X$, a setting we refer to as "deterministic supervised learning". We demonstrate three pathologies that arise whenever IB is used in a scenario where $Y$ is a deterministic function of $X$: (1) the IB curve cannot be recovered by optimizing the IB Lagrangian for different values of $\beta$; (2) there are "uninteresting" solutions at all points of the IB curve; and (3) for classifiers that achieve low error rates, the activity of different hidden layers will not exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. We finish by demonstrating these issues on the MNIST dataset.
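For concreteness, below is a minimal numerical sketch of the quantities named above, for a discrete encoder $q(t|x)$ on a toy problem where $Y$ is a deterministic function of $X$. The helper names and the toy setup are illustrative assumptions, not code from the paper; likewise, the `squared_ib` variant is only a stand-in for the proposed alternative functional, whose exact form the abstract does not state.

```python
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in bits for a discrete joint distribution p_joint[a, b]."""
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    nz = p_joint > 0
    return float(np.sum(p_joint[nz] * np.log2(p_joint[nz] / (p_a @ p_b)[nz])))

def ib_lagrangian(p_xy, q_t_given_x, beta):
    """IB Lagrangian I(Y;T) - beta * I(X;T) for an encoder q(t|x)."""
    p_x = p_xy.sum(axis=1)
    p_xt = q_t_given_x * p_x[:, None]   # joint p(x, t)
    p_ty = q_t_given_x.T @ p_xy         # joint p(t, y)
    return mutual_information(p_ty) - beta * mutual_information(p_xt)

def squared_ib(p_xy, q_t_given_x, beta):
    """Illustrative alternative that penalizes I(X;T)^2; an assumption,
    since the abstract does not give the paper's actual functional."""
    p_x = p_xy.sum(axis=1)
    p_xt = q_t_given_x * p_x[:, None]
    p_ty = q_t_given_x.T @ p_xy
    return mutual_information(p_ty) - beta * mutual_information(p_xt) ** 2

# Toy deterministic problem: Y = f(X) with 4 inputs mapped onto 2 classes.
p_xy = np.zeros((4, 2))
p_xy[[0, 1], 0] = 0.25   # x in {0, 1} -> y = 0
p_xy[[2, 3], 1] = 0.25   # x in {2, 3} -> y = 1

identity = np.eye(4)        # T = X: full prediction, no compression
constant = np.ones((4, 1))  # constant T: zero prediction, zero compression
for name, q in [("T = X", identity), ("T = const", constant)]:
    print(name, ib_lagrangian(p_xy, q, beta=1.0))
```

In this toy example, at $\beta = 1$ the constant encoder scores 0 bits while the identity encoder scores $1 - 2 = -1$ bits, so the trivial encoder maximizes the Lagrangian, a small illustration of how degenerate solutions can dominate when $Y$ is deterministic in $X$.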