
Pareto-optimal data compression for binary classification tasks

Abstract

The goal of lossy data compression is to reduce the storage cost of a data set $X$ while retaining as much information as possible about something ($Y$) that you care about. For example, what aspects of an image $X$ contain the most information about whether it depicts a cat? Mathematically, this corresponds to finding a mapping $X\to Z\equiv f(X)$ that maximizes the mutual information $I(Z,Y)$ while the entropy $H(Z)$ is kept below some fixed threshold. We present a method for mapping out the Pareto frontier for classification tasks, reflecting the tradeoff between retained entropy and class information. We first show how a random variable $X$ (an image, say) drawn from a class $Y\in\{1,...,n\}$ can be distilled into a vector $W=f(X)\in\mathbb{R}^{n-1}$ losslessly, so that $I(W,Y)=I(X,Y)$; for example, for a binary classification task of cats and dogs, each image $X$ is mapped into a single real number $W$ retaining all information that helps distinguish cats from dogs. For the $n=2$ case of binary classification, we then show how $W$ can be further compressed into a discrete variable $Z=g_\beta(W)\in\{1,...,m_\beta\}$ by binning $W$ into $m_\beta$ bins, in such a way that varying the parameter $\beta$ sweeps out the full Pareto frontier, solving a generalization of the Discrete Information Bottleneck (DIB) problem. We argue that the most interesting points on this frontier are "corners" maximizing $I(Z,Y)$ for a fixed number of bins $m=2,3,...$, which can conveniently be found without multiobjective optimization. We apply this method to the CIFAR-10, MNIST and Fashion-MNIST datasets, illustrating how it can be interpreted as an information-theoretically optimal image clustering algorithm.
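To make the two-step procedure concrete, the sketch below illustrates the binary ($n=2$) case: each image $X$ is assumed to have already been distilled into a scalar $W=f(X)$ (for instance, a trained classifier's estimate of $P(Y=1\mid X)$, used here only as an illustrative stand-in), which is then binned into $m$ bins to give a discrete $Z$, and the resulting $H(Z)$ and $I(Z,Y)$ are estimated from samples. The quantile binning, the toy data, and the function names are assumptions for illustration, not the paper's $\beta$-parameterized binning $g_\beta$.

```python
# Minimal sketch (not the authors' code): compress a scalar representation W
# into m bins and measure the entropy/information tradeoff H(Z) vs. I(Z;Y).
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def compress_and_score(W, Y, m):
    """Bin the scalar W into m bins (Z = g(W)) and return (H(Z), I(Z;Y))."""
    # Equal-probability (quantile) bin edges -- one simple choice of binning.
    edges = np.quantile(W, np.linspace(0, 1, m + 1)[1:-1])
    Z = np.digitize(W, edges)            # Z in {0, ..., m-1}

    # Joint distribution P(Z, Y) estimated from the sample.
    joint = np.zeros((m, 2))
    for z, y in zip(Z, Y):
        joint[z, int(y)] += 1
    joint /= joint.sum()

    h_z = entropy(joint.sum(axis=1))
    h_y = entropy(joint.sum(axis=0))
    # I(Z;Y) = H(Z) + H(Y) - H(Z,Y)
    i_zy = h_z + h_y - entropy(joint.ravel())
    return h_z, i_zy

# Toy usage: W plays the role of P(Y=1|X) from some upstream classifier.
rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=10_000)
W = np.clip(0.5 + 0.3 * (2 * Y - 1) + 0.2 * rng.standard_normal(Y.size), 0, 1)
for m in (2, 3, 4, 8):
    h_z, i_zy = compress_and_score(W, Y, m)
    print(f"m={m}: H(Z)={h_z:.3f} bits, I(Z;Y)={i_zy:.3f} bits")
```

Sweeping $m$ in this way corresponds to visiting the "corner" points mentioned above: for each fixed number of bins, one asks how much class information can be retained, without solving a full multiobjective optimization.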
