

Pareto-optimal data compression for binary classification tasks

23 August 2019
Max Tegmark, Tailin Wu
arXiv: 1908.08961
Abstract

The goal of lossy data compression is to reduce the storage cost of a data set $X$ while retaining as much information as possible about something ($Y$) that you care about. For example, what aspects of an image $X$ contain the most information about whether it depicts a cat? Mathematically, this corresponds to finding a mapping $X\to Z\equiv f(X)$ that maximizes the mutual information $I(Z,Y)$ while the entropy $H(Z)$ is kept below some fixed threshold. We present a method for mapping out the Pareto frontier for classification tasks, reflecting the tradeoff between retained entropy and class information. We first show how a random variable $X$ (an image, say) drawn from a class $Y\in\{1,...,n\}$ can be distilled into a vector $W=f(X)\in\mathbb{R}^{n-1}$ losslessly, so that $I(W,Y)=I(X,Y)$; for example, for a binary classification task of cats and dogs, each image $X$ is mapped into a single real number $W$ retaining all information that helps distinguish cats from dogs. For the $n=2$ case of binary classification, we then show how $W$ can be further compressed into a discrete variable $Z=g_\beta(W)\in\{1,...,m_\beta\}$ by binning $W$ into $m_\beta$ bins, in such a way that varying the parameter $\beta$ sweeps out the full Pareto frontier, solving a generalization of the Discrete Information Bottleneck (DIB) problem. We argue that the most interesting points on this frontier are "corners" maximizing $I(Z,Y)$ for a fixed number of bins $m=2,3,\ldots$, which can conveniently be found without multiobjective optimization. We apply this method to the CIFAR-10, MNIST and Fashion-MNIST datasets, illustrating how it can be interpreted as an information-theoretically optimal image clustering algorithm.
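
The binning step described in the abstract is easy to illustrate. The sketch below is not the authors' implementation: it uses a toy synthetic score in place of the learned statistic $W=f(X)$, and simple equal-mass (quantile) binning where the paper optimizes the bin boundaries themselves via $\beta$; the helper names (`bin_w`, `mutual_information`) are illustrative. It shows how compressing $W$ into $m$ bins bounds $H(Z)\le\log_2 m$ while a plug-in histogram estimate of $I(Z;Y)$ measures the retained class information.

```python
import numpy as np

def mutual_information(z, y, m, n=2):
    """Estimate I(Z;Y) in bits from paired samples of discrete
    variables z in {0,...,m-1} and y in {0,...,n-1}."""
    joint = np.zeros((m, n))
    np.add.at(joint, (z, y), 1.0)          # joint histogram of (Z, Y)
    joint /= joint.sum()
    pz = joint.sum(axis=1, keepdims=True)  # marginal P(Z)
    py = joint.sum(axis=0, keepdims=True)  # marginal P(Y)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pz @ py)[mask])).sum())

def bin_w(w, m):
    """Compress the scalar statistic W into Z in {0,...,m-1} by
    equal-mass (quantile) binning -- one simple binning choice,
    not the paper's optimized bin boundaries."""
    edges = np.quantile(w, np.linspace(0, 1, m + 1)[1:-1])
    return np.digitize(w, edges)

# Toy binary task: w plays the role of f(X), a noisy score for class y.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)
w = y + rng.normal(scale=0.8, size=y.size)

for m in (2, 3, 4, 8):
    z = bin_w(w, m)
    print(f"m={m}: H(Z) <= {np.log2(m):.2f} bits, "
          f"I(Z;Y) ~ {mutual_information(z, y, m):.3f} bits")
```

Increasing $m$ relaxes the entropy budget $H(Z)\le\log_2 m$, and the estimated $I(Z;Y)$ climbs toward $I(W,Y)$; the best achievable $I(Z,Y)$ at each fixed $m$ corresponds to the "corners" of the Pareto frontier discussed above.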
