
The Information Bottleneck Problem and Its Applications in Machine Learning

Abstract

Inference capabilities of machine learning (ML) systems have skyrocketed in recent years, and they now play a pivotal role in various aspects of society. The goal in statistical learning is to use data to obtain simple algorithms for predicting a random variable Y from a correlated observation X. Since the dimension of X is typically huge, computationally feasible solutions should summarize it into a lower-dimensional feature vector T, from which Y is predicted. The algorithm will successfully make the prediction if T is a good proxy of Y, despite this dimensionality reduction. A myriad of ML algorithms (mostly employing deep learning (DL)) for finding such representations T from real-world data are now available. While these methods are often effective in practice, a comprehensive theory explaining their success is still lacking. The information bottleneck (IB) theory recently emerged as a bold information-theoretic paradigm for analyzing DL systems. Adopting mutual information as the figure of merit, it suggests that the best representation T should be maximally informative about Y while minimizing its mutual information with X. In this tutorial we survey the information-theoretic origins of this abstract principle and its recent impact on DL. For the latter, we cover implications of the IB problem on DL theory, as well as practical algorithms inspired by it. Our goal is to provide a unified and cohesive description. A clear view of current knowledge is particularly important for further leveraging IB and other information-theoretic ideas to study DL models.
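For concreteness, the trade-off alluded to above is usually stated (in the standard IB formulation of Tishby, Pereira, and Bialek; the trade-off parameter \beta below does not appear in the abstract and is introduced here only for illustration) as the optimization

\[
  \min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y), \qquad \beta \ge 0,
\]

taken over all stochastic mappings p(t|x) of the observation X into the representation T, subject to the Markov chain Y \to X \to T. Larger values of \beta favor representations that retain more information about Y at the cost of less compression of X.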
