Fundamental Limits and Tradeoffs in Invariant Representation Learning
Many machine learning applications, e.g., privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization, involve learning so-called invariant representations that pursue two competing goals: to maximize information or accuracy with respect to a target while simultaneously maximizing invariance or independence with respect to a set of protected features (e.g., for fairness or privacy). Despite abundant applications in the aforementioned domains, theoretical understanding of the limits and tradeoffs of invariant representations is still severely lacking. In this paper, we provide an information-theoretic analysis of this general and important problem under both classification and regression settings. In both cases, we analyze the inherent tradeoff between accuracy and invariance by providing a geometric characterization of the feasible region in the information plane, and we connect the geometric properties of this feasible region to the fundamental limitations of the tradeoff problem. In the regression setting, we further give a complete and exact characterization of the frontier between accuracy and invariance. Although our contributions are mainly theoretical, we also demonstrate the practical applications of our results in certifying the suboptimality of certain representation learning algorithms in both classification and regression tasks. Our results shed new light on this fundamental problem by providing insights into the interplay between accuracy and invariance, and may be useful in guiding the design of future representation learning algorithms.
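To make the accuracy–invariance tradeoff in the information plane concrete, here is a minimal sketch (not from the paper): a toy Markov chain A → Y → Z, where A is a protected attribute, Y a target, and Z a representation obtained by passing Y through a binary symmetric channel. Sweeping the channel's flip probability traces out points (I(Z;Y), I(Z;A)), i.e., "accuracy" vs. "leakage". All distributions and parameters below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def mutual_info_bits(joint):
    """Mutual information I(X;Y) in bits from a 2-D joint pmf array."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px * py)[mask])).sum())

def chain_joint(flip_ya=0.1, flip_zy=0.2):
    """Joint pmf over (A, Y, Z) for the toy Markov chain A -> Y -> Z,
    with A uniform and each arrow a binary symmetric channel."""
    p = np.zeros((2, 2, 2))
    for a in range(2):
        for y in range(2):
            for z in range(2):
                p_y = (1 - flip_ya) if y == a else flip_ya
                p_z = (1 - flip_zy) if z == y else flip_zy
                p[a, y, z] = 0.5 * p_y * p_z
    return p

# Sweep the Z-channel noise: more accuracy about Y comes with more leakage of A.
for eps in (0.0, 0.1, 0.3, 0.5):
    p = chain_joint(flip_zy=eps)
    i_zy = mutual_info_bits(p.sum(axis=0))   # I(Z; Y), "accuracy"
    i_za = mutual_info_bits(p.sum(axis=1))   # I(Z; A), "invariance leakage"
    print(f"eps={eps:.1f}  I(Z;Y)={i_zy:.3f}  I(Z;A)={i_za:.3f}")
```

Because A → Y → Z is a Markov chain here, the data-processing inequality forces I(Z;A) ≤ I(Z;Y), so the swept points all lie in one region of the information plane; the paper's feasible-region analysis characterizes such regions in full generality.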
View on arXiv