
Robust Invariant Representation Learning by Distribution Extrapolation

Main: 12 pages · Bibliography: 4 pages · Appendix: 3 pages · 7 figures · 7 tables
Abstract

Invariant risk minimization (IRM) aims to enable out-of-distribution (OOD) generalization in deep learning by learning invariant representations. As IRM poses an inherently challenging bi-level optimization problem, most existing approaches -- including IRMv1 -- adopt penalty-based single-level approximations. However, empirical studies consistently show that these methods often fail to outperform well-tuned empirical risk minimization (ERM), highlighting the need for more robust IRM implementations. This work theoretically identifies a key limitation common to many IRM variants: their penalty terms are highly sensitive to limited environment diversity and over-parameterization, resulting in performance degradation. To address this issue, a novel extrapolation-based framework is proposed that enhances environmental diversity by augmenting the IRM penalty through synthetic distributional shifts. Extensive experiments -- ranging from synthetic setups to realistic, over-parameterized scenarios -- demonstrate that the proposed method consistently outperforms state-of-the-art IRM variants, validating its effectiveness and robustness.
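To make the penalty discussed in the abstract concrete, here is a minimal toy sketch of the IRMv1-style gradient penalty that the paper critiques (this is not the authors' proposed extrapolation method). Assumptions not taken from the paper: a scalar representation output per sample, squared loss, and a scalar "dummy classifier" w, so the penalty (dR_e/dw at w = 1)² can be written in closed form.

```python
# Toy IRMv1-style objective: per-environment risk plus the squared
# gradient of that risk w.r.t. a scalar dummy classifier w at w = 1.
# Scalar representation + squared loss are illustrative assumptions.

def risk_and_penalty(phi_outputs, labels):
    """Return (empirical risk, IRMv1 penalty) for one environment."""
    n = len(labels)
    residuals = [p - y for p, y in zip(phi_outputs, labels)]
    risk = sum(r * r for r in residuals) / n
    # d/dw mean((w*p - y)^2) evaluated at w = 1  ==  mean(2 * (p - y) * p)
    grad_w = sum(2 * r * p for r, p in zip(residuals, phi_outputs)) / n
    return risk, grad_w ** 2

def irmv1_objective(envs, lam):
    """Sum over environments of risk + lam * penalty."""
    total = 0.0
    for phi_outputs, labels in envs:
        risk, pen = risk_and_penalty(phi_outputs, labels)
        total += risk + lam * pen
    return total

# Two hand-made environments with slightly different label noise.
envs = [
    ([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]),
    ([0.5, 1.5, 2.5], [0.4, 1.6, 2.4]),
]
print(irmv1_objective(envs, lam=100.0))
```

With few environments, the penalty can sit near zero for representations that are not truly invariant, which is the sensitivity to limited environment diversity that motivates augmenting the penalty with synthetic distributional shifts.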

@article{yoshida2025_2505.16126,
  title={Robust Invariant Representation Learning by Distribution Extrapolation},
  author={Kotaro Yoshida and Konstantinos Slavakis},
  journal={arXiv preprint arXiv:2505.16126},
  year={2025}
}