Out-of-distribution Generalization for Total Variation based Invariant Risk Minimization

Invariant risk minimization (IRM) is an important general machine learning framework that has recently been interpreted as a total variation model (IRM-TV). However, how to improve out-of-distribution (OOD) generalization in the IRM-TV setting remains unsolved. In this paper, we extend IRM-TV to a Lagrangian multiplier model named OOD-TV-IRM. We find that the autonomous TV penalty hyperparameter serves exactly as the Lagrangian multiplier. Hence, OOD-TV-IRM is essentially a primal-dual optimization model, where the primal optimization minimizes the entire invariant risk and the dual optimization strengthens the TV penalty. The objective is to reach a semi-Nash equilibrium that balances the training loss and OOD generalization. We also develop a convergent primal-dual algorithm that facilitates an adversarial learning scheme. Experimental results show that OOD-TV-IRM outperforms IRM-TV in most situations.
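To make the alternating primal-dual scheme concrete, the following PyTorch sketch illustrates one plausible reading of it: a primal step that minimizes the invariant risk plus a TV penalty, and a dual (adversarial) step that updates the multiplier to strengthen that penalty. Everything here is an illustrative assumption rather than the paper's exact algorithm: the toy two-environment data, the TV surrogate over per-environment risks, and the quadratic regularizer on the multiplier are all stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-environment regression data; the paper uses standard IRM-style
# multi-environment benchmarks, so this setup is purely illustrative.
def make_env(n, slope):
    x = torch.randn(n, 1)
    y = slope * x + 0.1 * torch.randn(n, 1)
    return x, y

envs = [make_env(256, 1.0), make_env(256, 0.8)]

phi = nn.Linear(1, 1)                          # primal variable: the predictor
log_lam = torch.zeros(1, requires_grad=True)   # dual variable: log of the TV multiplier

opt_phi = torch.optim.SGD(phi.parameters(), lr=1e-2)
opt_lam = torch.optim.SGD([log_lam], lr=1e-2)

def env_risks(phi):
    return torch.stack([nn.functional.mse_loss(phi(x), y) for x, y in envs])

def tv_term(r):
    # A simple total-variation surrogate: variation of the risk across environments.
    return (r[1:] - r[:-1]).abs().sum()

for step in range(500):
    # Primal step: minimize risk + lambda * TV penalty over the predictor,
    # holding the multiplier fixed.
    r = env_risks(phi)
    primal_loss = r.mean() + log_lam.exp().detach() * tv_term(r)
    opt_phi.zero_grad()
    primal_loss.backward()
    opt_phi.step()

    # Dual (adversarial) step: raise the multiplier where the TV term is large,
    # strengthening the penalty. The quadratic term is an illustrative regularizer
    # that keeps lambda bounded; it is not the paper's exact dual objective.
    with torch.no_grad():
        tv_val = tv_term(env_risks(phi))
    lam = log_lam.exp()
    dual_loss = -lam * tv_val + 0.5 * lam ** 2
    opt_lam.zero_grad()
    dual_loss.backward()
    opt_lam.step()
```

Under these assumptions, the two updates play the minimax game the abstract describes: the predictor lowers the penalized risk while the multiplier adapts to keep the TV penalty binding, which is the adversarial mechanism intended to drive the pair toward the semi-Nash equilibrium.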
@article{wang2025_2502.19665,
  title={Out-of-distribution Generalization for Total Variation based Invariant Risk Minimization},
  author={Yuanchao Wang and Zhao-Rong Lai and Tianqi Zhong},
  journal={arXiv preprint arXiv:2502.19665},
  year={2025}
}