Weighted Risk Invariance: Domain Generalization under Invariant Feature Shift

Learning models whose predictions are invariant under multiple environments is a promising approach for out-of-distribution generalization. Such models are trained to extract features where the conditional distribution of the label given the extracted features does not change across environments. Invariant models are also expected to generalize to shifts in the marginal distribution of the extracted features, a type of shift we call an invariant covariate shift. However, we show that proposed methods for learning invariant models underperform under invariant covariate shift, either failing to learn invariant models (even for data generated from simple and well-studied linear-Gaussian models) or having poor finite-sample performance. To alleviate these problems, we propose Weighted Risk Invariance (WRI). Our framework is based on imposing invariance of the loss across environments subject to appropriate reweightings of the training examples. We show that WRI provably learns invariant models, i.e., discards spurious correlations, in linear-Gaussian settings. We propose a practical algorithm to implement WRI by learning the density and the model parameters simultaneously, and we demonstrate empirically that WRI outperforms previous invariant learning methods under invariant covariate shift.
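
The abstract describes the core idea only at a high level: per-environment risks are reweighted by learned feature densities, the reweighted risks are forced to agree across environments, and the densities are learned jointly with the model. Below is a minimal PyTorch sketch of one plausible instantiation of that idea. The pairwise weighting scheme, the squared-difference penalty, the diagonal-Gaussian density model, and all names (`GaussianDensity`, `wri_loss`, `lam`) are illustrative assumptions, not the paper's actual algorithm.

```python
# Sketch of a WRI-style objective: match density-reweighted risks across
# environments while fitting a per-environment density over the extracted
# features. Illustrative only; the exact weighting and penalty used by the
# paper may differ.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianDensity(nn.Module):
    """Diagonal-Gaussian log-density over extracted features (one per env)."""

    def __init__(self, dim):
        super().__init__()
        self.mean = nn.Parameter(torch.zeros(dim))
        self.log_std = nn.Parameter(torch.zeros(dim))

    def log_prob(self, z):
        var = torch.exp(2 * self.log_std)
        return (-0.5 * ((z - self.mean) ** 2 / var
                        + 2 * self.log_std
                        + math.log(2 * math.pi))).sum(-1)


def wri_loss(featurizer, classifier, densities, batches, lam=1.0):
    """batches: list of (x, y) pairs, one per training environment."""
    feats = [featurizer(x) for x, _ in batches]

    # Fit each environment's density to its own features by maximum
    # likelihood; detaching is a design choice here so the density fit
    # does not steer the featurizer directly.
    nll = sum(-densities[e].log_prob(z.detach()).mean()
              for e, z in enumerate(feats))

    # Per-example losses in each environment (binary labels assumed).
    losses = [F.binary_cross_entropy_with_logits(
                  classifier(z).squeeze(-1), y, reduction="none")
              for z, (_, y) in zip(feats, batches)]

    # Pairwise weighted risks: env-e losses reweighted by the density of
    # env-e' features, with a penalty when the pair of risks disagree.
    penalty = 0.0
    num_envs = len(batches)
    for e1 in range(num_envs):
        for e2 in range(e1 + 1, num_envs):
            r12 = (densities[e2].log_prob(feats[e1]).exp() * losses[e1]).mean()
            r21 = (densities[e1].log_prob(feats[e2]).exp() * losses[e2]).mean()
            penalty = penalty + (r12 - r21) ** 2

    erm = sum(loss.mean() for loss in losses)
    return erm + lam * penalty + nll
```

The densities and the predictor are optimized together in a single loss, mirroring the abstract's "learning the density and the model parameters simultaneously"; whether the authors detach the features during the density fit, or use a richer density model than a diagonal Gaussian, is not stated in the abstract.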