
Diagnosing Model Performance Under Distribution Shift

Abstract

Prediction models can perform poorly when deployed to target distributions different from the training distribution. To understand these operational failure modes, we develop a method, called DIstribution Shift DEcomposition (DISDE), to attribute a drop in performance to different types of distribution shifts. Our approach decomposes the performance drop into terms for 1) an increase in harder but frequently seen examples from training, 2) changes in the relationship between features and outcomes, and 3) poor performance on examples infrequent or unseen during training. These terms are defined by fixing a distribution on $X$ while varying the conditional distribution of $Y \mid X$ between training and target, or by fixing the conditional distribution of $Y \mid X$ while varying the distribution on $X$. To do this, we define a hypothetical distribution on $X$ consisting of values common to both training and target, over which it is easy to compare $Y \mid X$ and thus predictive performance. We estimate performance on this hypothetical distribution via reweighting methods. Empirically, we show how our method can 1) inform potential modeling improvements across distribution shifts for employment prediction on tabular census data, and 2) help explain why certain domain adaptation methods fail to improve model performance for satellite image classification.
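The three terms can be written as a telescoping sum. The LaTeX sketch below uses notation of our own choosing ($P$ for the training distribution, $Q$ for the target, $S$ for the hypothetical shared covariate distribution, and $R_{D_X,\,D_{Y\mid X}}(f)$ for the risk of model $f$ when $X \sim D_X$ and $Y \mid X$ follows $D_{Y\mid X}$), which may differ from the paper's:

```latex
% Schematic decomposition of the performance drop (notation assumed, not the paper's).
\begin{align*}
\underbrace{R_{Q_X,\,Q_{Y\mid X}}(f) - R_{P_X,\,P_{Y\mid X}}(f)}_{\text{total performance drop}}
  ={}& \underbrace{R_{S,\,P_{Y\mid X}}(f) - R_{P_X,\,P_{Y\mid X}}(f)}_{\text{1) harder but frequently seen } X} \\
  &+ \underbrace{R_{S,\,Q_{Y\mid X}}(f) - R_{S,\,P_{Y\mid X}}(f)}_{\text{2) shift in } Y \mid X} \\
  &+ \underbrace{R_{Q_X,\,Q_{Y\mid X}}(f) - R_{S,\,Q_{Y\mid X}}(f)}_{\text{3) infrequent or unseen } X}
\end{align*}
```

And a minimal sketch of the reweighting estimate the abstract alludes to, assuming a density-ratio model for $dS/dP_X$ is available (e.g., from a probabilistic classifier separating training from target covariates); the function and variable names here are hypothetical, not from the paper:

```python
# Hedged sketch (not the authors' code): estimate risk under the
# hypothetical shared covariate distribution S by importance-weighting
# held-out training-set losses.
import numpy as np

def reweighted_risk(losses_P, ratio_S_over_P):
    """Self-normalized importance-weighted risk.

    E_S[loss] is approximated by sum(w_i * loss_i) / sum(w_i), where
    w_i = dS/dP_X evaluated at training covariates x_i and loss_i is
    the model's loss on the i-th held-out training example.
    """
    w = np.asarray(ratio_S_over_P, dtype=float)
    losses = np.asarray(losses_P, dtype=float)
    return float(np.sum(w * losses) / np.sum(w))
```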
