
Multicalibrated Partitions for Importance Weights

Abstract

The ratio between the probabilities that two distributions $R$ and $P$ assign to a point $x$ is known as the importance weight or propensity score of $x$, and plays a fundamental role in many different fields, most notably statistics and machine learning. Among their applications, importance weights are central to domain adaptation, anomaly detection, and the estimation of various divergences such as the KL divergence. We consider the common setting where $R$ and $P$ are only given through samples from each distribution. The vast literature on estimating importance weights is either heuristic, or makes strong assumptions about $R$ and $P$ or about the importance weights themselves. In this paper, we explore a computational perspective on the estimation of importance weights, which factors in the limitations and possibilities obtainable with bounded computational resources. We significantly strengthen previous work that uses the MaxEntropy approach, which defines the importance weights based on a distribution $Q$ closest to $P$ that looks the same as $R$ on every set $C \in \mathcal{C}$, where $\mathcal{C}$ may be a huge collection of sets. We show that the MaxEntropy approach may fail to assign high average scores to sets $C \in \mathcal{C}$, even when the average of the ground-truth weights on the set is evidently large. We similarly show that it may overestimate the average scores of sets $C \in \mathcal{C}$. We therefore formulate sandwiching bounds as a notion of set-wise accuracy for importance weights. We study these bounds and show that they capture natural completeness and soundness requirements on the weights. We present an efficient algorithm that, under standard learnability assumptions, computes weights which satisfy these bounds. Our techniques rely on a new notion of multicalibrated partitions of the domain of the distributions, which appear to be useful objects in their own right.
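To make the objects in the abstract concrete, here is a minimal formal sketch; the orientation of the ratio, the use of KL divergence as the notion of "closest", and the exact shape of the bounds are our illustrative assumptions and may differ in detail from the paper's definitions. Writing $w(x) = R(x)/P(x)$ for the true importance weight, the MaxEntropy approach selects

$Q^* = \arg\min_{Q} \mathrm{KL}(Q \,\|\, P)$ subject to $Q(C) = R(C)$ for all $C \in \mathcal{C}$,

and outputs the weights $\hat{w}(x) = Q^*(x)/P(x)$. In this notation, a sandwiching bound is a two-sided, set-wise guarantee of the form

$\alpha \cdot R(C) \;\le\; \mathbb{E}_{x \sim P}\!\left[\hat{w}(x) \cdot \mathbf{1}_C(x)\right] \;\le\; \beta \cdot R(C)$ for every $C \in \mathcal{C}$,

so that the reweighted mass that $\hat{w}$ places on each set neither undershoots (completeness) nor overshoots (soundness) the mass that $R$ actually gives it.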
