Dimension-Free Decision Calibration for Nonlinear Loss Functions

When model predictions inform downstream decision making, a natural question is under what conditions decision-makers can simply respond to the predictions as if they were the true outcomes. Calibration suffices to guarantee that simple best response to predictions is optimal. However, calibration for high-dimensional prediction outcome spaces requires exponential computational and statistical complexity. The recent relaxation known as decision calibration ensures the optimality of the simple best-response rule while requiring only polynomial sample complexity in the dimension of outcomes. However, known results on calibration and decision calibration crucially rely on linear loss functions for establishing best-response optimality. A natural approach to handling nonlinear losses is to map outcomes y into a feature space φ(y) of dimension m, then approximate losses with linear functions of φ(y). Unfortunately, even simple classes of nonlinear functions can demand an exponentially large or infinite feature dimension m. A key open problem is whether it is possible to achieve decision calibration with sample complexity independent of m. We begin with a negative result: even verifying decision calibration under the standard deterministic best response inherently requires sample complexity polynomial in m. Motivated by this lower bound, we investigate a smooth version of decision calibration in which decision-makers follow a smooth best response. This smooth relaxation enables dimension-free decision calibration algorithms. We introduce algorithms that, given samples and any initial predictor p, can efficiently post-process it to satisfy decision calibration without worsening accuracy. Our algorithms apply broadly to function classes that can be well-approximated by bounded-norm functions in a (possibly infinite-dimensional) separable RKHS.
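For concreteness, the following schematic illustrates the objects involved; the notation (δ_p, π_η, η, w_a, B) is ours and follows the standard formulation in the decision-calibration literature, not necessarily the paper's exact definitions. A predictor p is decision-calibrated for a loss class L if, for every loss ℓ in L, the loss the predictor anticipates for the best response matches the loss realized on true outcomes:

  E[ ℓ(δ_p(X), Y) ] = E[ E_{ŷ∼p(X)} ℓ(δ_p(X), ŷ) ],   where δ_p(x) ∈ argmin_{a∈A} E_{ŷ∼p(x)} ℓ(a, ŷ).

A smooth best response replaces the hard argmin with a softmax-style distribution over actions,

  π_η(a | x) ∝ exp( −η · E_{ŷ∼p(x)} ℓ(a, ŷ) ),

where the temperature η > 0 controls the smoothness (η → ∞ recovers the deterministic best response). The RKHS condition asks that each loss be approximately linear in a feature map φ into a separable RKHS H, i.e., ℓ(a, y) ≈ ⟨w_a, φ(y)⟩_H with ‖w_a‖_H ≤ B for some norm bound B, which is what allows the guarantees to avoid dependence on the feature dimension m.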
@article{tang2025_2504.15615,
  title={Dimension-Free Decision Calibration for Nonlinear Loss Functions},
  author={Jingwu Tang and Jiayun Wu and Zhiwei Steven Wu and Jiahao Zhang},
  journal={arXiv preprint arXiv:2504.15615},
  year={2025}
}