Improved and Oracle-Efficient Online \(\ell_1\)-Multicalibration

We study \emph{online multicalibration}, a framework for ensuring calibrated predictions across multiple groups in adversarial settings, across \(T\) rounds. Although online calibration is typically studied in the \(\ell_1\) norm, prior approaches to online multicalibration have taken the indirect approach of obtaining rates in other norms (such as \(\ell_2\) and \(\ell_\infty\)) and then transferring these guarantees to \(\ell_1\) at additional loss. In contrast, we propose a direct method that achieves improved and oracle-efficient rates of \(\widetilde{\mathcal{O}}(T^{-1/3})\) and \(\widetilde{\mathcal{O}}(T^{-1/4})\), respectively, for online \(\ell_1\)-multicalibration. Our key insight is a novel reduction of online \(\ell_1\)-multicalibration to an online learning problem with product-based rewards, which we refer to as \emph{online linear-product optimization} (\(\mathtt{OLPO}\)).

To obtain the improved rate of \(\widetilde{\mathcal{O}}(T^{-1/3})\), we introduce a linearization of \(\mathtt{OLPO}\) and design a no-regret algorithm for this linearized problem. Although this method guarantees the desired sublinear rate (nearly matching the best rate for online calibration), it is computationally expensive when the group family \(\mathcal{H}\) is large or infinite, since it enumerates all possible groups. To address scalability, we propose a second approach to \(\mathtt{OLPO}\) that makes only a polynomial number of calls to an offline optimization (\emph{multicalibration evaluation}) oracle, resulting in \emph{oracle-efficient} online \(\ell_1\)-multicalibration with a rate of \(\widetilde{\mathcal{O}}(T^{-1/4})\). Our framework also extends to certain infinite families of groups (e.g., all linear functions on the context space) by exploiting a \(1\)-Lipschitz property of the \(\ell_1\)-multicalibration error with respect to \(\mathcal{H}\).
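For concreteness, a common formulation of the \(\ell_1\)-multicalibration error from the multicalibration literature is sketched below. This is an assumption for illustration, not necessarily the paper's exact definition or normalization: predictions \(p_t\) take values in a discretized level set \(V \subset [0,1]\), outcomes are \(y_t \in [0,1]\), and each group function \(h \in \mathcal{H}\) maps contexts \(x_t\) to \([0,1]\).

```latex
% Hedged sketch: a standard l1-multicalibration error over T rounds.
% Notation (V, x_t, p_t, y_t) is assumed; the paper's definition may differ.
\[
  \mathrm{MCE}_{\ell_1}(\mathcal{H}, T)
  \;=\; \max_{h \in \mathcal{H}} \, \sum_{v \in V}
  \left| \frac{1}{T} \sum_{t=1}^{T}
    h(x_t)\, \mathbf{1}\{p_t = v\}\, \bigl(y_t - v\bigr) \right|.
\]
```

Under this kind of formulation, the \(\ell_\infty\) variant replaces the sum over levels \(v\) with a maximum, which is why guarantees in other norms transfer to \(\ell_1\) only at additional loss.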
@article{ghuge2025_2505.17365,
  title   = {Improved and Oracle-Efficient Online $\ell_1$-Multicalibration},
  author  = {Rohan Ghuge and Vidya Muthukumar and Sahil Singla},
  journal = {arXiv preprint arXiv:2505.17365},
  year    = {2025}
}