On training locally adaptive CP

We address the problem of making Conformal Prediction (CP) intervals locally adaptive. Most existing methods focus on approximating the object-conditional validity of the intervals by partitioning or re-weighting the calibration set. Our strategy is new and conceptually different. Instead of re-weighting the calibration data, we redefine the conformity measure through a trainable change of variables, z ↦ φ_x(z), that depends explicitly on the object attributes, x. Under certain conditions, and provided φ_x is monotonic in z for any x, the transformation produces prediction intervals that are guaranteed to be marginally valid and have x-dependent sizes. We describe how to parameterize and train φ_x to maximize the efficiency of the intervals. In contrast to other CP-aware training methods, the objective function is smooth and can be minimized through standard gradient methods without approximations.
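To make the idea concrete, below is a minimal sketch of one possible instantiation, not the paper's actual parameterization: the names ScaleTransform and efficiency_loss, and the choice φ_x(z) = z / σ(x), are assumptions made for illustration. A small network outputs a positive, x-dependent scale, so the transform is monotone in the score for every x, and the average interval width is a smooth function of the parameters that can be minimized with ordinary gradient descent.

```python
# Illustrative sketch (assumed parameterization, not the paper's construction):
# the conformity score z = |y - mu(x)| is rescaled by a trainable, x-dependent
# change of variables phi_x(z) = z / sigma(x) with sigma(x) > 0, which is
# monotone in z for every x. The resulting intervals are mu(x) +/- q * sigma(x),
# so their size depends on x, while split-CP calibration of q on held-out data
# preserves marginal validity.
import torch
import torch.nn as nn

class ScaleTransform(nn.Module):
    """phi_x(z) = z / sigma(x), strictly increasing in z since sigma(x) > 0."""
    def __init__(self, x_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def sigma(self, x: torch.Tensor) -> torch.Tensor:
        # Positive scale, so the transform stays monotone in z for any x.
        return nn.functional.softplus(self.net(x)).squeeze(-1) + 1e-3

    def forward(self, z: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return z / self.sigma(x)

def efficiency_loss(model: ScaleTransform, z: torch.Tensor, x: torch.Tensor,
                    alpha: float = 0.1) -> torch.Tensor:
    # Smooth proxy for the average interval width: with transformed scores
    # s_i = z_i / sigma(x_i) and q the empirical (1 - alpha) quantile of s,
    # the interval for x has half-width q * sigma(x). torch.quantile is
    # differentiable, so plain gradient descent applies without approximations.
    s = model(z, x)
    q = torch.quantile(s, 1.0 - alpha)
    return (q * model.sigma(x)).mean()

# Usage sketch: optimize on a proper-training split, then recompute the
# quantile of the transformed scores on a held-out calibration set, exactly
# as in standard split CP, to retain the marginal coverage guarantee.
x = torch.randn(512, 5)
z = torch.rand(512)                      # e.g. absolute residuals |y - mu(x)|
model = ScaleTransform(x_dim=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    efficiency_loss(model, z, x).backward()
    opt.step()
```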