This work examines risk bounds for nonparametric distributional regression estimators. For convex-constrained distributional regression, general upper bounds are established for the continuous ranked probability score (CRPS) and the worst-case mean squared error (MSE) over the domain. These theoretical results are applied to isotonic and trend filtering distributional regression, yielding convergence rates consistent with those known for mean estimation. Furthermore, a general upper bound is derived for distributional regression under non-convex constraints, with a specific application to neural network-based estimators. Comprehensive experiments on simulated and real data support the theoretical findings and demonstrate their practical effectiveness.
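For readers unfamiliar with the scoring rule, the CRPS of a forecast CDF F at an observation y is CRPS(F, y) = \int (F(z) - 1{y <= z})^2 dz. The sketch below illustrates the isotonic case: for each threshold z on a grid it fits an isotonic regression of the indicators 1{y_i <= z} on the covariate, then scores the resulting conditional CDF estimates with a discretized CRPS. This is an illustrative construction under assumed conventions (scalar covariate, Y stochastically increasing in x, scikit-learn's IsotonicRegression), not necessarily the paper's exact estimator; the helper names fit_isotonic_cdf and crps_on_grid and the toy data are hypothetical.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    def fit_isotonic_cdf(x, y, grid):
        # For each threshold z, isotonically regress the indicators 1{y_i <= z}
        # on x. Y is assumed stochastically increasing in x, so the conditional
        # CDF P(Y <= z | x) is fit as a decreasing function of x.
        cdf = np.empty((len(x), len(grid)))
        for j, z in enumerate(grid):
            iso = IsotonicRegression(increasing=False, y_min=0.0, y_max=1.0)
            cdf[:, j] = iso.fit_transform(x, (y <= z).astype(float))
        return cdf

    def crps_on_grid(cdf_row, y_obs, grid):
        # Trapezoidal approximation of CRPS(F, y) = \int (F(z) - 1{y <= z})^2 dz.
        indicator = (grid >= y_obs).astype(float)
        return np.trapz((cdf_row - indicator) ** 2, grid)

    # Hypothetical toy data (illustrative only): Y | X = x ~ N(x, 1).
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0.0, 5.0, size=200))
    y = x + rng.normal(size=200)
    grid = np.linspace(-3.0, 9.0, 120)

    cdf = fit_isotonic_cdf(x, y, grid)
    avg_crps = np.mean([crps_on_grid(cdf[i], y[i], grid) for i in range(len(x))])
    print(f"average in-sample CRPS: {avg_crps:.3f}")

Fitting each threshold separately still yields valid CDFs here because pool-adjacent-violators preserves the componentwise ordering of the indicator data across thresholds, so the fitted values are nondecreasing in z.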
@article{padilla2025_2505.09075,
  title   = {Risk Bounds For Distributional Regression},
  author  = {Carlos Misael Madrid Padilla and Oscar Hernan Madrid Padilla and Sabyasachi Chatterjee},
  journal = {arXiv preprint arXiv:2505.09075},
  year    = {2025}
}