Convex Regression with a Penalty

Comments: 21 pages (main) + 4 pages bibliography, 5 figures, 2 tables
Abstract

A common way to estimate an unknown convex regression function $f_0: \Omega \subset \mathbb{R}^d \rightarrow \mathbb{R}$ from a set of $n$ noisy observations is to fit a convex function that minimizes the sum of squared errors. However, this estimator is known for its tendency to overfit near the boundary of $\Omega$, posing significant challenges in real-world applications. In this paper, we introduce a new estimator of $f_0$ that avoids this overfitting by minimizing a penalty on the subgradient while enforcing an upper bound $s_n$ on the sum of squared errors. The key advantage of this method is that $s_n$ can be directly estimated from the data. We establish the uniform almost sure consistency of the proposed estimator and its subgradient over $\Omega$ as $n \rightarrow \infty$ and derive convergence rates. The effectiveness of our estimator is illustrated through its application to estimating waiting times in a single-server queue.
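
The abstract describes the estimator as the solution of a constrained convex program: minimize a penalty on the subgradient subject to the sum of squared errors being at most $s_n$. Below is a minimal sketch of that type of program in the standard finite-dimensional formulation of convex regression (optimizing over fitted values and subgradients at the design points), written with cvxpy. The squared Euclidean subgradient penalty and the plug-in choice of $s_n$ are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
import cvxpy as cp

def penalized_convex_fit(X, y, s_n):
    """Fit convex function values and subgradients at the design points X.

    Minimizes a subgradient penalty subject to the sum of squared errors
    being at most s_n. The squared-l2 penalty used here is an illustrative
    choice, not necessarily the paper's penalty.
    """
    n, d = X.shape
    theta = cp.Variable(n)       # fitted values f(X_i)
    xi = cp.Variable((n, d))     # subgradients of f at X_i
    constraints = []
    # Convexity: f(X_j) >= f(X_i) + <xi_i, X_j - X_i> for all i, j.
    for i in range(n):
        constraints.append(theta >= theta[i] + (X - X[i]) @ xi[i])
    # Data-fit constraint: sum of squared errors bounded by s_n.
    constraints.append(cp.sum_squares(theta - y) <= s_n)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(xi)), constraints)
    prob.solve()
    return theta.value, xi.value

# Toy usage: f0(x) = ||x||^2 on [0,1]^2 with Gaussian noise.
rng = np.random.default_rng(0)
n, sigma = 50, 0.1
X = rng.uniform(size=(n, 2))
y = np.sum(X**2, axis=1) + sigma * rng.normal(size=n)
# One plug-in choice (an assumption): s_n = n times the noise variance,
# which in practice would itself be estimated from the data.
theta_hat, xi_hat = penalized_convex_fit(X, y, n * sigma**2)
```

Compared with the unconstrained least-squares fit, the penalty term controls the subgradients, which is what tames the boundary behavior the abstract highlights.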
