Convex Regression with a Penalty
A common way to estimate an unknown convex regression function from a set of noisy observations is to fit a convex function that minimizes the sum of squared errors. However, this estimator is known for its tendency to overfit near the boundary of the domain, posing significant challenges in real-world applications. In this paper, we introduce a new estimator of the regression function that avoids this overfitting by minimizing a penalty on the subgradient while enforcing an upper bound on the sum of squared errors. The key advantage of this method is that the bound can be directly estimated from the data. We establish the uniform almost sure consistency of the proposed estimator and its subgradient over the domain as the sample size tends to infinity, and derive convergence rates. The effectiveness of our estimator is illustrated through its application to estimating waiting times in a single-server queue.
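As a point of reference, the standard least-squares convex regression estimator described in the first sentence can be sketched in one dimension: fit values at the observed points, subject to the constraint that the slopes between consecutive points are nondecreasing. This is a minimal illustrative sketch, not the paper's penalized estimator; the function name `convex_lse` and the use of `scipy.optimize.minimize` are choices made here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def convex_lse(x, y):
    """Least-squares convex regression on 1-D data.

    Finds fitted values theta minimizing sum((theta - y)^2) subject to
    convexity: the slope between consecutive points is nondecreasing.
    Assumes x is sorted and strictly increasing.
    """
    n = len(x)

    def objective(theta):
        return np.sum((theta - y) ** 2)

    # Convexity constraints: slope(i, i+1) <= slope(i+1, i+2)
    cons = [
        {
            "type": "ineq",
            "fun": lambda t, i=i: (t[i + 2] - t[i + 1]) / (x[i + 2] - x[i + 1])
                                - (t[i + 1] - t[i]) / (x[i + 1] - x[i]),
        }
        for i in range(n - 2)
    ]
    res = minimize(objective, y.copy(), method="SLSQP", constraints=cons)
    return res.x

# Example: noisy samples of a convex function
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 15)
y = x ** 2 + 0.05 * rng.standard_normal(x.size)
theta = convex_lse(x, y)
```

Note that the fitted values near the endpoints of the grid are exactly where this unpenalized estimator tends to overfit, which is the behavior the paper's penalized formulation is designed to avoid.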