
Univariate Mean Change Point Detection: Penalization, CUSUM and Optimality

Abstract

The problem of univariate mean change point detection and localization based on a sequence of $n$ independent observations with piecewise constant means has been intensively studied for more than half a century, and serves as a blueprint for change point problems in more complex settings. We provide a complete characterization of this classical problem in a general framework in which the upper bound $\sigma^2$ on the noise variance, the minimal spacing $\Delta$ between two consecutive change points, and the minimal magnitude $\kappa$ of the changes are allowed to vary with $n$. We first show that consistent localization of the change points is impossible when the signal-to-noise ratio $\frac{\kappa \sqrt{\Delta}}{\sigma}$ is uniformly bounded from above. In contrast, when $\frac{\kappa \sqrt{\Delta}}{\sigma}$ diverges with $n$ at an arbitrarily slow rate, we demonstrate that two computationally efficient change point estimators, one based on the solution to an $\ell_0$-penalized least squares problem and the other on the popular wild binary segmentation (WBS) algorithm, are both consistent and achieve a localization rate of order $\frac{\sigma^2}{\kappa^2} \log(n)$. We further show that this rate is minimax optimal, up to a $\log(n)$ factor.
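To make the CUSUM-based approach mentioned above concrete, here is a minimal sketch of the standard CUSUM statistic for a single mean change in a univariate sequence, maximized over candidate split points. This is an illustrative toy, not the paper's estimators: the function names `cusum_stat` and `best_split` and the simulated data are assumptions for demonstration, and the full WBS algorithm would additionally maximize over many random sub-intervals and recurse.

```python
import numpy as np

def cusum_stat(x, s, t, e):
    # CUSUM statistic on the interval (s, e] at candidate split t:
    # a variance-stabilized difference between the means of x[s:t] and x[t:e].
    n1, n2 = t - s, e - t
    return np.sqrt(n1 * n2 / (n1 + n2)) * abs(x[s:t].mean() - x[t:e].mean())

def best_split(x, s, e):
    # Return the split point in (s, e) maximizing the CUSUM statistic.
    stats = [cusum_stat(x, s, t, e) for t in range(s + 1, e)]
    return s + 1 + int(np.argmax(stats))

# Simulated example (hypothetical data): one mean change of size kappa = 3
# at position Delta = 100 in a sequence of n = 200 Gaussian observations.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
print(best_split(x, 0, len(x)))  # estimated change point, near 100
```

With a large jump relative to the noise level, the maximizer of the CUSUM statistic lands within a few observations of the true change point, consistent with the $\frac{\sigma^2}{\kappa^2}\log(n)$ localization rate discussed in the abstract.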
