
Nearly second-order asymptotic optimality of sequential change-point detection with one-sample updates

Abstract

Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the underlying distributions belong to the exponential family, we show that detection procedures based on sequential likelihood ratios with simple one-sample update estimates, such as online mirror descent, are nearly second-order asymptotically optimal, under mild conditions on the expected Kullback-Leibler divergence between the estimators and the true parameters. This means that the upper bound on the algorithm's performance (its expected detection delay, subject to a false alarm constraint measured by the average run length) meets the lower bound asymptotically, up to a log-log factor, as the detection threshold tends to infinity. This is a blessing, since although the generalized likelihood ratio (GLR) statistics are asymptotically optimal in theory, they cannot be computed recursively, and their exact computation can therefore be time-consuming. We prove the nearly second-order asymptotic optimality by making a connection between sequential change-point detection and online convex optimization, and by leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical and real-data examples validate our theory.
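
For illustration only (this is not the authors' code), the following Python sketch mimics the type of procedure the abstract describes: a CUSUM-style sequential likelihood-ratio statistic in which the unknown post-change parameter is estimated by a one-sample online mirror descent update. The unit-variance Gaussian mean-shift model, the step sizes eta_t = 1/t, the threshold b, and the function name detect_change are all illustrative assumptions; the paper's full procedure (e.g., window-limited restarting of the estimator) is omitted.

```python
import numpy as np

def detect_change(x, b=10.0, theta0=0.0):
    """Return the first time the detection statistic exceeds b, or None.

    Minimal sketch: pre-change N(theta0, 1), post-change N(theta, 1) with
    theta unknown and estimated by a one-sample online mirror descent update.
    """
    theta_hat = theta0   # current one-sample estimate of the post-change mean
    stat = 0.0           # CUSUM-style recursive detection statistic
    for t, xt in enumerate(x, start=1):
        # log-likelihood ratio log f_{theta_hat}(x_t) / f_{theta0}(x_t)
        # for unit-variance Gaussians, using the previous (non-anticipating) estimate
        llr = (theta_hat - theta0) * xt - 0.5 * (theta_hat**2 - theta0**2)
        stat = max(stat, 0.0) + llr
        if stat > b:
            return t
        # one-sample online mirror descent update on the negative log-likelihood
        # loss; with the squared-Euclidean mirror map this reduces to online
        # gradient descent with step size 1/t (i.e., a running mean)
        eta = 1.0 / t
        grad = theta_hat - xt        # d/dtheta of -log f_theta(x_t)
        theta_hat = theta_hat - eta * grad
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic stream: mean shifts from 0 to 1 at time 500
    x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.0, 1.0, 500)])
    print("alarm raised at t =", detect_change(x))
```

The key point of the sketch is that each new sample triggers only a constant-time update of the estimate and the statistic, in contrast to GLR statistics, which re-optimize over past samples and cannot be computed recursively.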
