Near-optimality of sequential joint detection and estimation via online mirror descent

Abstract
Sequential hypothesis testing and change-point detection with unknown distribution parameters are fundamental problems in statistics and machine learning. We show that for such problems, detection procedures based on sequential likelihood ratios with simple online mirror descent estimators are nearly optimal. This is a blessing: although the well-known generalized likelihood ratio statistics are theoretically optimal, their exact computation usually requires unbounded memory of the historical data. We prove the near-optimality by connecting sequential analysis to online convex optimization and leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical examples validate our theory.
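To make the abstract's idea concrete, here is a minimal, hypothetical sketch (not the paper's actual procedure) of the general recipe it describes: a CUSUM-style sequential likelihood-ratio statistic where the unknown post-change parameter is estimated by online gradient descent, the Euclidean special case of online mirror descent. The Gaussian mean-shift model, the constant step size `eta`, and the threshold are all illustrative assumptions.

```python
import random

def omd_sequential_lr(xs, threshold=10.0, eta=0.2):
    """Sequential detection of a mean shift in N(theta, 1) data with
    unknown post-change mean theta, estimated online.

    At each step, the log-likelihood ratio of N(theta_hat, 1) against the
    null N(0, 1) is accumulated using the current plug-in estimate, and
    theta_hat is then updated by an online gradient step (the Euclidean
    special case of online mirror descent).  Returns the first time the
    cumulative statistic crosses `threshold`, or None if it never does.
    """
    theta = 0.0   # online estimate of the unknown post-change mean
    stat = 0.0    # cumulative log-likelihood ratio statistic
    for t, x in enumerate(xs, start=1):
        # log-LR increment of N(theta, 1) vs N(0, 1) at observation x
        stat += theta * x - 0.5 * theta ** 2
        stat = max(stat, 0.0)  # CUSUM-style reflection at zero
        if stat >= threshold:
            return t
        # gradient of the per-step loss 0.5*(x - theta)^2 is (theta - x)
        theta -= eta * (theta - x)
    return None

random.seed(0)
# 100 pre-change samples from N(0, 1), then a shift to mean 1.0
data = ([random.gauss(0.0, 1.0) for _ in range(100)]
        + [random.gauss(1.0, 1.0) for _ in range(100)])
alarm = omd_sequential_lr(data)
```

Because the estimator only stores the scalar `theta` and the running statistic, the procedure uses constant memory per step, in contrast to an exact generalized likelihood ratio statistic, which re-maximizes over all candidate change points and hence needs the full data history.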