
Are deviations in a gradually varying mean relevant? A testing approach based on sup-norm estimators

Abstract

Classical change point analysis aims at (1) detecting abrupt changes in the mean of a possibly non-stationary time series and at (2) identifying regions where the mean exhibits a piecewise constant behavior. In many applications, however, it is more reasonable to assume that the mean changes gradually in a smooth way. These gradual changes may either be non-relevant (i.e., small) or relevant for a specific problem at hand, and the present paper presents statistical methodology to detect the latter. More precisely, we consider the common nonparametric regression model $X_i = \mu(i/n) + \varepsilon_i$ with possibly non-stationary errors and propose a test for the null hypothesis that the maximum absolute deviation of the regression function $\mu$ from a functional $g(\mu)$ (such as the value $\mu(0)$ or the integral $\int_0^1 \mu(t)\,dt$) is smaller than a given threshold on a given interval $[x_0, x_1] \subseteq [0,1]$. A test for this type of hypothesis is developed using an appropriate estimator, say $\hat d_{\infty,n}$, of the maximum deviation $d_\infty = \sup_{t \in [x_0, x_1]} |\mu(t) - g(\mu)|$. We derive the limiting distribution of an appropriately standardized version of $\hat d_{\infty,n}$, where the standardization depends on the Lebesgue measure of the set of extremal points of the function $\mu(\cdot) - g(\mu)$. A refined procedure based on an estimate of this set is developed and its consistency is proved. The results are illustrated by means of a simulation study and a data example.
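
To make the quantity under test concrete, the sketch below computes a plug-in estimate of $d_\infty = \sup_{t \in [x_0, x_1]} |\mu(t) - g(\mu)|$ with $g(\mu) = \int_0^1 \mu(t)\,dt$, one of the functionals mentioned above. The choices here (a Nadaraya-Watson smoother with a Gaussian kernel, the sample mean as estimate of the integral, a fixed bandwidth) are illustrative assumptions, not the estimator or the test actually developed in the paper, which additionally requires the standardization and the estimate of the set of extremal points described in the abstract.

```python
import numpy as np

def estimate_sup_deviation(x, bandwidth=0.1, grid_size=200, x0=0.0, x1=1.0):
    """Plug-in estimate of d_inf = sup_{t in [x0, x1]} |mu(t) - g(mu)|,
    taking g(mu) = integral of mu over [0, 1].

    mu is estimated by a Nadaraya-Watson smoother with a Gaussian kernel;
    this is an illustrative choice, not necessarily the estimator used in
    the paper."""
    n = len(x)
    t_obs = np.arange(1, n + 1) / n        # design points i/n
    grid = np.linspace(x0, x1, grid_size)  # evaluation grid on [x0, x1]

    # Nadaraya-Watson estimate of mu on the grid
    w = np.exp(-0.5 * ((grid[:, None] - t_obs[None, :]) / bandwidth) ** 2)
    mu_hat = (w @ x) / w.sum(axis=1)

    # Estimate g(mu) = int_0^1 mu(t) dt by the sample mean of the observations
    g_hat = x.mean()

    # Sup-norm deviation over the grid
    return np.max(np.abs(mu_hat - g_hat))

# Example: a smoothly varying mean plus noise
rng = np.random.default_rng(0)
n = 500
t = np.arange(1, n + 1) / n
x = 0.3 * np.sin(2 * np.pi * t) + rng.normal(scale=0.5, size=n)
d_hat = estimate_sup_deviation(x)
print(f"estimated sup deviation: {d_hat:.3f}")  # compare with a relevance threshold
```

A decision of "relevant deviation" would compare such an estimate with the prescribed threshold, calibrated through the limiting distribution derived in the paper rather than by the raw comparison shown here.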
