Challenges in Bayesian Adaptive Data Analysis

Abstract

Traditional statistical analysis requires that the analysis process and data are independent. By contrast, the new field of adaptive data analysis hopes to understand and provide algorithms and accuracy guarantees for research as it is commonly performed in practice, as an iterative process of interacting repeatedly with the same data set. Previous work has defined a model with a rather strong lower bound on sample complexity in terms of the number of queries, $n \sim \sqrt{q}$, arguing that adaptive data analysis is much harder than static data analysis, where $n \sim \log q$ is possible. Instead, we argue that those strong lower bounds point to a bug in the model, an information asymmetry with no basis in the typical application. In its place, we propose a new Bayesian version of the problem without this unnecessary asymmetry. The previous lower bounds are no longer valid, which offers the possibility for stronger results. As a first contribution to this model, though, we show that a large family of methods, including all previously proposed algorithms, cannot achieve the static dependence of $n \sim \log q$, but instead require polylogarithmically many samples. These early results suggest that adaptive data analysis is harder than static data analysis even with information symmetry, but leave open many possibilities for new developments.