Inefficiency of Data Augmentation for Large Sample Imbalanced Data

Abstract

Many modern applications collect large, highly imbalanced categorical data sets in which some categories are relatively rare. Bayesian hierarchical models are well motivated in such settings: they borrow information across categories to combat data sparsity while quantifying uncertainty in estimation. A fundamental obstacle, however, is scaling posterior computation to massive sample sizes. In categorical data models, posterior computation commonly relies on data augmentation Gibbs sampling. In this article, we study the computational efficiency of such algorithms in the large sample imbalanced regime, showing that mixing is extremely poor, with a spectral gap that converges to zero at a rate proportional to the square root of the sample size or faster. This theoretical result is verified empirically in simulations and in an application to a computational advertising data set. In contrast, algorithms that bypass data augmentation mix rapidly on the same data.
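The kind of data augmentation Gibbs sampler the abstract refers to can be illustrated with a minimal sketch: an intercept-only Albert and Chib probit sampler on synthetic imbalanced binary data. The sample sizes, the flat prior, and the simulation setup below are illustrative assumptions, not taken from the paper; the sketch only shows the qualitative phenomenon of slow mixing when successes are rare.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Hypothetical imbalanced data: 2 successes out of 2000 trials.
n, n_ones = 2000, 2
y = np.zeros(n)
y[:n_ones] = 1.0

def albert_chib_gibbs(y, iters=400):
    """Intercept-only probit data augmentation Gibbs sampler, flat prior on beta."""
    n = len(y)
    beta = 0.0
    trace = np.empty(iters)
    for t in range(iters):
        # Latent z_i ~ N(beta, 1), truncated to z_i > 0 if y_i = 1, z_i < 0 if y_i = 0.
        a = np.where(y == 1.0, -beta, -np.inf)  # standardized lower bounds
        b = np.where(y == 1.0, np.inf, -beta)   # standardized upper bounds
        z = beta + truncnorm.rvs(a, b, random_state=rng)
        # Conjugate update under the flat prior: beta | z ~ N(mean(z), 1/n).
        beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
        trace[t] = beta
    return trace

trace = albert_chib_gibbs(y)
# The chain creeps toward the posterior mode (roughly Phi^{-1}(0.001), about -3.1)
# and successive draws are very highly correlated.
```

In a run like this, the lag-one autocorrelation of the stationary portion of the trace stays close to one, which is the practical signature of the vanishing spectral gap the abstract describes.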
