We use the theory of normal variance-mean mixtures to derive a data-augmentation scheme that unifies a wide class of statistical models under a single framework. This generalizes existing theory on normal variance mixtures for priors in regression and classification. It also allows variants of the expectation-maximization algorithm to be brought to bear on a much wider range of models than previously appreciated. We demonstrate the resulting gains in accuracy and stability on several examples, including sparse quantile regression and binary logistic regression.
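For context, the general normal variance-mean mixture representation underlying this style of augmentation can be sketched as follows; the notation ($\mu$, $\kappa$, $\sigma^2$, and the mixing variable $\omega$) is generic textbook usage rather than the paper's own:

$$ z \mid \omega \sim \mathcal{N}\!\left(\mu + \kappa\,\omega,\ \omega\,\sigma^2\right), \qquad \omega \sim p(\omega), $$

so that marginally $p(z) = \int \mathcal{N}\!\left(z \mid \mu + \kappa\,\omega,\ \omega\,\sigma^2\right) p(\omega)\, d\omega$. Conditioning on $\omega$ restores a Gaussian form, which is what allows EM-type algorithms to treat $\omega$ as missing data in the augmentation.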