The expectation-maximization (EM) algorithm is a powerful computational technique for finding maximum likelihood estimates for parametric models when the data are not fully observed. The EM algorithm is best suited to situations where the expectation in each E-step and the maximization in each M-step are straightforward. A difficulty with implementing the EM algorithm is that each E-step requires the integration of the posterior log-likelihood function. The Monte Carlo EM (MCEM) algorithm overcomes this difficulty by using a random sample to estimate the integral in each E-step. However, the Monte Carlo estimate converges slowly to the true integral, which causes computational burden and instability. In this paper we present a quantile implementation of the expectation-maximization (QEM) algorithm. The proposed method converges faster and is more stable than MCEM. Its performance and applications are illustrated numerically through Monte Carlo simulations and several examples.
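The abstract does not spell out the QEM construction, so the sketch below is only a plausible illustration of the general idea it contrasts with MCEM: in the E-step, replace random draws from the conditional distribution of the latent data with its quantiles at equally spaced probabilities. The toy model (a normal mean with right-censored observations), the plug-in probabilities `p_j = (j - 0.5)/m`, and all names here are invented for this sketch and are not taken from the paper.

```python
# Minimal sketch: EM for the mean of N(mu, 1) data right-censored at c,
# comparing a Monte Carlo E-step (MCEM-style) with a quantile-based E-step.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Toy data: assumed model, invented for illustration.
mu_true, c, n = 2.0, 2.5, 200
y_full = rng.normal(mu_true, 1.0, n)
censored = y_full > c
y_obs = np.where(censored, c, y_full)   # censored values recorded as c

def e_step_mc(mu, m=50):
    """MCEM-style E-step: estimate E[Z | Z > c, mu] from m random draws."""
    a = c - mu  # standardized lower bound (sigma = 1)
    draws = truncnorm.rvs(a, np.inf, loc=mu, scale=1.0,
                          size=m, random_state=rng)
    return draws.mean()

def e_step_quantile(mu, m=50):
    """Quantile-style E-step: average m quantiles of the same
    conditional distribution instead of m random draws."""
    a = c - mu
    p = (np.arange(1, m + 1) - 0.5) / m  # equally spaced probabilities
    return truncnorm.ppf(p, a, np.inf, loc=mu, scale=1.0).mean()

def run_em(e_step, iters=50):
    mu = y_obs.mean()  # crude starting value
    for _ in range(iters):
        # E-step: impute the conditional mean for censored entries.
        imputed = np.where(censored, e_step(mu), y_obs)
        # M-step: closed-form mean update.
        mu = imputed.mean()
    return mu

print("MCEM estimate:    ", run_em(e_step_mc))
print("Quantile estimate:", run_em(e_step_quantile))
```

Because the quantile points are deterministic, repeated runs of the quantile variant return the same estimate, whereas the Monte Carlo variant fluctuates from run to run; this illustrates the stability contrast the abstract draws, under the stated assumptions.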