k-MLE: A fast algorithm for learning statistical mixture models

23 March 2012
Frank Nielsen
arXiv:1203.5181
Abstract

We describe k-MLE, a fast and efficient local search algorithm for learning finite statistical mixtures of exponential families such as Gaussian mixture models. Mixture models are traditionally learned using the expectation-maximization (EM) soft clustering technique, which monotonically increases the incomplete (expected complete) likelihood. Given prescribed mixture weights, the hard clustering k-MLE algorithm iteratively assigns data to the most likely weighted component and updates the component models using Maximum Likelihood Estimators (MLEs). Using the duality between exponential families and Bregman divergences, we prove that the local convergence of the complete likelihood of k-MLE follows directly from the convergence of a dual additively weighted Bregman hard clustering. The inner loop of k-MLE can be implemented using any k-means heuristic, such as the celebrated Lloyd's batched update or Hartigan's greedy swap update. We then show how to update the mixture weights by minimizing a cross-entropy criterion, which amounts to setting each weight to the relative proportion of points assigned to its cluster; the component update and weight update are then repeated until convergence. Hard EM is interpreted as a special case of k-MLE in which the component update and the weight update are performed successively within the inner loop. To initialize k-MLE, we propose k-MLE++, a careful initialization that probabilistically guarantees a global bound on the best possible complete likelihood.
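
To make the procedure concrete, below is a minimal sketch of the k-MLE loop for Gaussian mixture models with Lloyd-style batched assignments, assuming NumPy/SciPy. The names (k_mle, n_iters, reg) are illustrative, the naive random initialization stands in for the k-MLE++ seeding described in the abstract, and the code is a sketch of the general idea rather than the paper's reference implementation.

```python
# Illustrative k-MLE sketch for a Gaussian mixture (hard assignments,
# per-cluster MLE updates, weights = cluster proportions).
import numpy as np
from scipy.stats import multivariate_normal


def k_mle(X, k, n_iters=100, reg=1e-6, seed=0):
    """Hard-assignment k-MLE for a GMM; returns (means, covs, weights)."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    rng = np.random.default_rng(seed)

    # Naive initialization: k distinct data points as means (the paper's
    # k-MLE++ would instead use a careful, k-means++-style seeding).
    means = X[rng.choice(n, size=k, replace=False)].copy()
    centered_all = X - X.mean(axis=0)
    covs = np.stack([centered_all.T @ centered_all / n + reg * np.eye(d)] * k)
    weights = np.full(k, 1.0 / k)

    prev_labels = None
    for _ in range(n_iters):
        # Assignment step: each point goes to its most likely weighted
        # component (hard clustering on the complete log-likelihood).
        log_lik = np.column_stack([
            np.log(weights[j]) + multivariate_normal.logpdf(X, means[j], covs[j])
            for j in range(k)
        ])
        labels = log_lik.argmax(axis=1)
        if prev_labels is not None and np.array_equal(labels, prev_labels):
            break  # assignments are stable, so the complete likelihood has converged
        prev_labels = labels

        # Component update: per-cluster maximum likelihood estimators.
        for j in range(k):
            pts = X[labels == j]
            if len(pts) == 0:
                means[j] = X[rng.integers(n)]  # re-seed an empty cluster
                continue
            means[j] = pts.mean(axis=0)
            centered = pts - means[j]
            covs[j] = centered.T @ centered / len(pts) + reg * np.eye(d)

        # Weight update: relative proportion of points in each cluster,
        # i.e. the minimizer of the cross-entropy criterion mentioned above.
        weights = np.bincount(labels, minlength=k) / n
        weights = np.clip(weights, 1e-12, None)  # keep log(weight) finite

    return means, covs, weights
```

For example, calling k_mle(np.random.randn(1000, 2), k=3) on synthetic data runs the loop until the hard assignments stop changing; swapping the batched assignment for Hartigan-style single-point swaps would give another valid instantiation of the inner loop.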
