The Log-Concave Maximum Likelihood Estimator is Optimal in High Dimensions

13 March 2019
Gil Kur, Y. Dagan
arXiv: 1903.05315
Abstract

We study the problem of learning a $d$-dimensional log-concave distribution from $n$ i.i.d. samples with respect to both the squared Hellinger and the total variation distances. We show that for all $d \ge 4$ the maximum likelihood estimator achieves an optimal risk (up to a logarithmic factor) of $O_d(n^{-2/(d+1)}\log(n))$ in terms of squared Hellinger distance. Previously, the optimality of the MLE was known only for $d \le 3$. Additionally, we show that the metric plays a key role, by proving that the minimax risk is at least $\Omega_d(n^{-2/(d+4)})$ in terms of the total variation. Finally, we significantly improve the dimensional constant in the best known lower bound on the risk with respect to the squared Hellinger distance, improving the bound from $2^{-d}n^{-2/(d+1)}$ to $\Omega(n^{-2/(d+1)})$. This implies that estimating a log-concave density up to a fixed accuracy requires a number of samples which is exponential in the dimension.
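The rates above are stated in two standard metrics; the following sketch records their definitions and the minimax risk being bounded. The notation ($f$, $g$ for densities on $\mathbb{R}^d$, $\mathcal{F}_d$ for the log-concave class, $\hat{f}_n$ for an estimator) is illustrative rather than taken from the paper, and the Hellinger normalization shown is one common convention that may differ from the paper's by a constant.

% Squared Hellinger and total variation distances between densities f, g on R^d
% (illustrative notation; normalization conventions vary by a constant factor):
\[
  h^2(f,g) = \frac{1}{2}\int_{\mathbb{R}^d} \bigl(\sqrt{f(x)} - \sqrt{g(x)}\bigr)^2\,dx,
  \qquad
  \mathrm{TV}(f,g) = \frac{1}{2}\int_{\mathbb{R}^d} \bigl|f(x) - g(x)\bigr|\,dx.
\]
% Minimax squared-Hellinger risk over the log-concave class F_d,
% with the estimator \hat{f}_n built from n i.i.d. samples X_1, ..., X_n ~ f:
\[
  \mathcal{R}_n(\mathcal{F}_d)
    = \inf_{\hat{f}_n}\,\sup_{f \in \mathcal{F}_d}
      \mathbb{E}_{X_1,\dots,X_n \sim f}\, h^2\bigl(\hat{f}_n, f\bigr).
\]

To see the exponential sample complexity concretely: inverting the rate $n^{-2/(d+1)}$ (ignoring the log factor), achieving squared Hellinger accuracy $\varepsilon$ requires $n \approx \varepsilon^{-(d+1)/2}$; for instance, $\varepsilon = 0.01$ gives $n \approx 10^{d+1}$.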
