
Near-Optimal Sample Complexity Bounds for Maximum Likelihood Estimation of Multivariate Log-concave Densities

Annual Conference on Computational Learning Theory (COLT), 2018
28 February 2018
Timothy Carpenter
Ilias Diakonikolas
Anastasios Sidiropoulos
Alistair Stewart
arXiv:1802.10575
Abstract

We study the problem of learning multivariate log-concave densities with respect to a global loss function. We obtain the first upper bound on the sample complexity of the maximum likelihood estimator (MLE) for a log-concave density on $\mathbb{R}^d$, for all $d \geq 4$. Prior to this work, no finite sample upper bound was known for this estimator in more than $3$ dimensions. In more detail, we prove that for any $d \geq 1$ and $\epsilon > 0$, given $\tilde{O}_d((1/\epsilon)^{(d+3)/2})$ samples drawn from an unknown log-concave density $f_0$ on $\mathbb{R}^d$, the MLE outputs a hypothesis $h$ that with high probability is $\epsilon$-close to $f_0$ in squared Hellinger loss. A sample complexity lower bound of $\Omega_d((1/\epsilon)^{(d+1)/2})$ was previously known for any learning algorithm that achieves this guarantee. We thus establish that the sample complexity of the log-concave MLE is near-optimal, up to an $\tilde{O}(1/\epsilon)$ factor.
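A minimal back-of-the-envelope sketch (illustrative only, not from the paper) comparing the polynomial parts of the two bounds in the abstract. The helper names mle_upper_bound and info_lower_bound are hypothetical, and the dimension-dependent constants and polylogarithmic factors hidden by $\tilde{O}_d$ and $\Omega_d$ are dropped, so only the dependence on $1/\epsilon$ is compared:

```python
# Sketch: polynomial part of the MLE upper bound O~_d((1/eps)^{(d+3)/2})
# versus the known lower bound Omega_d((1/eps)^{(d+1)/2}).
# Constants and the polylog factors hidden by O~ are ignored.

def mle_upper_bound(d: int, eps: float) -> float:
    """Polynomial part of the MLE sample complexity upper bound."""
    return (1.0 / eps) ** ((d + 3) / 2)

def info_lower_bound(d: int, eps: float) -> float:
    """Polynomial part of the information-theoretic lower bound."""
    return (1.0 / eps) ** ((d + 1) / 2)

for d in (1, 4, 10):
    for eps in (0.1, 0.01):
        up, lo = mle_upper_bound(d, eps), info_lower_bound(d, eps)
        # The ratio is exactly 1/eps, matching the O~(1/eps) optimality gap.
        print(f"d={d:2d} eps={eps:5.2f}  upper={up:.3e}  lower={lo:.3e}  ratio={up / lo:.1f}")
```

Up to the hidden polylogarithmic factors, the ratio of the two bounds is exactly $1/\epsilon$ for every dimension $d$, which is the sense in which the log-concave MLE is near-optimal.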
