On Learning Mixtures of Well-Separated Gaussians

Oded Regev, Aravindan Vijayaraghavan
arXiv:1710.11592, 31 October 2017

Papers citing "On Learning Mixtures of Well-Separated Gaussians" (25 papers)
• How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance. Hongkang Li, Shuai Zhang, Yihua Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen. 12 Mar 2024.
• Gaussian Mixture Identifiability from degree 6 Moments. Alexander Taveira Blomenhofer. 07 Jul 2023.
• Private estimation algorithms for stochastic block models and mixture models. Hongjie Chen, Vincent Cohen-Addad, Tommaso d'Orsi, Alessandro Epasto, Jacob Imola, David Steurer, Stefan Tiegel. 11 Jan 2023. [FedML]
• Sample Complexity Bounds for Learning High-dimensional Simplices in Noisy Regimes. Amir Saberi, Amir Najafi, S. Motahari, B. Khalaj. 09 Sep 2022.
• List-Decodable Sparse Mean Estimation via Difference-of-Pairs Filtering. Ilias Diakonikolas, D. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas. 10 Jun 2022.
• Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures. A. Gupte, Neekon Vafa, Vinod Vaikuntanathan. 06 Apr 2022.
• Differentially-Private Clustering of Easy Instances. E. Cohen, Haim Kaplan, Yishay Mansour, Uri Stemmer, Eliad Tsfadia. 29 Dec 2021.
• Clustering Mixtures with Almost Optimal Separation in Polynomial Time. Jingkai Li, Allen Liu. 01 Dec 2021.
• Uniform Consistency in Nonparametric Mixture Models. Bryon Aragam, Ruiyi Yang. 31 Aug 2021.
• SoS Degree Reduction with Applications to Clustering and Robust Moment Estimation. David Steurer, Stefan Tiegel. 05 Jan 2021.
• Improved Convergence Guarantees for Learning Gaussian Mixture Models by EM and Gradient EM. Nimrod Segol, B. Nadler. 03 Jan 2021.
• Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models. Ilias Diakonikolas, D. Kane. 14 Dec 2020.
• Sparse PCA: Algorithms, Adversarial Perturbations and Certificates. Tommaso d'Orsi, Pravesh Kothari, Gleb Novikov, David Steurer. 12 Nov 2020. [AAML]
• Continuous LWE. Joan Bruna, O. Regev, M. Song, Yi Tang. 19 May 2020.
• Learning sums of powers of low-degree polynomials in the non-degenerate case. A. Garg, N. Kayal, Chandan Saha. 15 Apr 2020.
• The EM Algorithm gives Sample-Optimality for Learning Mixtures of Well-Separated Gaussians. Jeongyeol Kwon, Constantine Caramanis. 02 Feb 2020.
• Differentially Private Algorithms for Learning Mixtures of Separated Gaussians. Gautam Kamath, Or Sheffet, Vikrant Singhal, Jonathan R. Ullman. 09 Sep 2019. [FedML]
• Private Hypothesis Selection. Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu. 30 May 2019.
• EM Converges for a Mixture of Many Linear Regressions. Jeongyeol Kwon, Constantine Caramanis. 28 May 2019.
• Scalable K-Medoids via True Error Bound and Familywise Bandits. A. Babu, Saurabh Agarwal, Sudarshan Babu, Hariharan Chandrasekaran. 27 May 2019.
• Tight Kernel Query Complexity of Kernel Ridge Regression and Kernel $k$-means Clustering. Manuel Fernández, David P. Woodruff, T. Yasuda. 15 May 2019.
• Iterative Least Trimmed Squares for Mixed Linear Regression. Yanyao Shen, Sujay Sanghavi. 10 Feb 2019.
• Partial recovery bounds for clustering with the relaxed $K$means. Christophe Giraud, Nicolas Verzélen. 19 Jul 2018.
• Better Agnostic Clustering Via Relaxed Tensor Norms. Pravesh Kothari, Jacob Steinhardt. 20 Nov 2017.
• List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians. Ilias Diakonikolas, D. Kane, Alistair Stewart. 20 Nov 2017.