SGD Learns One-Layer Networks in WGANs
Qi Lei, Jason D. Lee, A. Dimakis, C. Daskalakis
International Conference on Machine Learning (ICML), 2019
15 October 2019

Papers citing "SGD Learns One-Layer Networks in WGANs"

26 papers:
Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization. Tianyi Lin, Chi Jin, Michael I. Jordan. 28 Jan 2025.
A Mathematical Framework for Learning Probability Distributions. Hongkang Yang. Journal of Machine Learning (JML), 2022. 22 Dec 2022.
Dissecting adaptive methods in GANs. Samy Jelassi, David Dobre, A. Mensch, Yuanzhi Li, Gauthier Gidel. 09 Oct 2022.
Learning (Very) Simple Generative Models Is Hard. Sitan Chen, Jungshian Li, Yuanzhi Li. Neural Information Processing Systems (NeurIPS), 2022. 31 May 2022.
Generalization Bounds of Nonconvex-(Strongly)-Concave Stochastic Minimax Optimization. Siqi Zhang, Yifan Hu, Liang Zhang, Niao He. International Conference on Artificial Intelligence and Statistics (AISTATS), 2022. 28 May 2022.
On the Nash equilibrium of moment-matching GANs for stationary Gaussian processes. Sixin Zhang. Mathematical and Scientific Machine Learning (MSML), 2022. 14 Mar 2022.
Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs. Sitan Chen, Jungshian Li, Yuanzhi Li, Raghu Meka. International Conference on Learning Representations (ICLR), 2022. 18 Jan 2022.
Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity. Junchi Yang, Antonio Orvieto, Aurelien Lucchi, Niao He. 10 Dec 2021.
On the Optimization Landscape of Maximum Mean Discrepancy. A. Itai, Amir Globerson, A. Wiesel. 26 Oct 2021.
Reversible Gromov-Monge Sampler for Simulation-Based Inference. Y. Hur, Wenxuan Guo, Tengyuan Liang. 28 Sep 2021.
Generalization Error of GAN from the Discriminator's Perspective. Hongkang Yang, Weinan E. Research in the Mathematical Sciences (Res. Math. Sci.), 2021. 08 Jul 2021.
Understanding Overparameterization in Generative Adversarial Networks. Yogesh Balaji, M. Sajedi, Neha Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi. International Conference on Learning Representations (ICLR), 2021. 12 Apr 2021.
The Complexity of Nonconvex-Strongly-Concave Minimax Optimization. Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He. Conference on Uncertainty in Artificial Intelligence (UAI), 2021. 29 Mar 2021.
WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points. Albert No, Taeho Yoon, Sehyun Kwon, Ernest K. Ryu. International Conference on Machine Learning (ICML), 2021. 15 Feb 2021.
Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent. Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras. Neural Information Processing Systems (NeurIPS), 2021. 13 Jan 2021.
Convergence and Sample Complexity of SGD in GANs. Vasilis Kontonis, Sihan Liu, Christos Tzamos. 01 Dec 2020.
Generalization and Memorization: The Bias Potential Model. Hongkang Yang, E. Weinan. Mathematical and Scientific Machine Learning (MSML), 2020. 29 Nov 2020.
Towards a Better Global Loss Landscape of GANs. Tian Ding, Tiantian Fang, Alex Schwing. 10 Nov 2020.
Why Adversarial Interaction Creates Non-Homogeneous Patterns: A Pseudo-Reaction-Diffusion Model for Turing Instability. Litu Rout. AAAI Conference on Artificial Intelligence (AAAI), 2020. 01 Oct 2020.
GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models. Farzan Farnia, William Wang, Subhro Das, Ali Jadbabaie. 18 Jun 2020.
Minimax Estimation of Conditional Moment Models. Nishanth Dikkala, Greg Lewis, Lester W. Mackey, Vasilis Syrgkanis. Neural Information Processing Systems (NeurIPS), 2020. 12 Jun 2020.
Making Method of Moments Great Again? -- How can GANs learn distributions. Yuanzhi Li, Zehao Dou. 09 Mar 2020.
GANs May Have No Nash Equilibria. Farzan Farnia, Asuman Ozdaglar. International Conference on Machine Learning (ICML), 2020. 21 Feb 2020.
A mean-field analysis of two-player zero-sum games. Carles Domingo-Enrich, Samy Jelassi, A. Mensch, Grant M. Rotskoff, Joan Bruna. Neural Information Processing Systems (NeurIPS), 2020. 14 Feb 2020.
How Well Generative Adversarial Networks Learn Distributions. Tengyuan Liang. Journal of Machine Learning Research (JMLR), 2018. 07 Nov 2018.
Adversarial Discriminative Domain Adaptation. Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell. Computer Vision and Pattern Recognition (CVPR), 2017. 17 Feb 2017.