arXiv:2012.00780 · Cited By
Refining Deep Generative Models via Discriminator Gradient Flow
International Conference on Learning Representations (ICLR), 2021
1 December 2020
Abdul Fatir Ansari, Ming Liang Ang, Harold Soh
ArXiv (abs) · PDF · HTML
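The listed paper proposes refining samples from a trained generator by following the gradient flow induced by a discriminator-estimated density ratio. As a rough illustration only (not the authors' code), the sketch below runs the Euler-Maruyama-style update x ← x + η∇d(x) + √(2γη)·ξ on a 1-D toy problem; the generator distribution N(2, 1), target N(0, 1), step sizes, and the analytic log-density-ratio standing in for the discriminator output d(x) are all assumptions made for this example.

```python
import numpy as np

def refine(x, eta=0.1, steps=10, gamma=0.0, rng=None):
    """Toy DGflow-style refinement: x <- x + eta * grad d(x) + sqrt(2*gamma*eta) * noise.

    Here d(x) = log p(x) - log q(x) for target p = N(0, 1) and generator
    q = N(2, 1), so grad d(x) = -x + (x - 2) = -2 (analytic stand-in for a
    learned discriminator). gamma = 0 gives the deterministic flow.
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(steps):
        grad_d = np.full_like(x, -2.0)  # analytic ∇d for this toy pair
        x = x + eta * grad_d + np.sqrt(2 * gamma * eta) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
samples = rng.normal(2.0, 1.0, size=10_000)  # "generator" samples, mean ~2
refined = refine(samples)                    # drift shifts the mean toward 0
```

With γ = 0 the ten steps of size 0.1 apply a total shift of -2, moving the sample mean from roughly 2 to roughly 0 while leaving the spread untouched; with a learned discriminator the gradient would of course be state-dependent rather than constant.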
Papers citing "Refining Deep Generative Models via Discriminator Gradient Flow" (33 papers shown)
Solving Inverse Problems via Diffusion-Based Priors: An Approximation-Free Ensemble Sampling Approach
Haoxuan Chen, Yinuo Ren, Martin Renqiang Min, Lexing Ying, Zachary Izzo. 04 Jun 2025. [DiffM, MedIm]
Improving Discriminator Guidance in Diffusion Models
Alexandre Verine, Mehdi Inane, Florian Le Bronnec, Benjamin Négrevergne, Y. Chevaleyre. 20 Mar 2025. [DiffM]
Inclusive KL Minimization: A Wasserstein-Fisher-Rao Gradient Flow Perspective
Jia-Jie Zhu. 31 Oct 2024.
Don't Start from Scratch: Behavioral Refinement via Interpolant-based Policy Diffusion
Kaiqi Chen, Eugene Lim, Kelvin Lin, Yiyang Chen, Harold Soh. 25 Feb 2024. [DiffM]
Scalable Wasserstein Gradient Flow for Generative Modeling through Unbalanced Optimal Transport
Jaemoo Choi, Jaewoong Choi, Myungjoo Kang. 08 Feb 2024.
Neural Sinkhorn Gradient Flow
Huminhao Zhu, Fangyikang Wang, Chao Zhang, Han Zhao, Hui Qian. 25 Jan 2024.
Optimal Budgeted Rejection Sampling for Generative Models
Alexandre Verine, Muni Sreenivas Pydi, Benjamin Négrevergne, Y. Chevaleyre. International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. 01 Nov 2023.
Bridging the Gap Between Variational Inference and Wasserstein Gradient Flows
Mingxuan Yi, Song Liu. 31 Oct 2023. [DRL]
Analyzing and Improving Optimal-Transport-based Adversarial Networks
Jaemoo Choi, Jaewoong Choi, Myungjoo Kang. International Conference on Learning Representations (ICLR), 2023. 04 Oct 2023. [OT]
Module-wise Training of Neural Networks via the Minimizing Movement Scheme
Skander Karkar, Bhaskar Sen, Emmanuel de Bezenac, Patrick Gallinari. Neural Information Processing Systems (NeurIPS), 2023. 29 Sep 2023.
Diffusion Models with Deterministic Normalizing Flow Priors
Mohsen Zand, Ali Etemad, Michael A. Greenspan. 03 Sep 2023. [DiffM]
Don't be so negative! Score-based Generative Modeling with Oracle-assisted Guidance
Saeid Naderiparizi, Xiaoxuan Liang, Setareh Cohan, Berend Zwartsenberg, Frank Wood. International Conference on Machine Learning (ICML), 2023. 31 Jul 2023. [DiffM]
Insights into Closed-form IPM-GAN Discriminator Guidance for Diffusion Modeling
Aadithya Srikanth, Siddarth Asokan, Nishanth Shetty, C. Seelamantula. 02 Jun 2023.
Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models
Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, Zhihua Zhang. Neural Information Processing Systems (NeurIPS), 2023. 29 May 2023.
Unifying GANs and Score-Based Diffusion as Generative Particle Models
Jean-Yves Franceschi, Mike Gartrell, Ludovic Dos Santos, Thibaut Issenhuth, Emmanuel de Bezenac, Mickaël Chen, A. Rakotomamonjy. Neural Information Processing Systems (NeurIPS), 2023. 25 May 2023. [DiffM]
Generative Modeling through the Semi-dual Formulation of Unbalanced Optimal Transport
Jaemoo Choi, Jaewoong Choi, Myung-joo Kang. Neural Information Processing Systems (NeurIPS), 2023. 24 May 2023. [OT]
Generative Sliced MMD Flows with Riesz Kernels
J. Hertrich, Christian Wald, Fabian Altekrüger, Paul Hagemann. International Conference on Learning Representations (ICLR), 2023. 19 May 2023.
TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation
Zhaoyan Liu, Noël Vouitsis, S. Gorti, Jimmy Ba, Gabriel Loaiza-Ganem. International Conference on Machine Learning (ICML), 2023. 26 Apr 2023. [ViT]
A mean-field games laboratory for generative modeling
Benjamin J. Zhang, Markos A. Katsoulakis. 26 Apr 2023.
Generative Modeling with Flow-Guided Density Ratio Learning
Alvin Heng, Abdul Fatir Ansari, Harold Soh. 07 Mar 2023.
Learning Probabilistic Models from Generator Latent Spaces with Hat EBM
Mitch Hill, Erik Nijkamp, Jonathan Mitchell, Bo Pang, Song-Chun Zhu. Neural Information Processing Systems (NeurIPS), 2022. 29 Oct 2022.
Block-wise Training of Residual Networks via the Minimizing Movement Scheme
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari. 03 Oct 2022.
Maximum Likelihood Training of Implicit Nonlinear Diffusion Models
Dongjun Kim, Byeonghu Na, S. Kwon, Dongsoo Lee, Wanmo Kang, Il-Chul Moon. Neural Information Processing Systems (NeurIPS), 2022. 27 May 2022. [DiffM]
GANs as Gradient Flows that Converge
Yu-Jui Huang, Yuchong Zhang. Journal of Machine Learning Research (JMLR), 2022. 05 May 2022.
Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders
Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou. International Conference on Learning Representations (ICLR), 2022. 19 Feb 2022. [DiffM]
Tackling the Generative Learning Trilemma with Denoising Diffusion GANs
Zhisheng Xiao, Karsten Kreis, Arash Vahdat. 15 Dec 2021. [DiffM]
Score-Based Generative Modeling with Critically-Damped Langevin Diffusion
Tim Dockhorn, Arash Vahdat, Karsten Kreis. 14 Dec 2021. [DiffM]
Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory
T. Chen, Guan-Horng Liu, Evangelos A. Theodorou. International Conference on Learning Representations (ICLR), 2021. 21 Oct 2021. [DiffM, OT]
Controllable and Compositional Generation with Latent-Space Energy-Based Models
Weili Nie, Arash Vahdat, Anima Anandkumar. Neural Information Processing Systems (NeurIPS), 2021. 21 Oct 2021.
Rethinking Multidimensional Discriminator Output for Generative Adversarial Networks
M. Dai, Haibin Hang, A. Srivastava. 08 Sep 2021.
BIGRoC: Boosting Image Generation via a Robust Classifier
Roy Ganz, Michael Elad. 08 Aug 2021.
Score-based Generative Modeling in Latent Space
Arash Vahdat, Karsten Kreis, Jan Kautz. Neural Information Processing Systems (NeurIPS), 2021. 10 Jun 2021. [DiffM]
EBMs Trained with Maximum Likelihood are Generator Models Trained with a Self-adverserial Loss
Zhisheng Xiao, Qing Yan, Y. Amit. 23 Feb 2021.