Learning Unnormalized Statistical Models via Compositional Optimization

13 June 2023
Wei Jiang, Jiayu Qin, Lingyu Wu, Changyou Chen, Tianbao Yang, Lijun Zhang

Papers citing "Learning Unnormalized Statistical Models via Compositional Optimization"

8 citing papers shown.

Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization
Wei Jiang, Sifan Yang, Wenhao Yang, Yibo Wang, Yuanyu Wan, Lijun Zhang
06 Jun 2024

Adaptive Variance Reduction for Stochastic Optimization under Weaker Assumptions
Wei Jiang, Sifan Yang, Yibo Wang, Lijun Zhang
04 Jun 2024

Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction
Wei Jiang, Sifan Yang, Wenhao Yang, Lijun Zhang
01 Jun 2024

Hierarchical VAEs Know What They Don't Know
Jakob Drachmann Havtorn, J. Frellsen, Søren Hauberg, Lars Maaløe
16 Feb 2021 · DRL

Solving Stochastic Compositional Optimization is Nearly as Easy as Solving Stochastic Optimization
Tianyi Chen, Yuejiao Sun, W. Yin
25 Aug 2020

Training Deep Energy-Based Models with f-Divergence Minimization
Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon
06 Mar 2020

A Mutual Information Maximization Perspective of Language Representation Learning
Lingpeng Kong, Cyprien de Masson d'Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama
18 Oct 2019 · SSL

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016