ResearchTrend.AI

A general sample complexity analysis of vanilla policy gradient
Rui Yuan, Robert Mansel Gower, A. Lazaric
arXiv:2107.11433 · 23 July 2021

Papers citing "A general sample complexity analysis of vanilla policy gradient"

44 papers shown.

  • RobotIQ: Empowering Mobile Robots with Human-Level Planning for Real-World Execution (18 Feb 2025)
    Emmanuel K. Raptis, Athanasios Ch. Kapoutsis, Elias B. Kosmatopoulos
  • Small steps no more: Global convergence of stochastic gradient bandits for arbitrary learning rates (11 Feb 2025)
    Jincheng Mei, Bo Dai, Alekh Agarwal, Sharan Vaswani, Anant Raj, Csaba Szepesvári, Dale Schuurmans
  • A learning-based approach to stochastic optimal control under reach-avoid constraint (21 Dec 2024)
    Tingting Ni, Maryam Kamgarpour
  • FedRLHF: A Convergence-Guaranteed Federated Framework for Privacy-Preserving and Personalized RLHF (20 Dec 2024)
    Flint Xiaofeng Fan, Cheston Tan, Yew-Soon Ong, Roger Wattenhofer, Wei Tsang Ooi
  • Structure Matters: Dynamic Policy Gradient (07 Nov 2024)
    Sara Klein, Xiangyuan Zhang, Tamer Basar, Simon Weissmann, Leif Döring
  • On The Global Convergence Of Online RLHF With Neural Parametrization (21 Oct 2024)
    Mudit Gaur, Amrit Singh Bedi, Raghu Pasupathy, Vaneet Aggarwal
  • Loss Landscape Characterization of Neural Networks without Over-Parametrization (16 Oct 2024)
    Rustem Islamov, Niccolò Ajroldi, Antonio Orvieto, Aurélien Lucchi
  • Improved Sample Complexity for Global Convergence of Actor-Critic Algorithms (11 Oct 2024)
    Navdeep Kumar, Priyank Agrawal, Giorgia Ramponi, Kfir Y. Levy, Shie Mannor
  • Towards Fast Rates for Federated and Multi-Task Reinforcement Learning (09 Sep 2024)
    Feng Zhu, Robert W. Heath Jr., Aritra Mitra
  • Complexity of Minimizing Projected-Gradient-Dominated Functions with Stochastic First-order Oracles (03 Aug 2024)
    Saeed Masiha, Saber Salehkaleybar, Niao He, Negar Kiyavash, Patrick Thiran
  • Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning (15 Jul 2024)
    Alessandro Montenegro, Marco Mussi, Matteo Papini, Alberto Maria Metelli
  • Almost sure convergence rates of stochastic gradient methods under gradient domination (22 May 2024)
    Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring
  • Policy Gradient with Active Importance Sampling (09 May 2024)
    Matteo Papini, Giorgio Manganini, Alberto Maria Metelli, Marcello Restelli
  • Learning Optimal Deterministic Policies with Stochastic Policy Gradients (03 May 2024)
    Alessandro Montenegro, Marco Mussi, Alberto Maria Metelli, Matteo Papini
  • Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis (09 Apr 2024)
    Guangchen Lan, Dong-Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton
  • Global Convergence Guarantees for Federated Policy Gradient Methods with Adversaries (15 Mar 2024)
    Swetha Ganesh, Jiayu Chen, Gugan Thoppe, Vaneet Aggarwal
  • Towards Efficient Risk-Sensitive Policy Gradient: An Iteration Complexity Analysis (13 Mar 2024)
    Rui Liu, Erfaun Noorani, Pratap Tokekar, John S. Baras
  • Towards Provable Log Density Policy Gradient (03 Mar 2024)
    Pulkit Katdare, Anant Joshi, Katherine Driggs-Campbell
  • Stochastic Gradient Succeeds for Bandits (27 Feb 2024)
    Jincheng Mei, Zixin Zhong, Bo Dai, Alekh Agarwal, Csaba Szepesvári, Dale Schuurmans
  • On the Complexity of Finite-Sum Smooth Optimization under the Polyak-Łojasiewicz Condition (04 Feb 2024)
    Yunyan Bai, Yuxing Liu, Luo Luo
  • On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization (23 Jan 2024)
    Ling Liang, Haizhao Yang
  • Global Convergence of Natural Policy Gradient with Hessian-aided Momentum Variance Reduction (02 Jan 2024)
    Jie Feng, Ke Wei, Jinchi Chen
  • A safe exploration approach to constrained Markov decision processes (01 Dec 2023)
    Tingting Ni, Maryam Kamgarpour
  • On the Second-Order Convergence of Biased Policy Gradient Algorithms (05 Nov 2023)
    Siqiao Mu, Diego Klabjan
  • Improved Sample Complexity Analysis of Natural Policy Gradient Algorithm with General Parameterization for Infinite Horizon Discounted Reward Markov Decision Processes (18 Oct 2023)
    Washim Uddin Mondal, Vaneet Aggarwal
  • Beyond Stationarity: Convergence Analysis of Stochastic Softmax Policy Gradient Methods (04 Oct 2023)
    Sara Klein, Simon Weissmann, Leif Döring
  • A Homogenization Approach for Gradient-Dominated Stochastic Optimization (21 Aug 2023)
    Jiyuan Tan, Chenyu Xue, Chuwen Zhang, Qi Deng, Dongdong Ge, Yinyu Ye
  • Fine-Tuning Language Models with Advantage-Induced Policy Alignment (04 Jun 2023)
    Banghua Zhu, Hiteshi Sharma, Felipe Vieira Frujeri, Shi Dong, Chenguang Zhu, Michael I. Jordan, Jiantao Jiao
  • Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space (02 Jun 2023)
    Anas Barakat, Ilyas Fatkhullin, Niao He
  • On the Linear Convergence of Policy Gradient under Hadamard Parameterization (31 May 2023)
    Jiacai Liu, Jinchi Chen, Ke Wei
  • Decision-Aware Actor-Critic with Function Approximation and Theoretical Guarantees (24 May 2023)
    Sharan Vaswani, A. Kazemi, Reza Babanezhad, Nicolas Le Roux
  • Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted Markov Decision Processes (22 Feb 2023)
    Emmeran Johnson, Ciara Pike-Burke, Patrick Rebeschini
  • Stochastic Policy Gradient Methods: Improved Sample Complexity for Fisher-non-degenerate Policies (03 Feb 2023)
    Ilyas Fatkhullin, Anas Barakat, Anastasia Kireeva, Niao He
  • A Novel Framework for Policy Mirror Descent with General Parameterization and Linear Convergence (30 Jan 2023)
    Carlo Alfano, Rui Yuan, Patrick Rebeschini
  • Stochastic Dimension-reduced Second-order Methods for Policy Optimization (28 Jan 2023)
    Jinsong Liu, Chen Xie, Qinwen Deng, Dongdong Ge, Yi-Li Ye
  • Understanding the Complexity Gains of Single-Task RL with a Curriculum (24 Dec 2022)
    Qiyang Li, Yuexiang Zhai, Yi-An Ma, Sergey Levine
  • On the Global Convergence of Fitted Q-Iteration with Two-layer Neural Network Parametrization (14 Nov 2022)
    Mudit Gaur, Vaneet Aggarwal, Mridul Agarwal
  • From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent (13 Oct 2022)
    Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan
  • Stochastic Second-Order Methods Improve Best-Known Sample Complexity of SGD for Gradient-Dominated Function (25 May 2022)
    Saeed Masiha, Saber Salehkaleybar, Niao He, Negar Kiyavash, Patrick Thiran
  • Beyond Exact Gradients: Convergence of Stochastic Soft-Max Policy Gradient Methods with Entropy Regularization (19 Oct 2021)
    Yuhao Ding, Junzi Zhang, Hyunin Lee, Javad Lavaei
  • On the Global Optimum Convergence of Momentum-based Policy Gradient (19 Oct 2021)
    Yuhao Ding, Junzi Zhang, Javad Lavaei
  • On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method (17 Feb 2021)
    Junyu Zhang, Chengzhuo Ni, Zheng Yu, Csaba Szepesvári, Mengdi Wang
  • On the Sample Complexity of Actor-Critic Method for Reinforcement Learning with Function Approximation (18 Oct 2019)
    Harshat Kumar, Alec Koppel, Alejandro Ribeiro
  • Smoothing Policies and Safe Policy Gradients (08 May 2019)
    Matteo Papini, Matteo Pirotta, Marcello Restelli