ResearchTrend.AI
Compatible Gradient Approximations for Actor-Critic Algorithms

2 September 2024
Baturay Saglam
Dionysis Kalogerias
Abstract

Deterministic policy gradient algorithms are foundational for actor-critic methods in controlling continuous systems, yet they often suffer from inaccuracies due to their dependence on the derivative of the critic's value estimates with respect to input actions. This reliance requires precise action-value gradient computations, a task that proves challenging under function approximation. We introduce an actor-critic algorithm that bypasses the need for such precision by employing a zeroth-order approximation of the action-value gradient through two-point stochastic gradient estimation within the action space. This approach provably and effectively addresses compatibility issues inherent in deterministic policy gradient schemes. Empirical results further demonstrate that our algorithm not only matches but frequently exceeds the performance of current state-of-the-art methods by a substantial margin.
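The core idea in the abstract, estimating the action-value gradient without differentiating the critic, can be illustrated with a generic two-point (Gaussian-smoothing) zeroth-order estimator. This is a minimal sketch of that estimation technique, not the authors' exact algorithm; the function name, the choice of Gaussian perturbation directions, and the quadratic test critic below are all illustrative assumptions.

```python
import numpy as np

def two_point_grad_estimate(q, a, delta=1e-3, n_samples=1, rng=None):
    """Two-point zeroth-order estimate of the action-value gradient grad_a q(a).

    Averages n_samples Gaussian-smoothing estimates of the form
        g ~= (q(a + delta*u) - q(a - delta*u)) / (2*delta) * u,  u ~ N(0, I),
    which in expectation approximates grad_a q(a) for small delta.
    Only evaluations of q are needed -- no derivative of the critic.
    """
    rng = np.random.default_rng(rng)
    d = a.shape[0]
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)              # random perturbation direction
        g += (q(a + delta * u) - q(a - delta * u)) / (2.0 * delta) * u
    return g / n_samples

# Illustrative check on a toy quadratic "critic" q(a) = -||a||^2,
# whose true action gradient is -2a:
a = np.array([0.5, -1.0, 2.0])
q = lambda x: -float(x @ x)
g = two_point_grad_estimate(q, a, delta=1e-3, n_samples=50000, rng=0)
```

Averaged over many sampled directions, `g` concentrates around the true gradient `-2a`; in an actor-critic loop, such an estimate would replace the backpropagated critic derivative in the deterministic policy gradient update.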

@article{saglam2025_2409.01477,
  title={Compatible Gradient Approximations for Actor-Critic Algorithms},
  author={Baturay Saglam and Dionysis Kalogerias},
  journal={arXiv preprint arXiv:2409.01477},
  year={2025}
}