
q-exponential family for policy optimization

International Conference on Learning Representations (ICLR), 2024
14 August 2024
Lingwei Zhu
Haseeb Shah
Han Wang
Martha White
    OffRL
Main: 10 pages · Appendix: 14 pages · Bibliography: 3 pages · 24 figures · 8 tables
Abstract

Policy optimization methods benefit from a simple and tractable policy functional form, usually the Gaussian for continuous action spaces. In this paper, we consider a broader policy family that remains tractable: the q-exponential family. This family of policies is flexible, allowing the specification of both heavy-tailed policies (q > 1) and light-tailed policies (q < 1). This paper examines how q-exponential policies interact with several actor-critic algorithms on both online and offline problems. We find that heavy-tailed policies are more effective in general and can consistently improve on the Gaussian. In particular, we find the Student's t-distribution to be more stable than the Gaussian across settings, and that a heavy-tailed q-Gaussian used with Tsallis Advantage Weighted Actor-Critic consistently performs well on offline benchmark problems. Our code is available at https://github.com/lingweizhu/qexp.
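
For q in the heavy-tailed regime 1 < q < 3, the q-Gaussian coincides with a Student's t-distribution with nu = (3 - q)/(q - 1) degrees of freedom, which is what makes such policies easy to sample and evaluate. The sketch below (not the authors' code; see the repository linked above for their implementation) illustrates this correspondence in PyTorch; the function name and hyperparameters are illustrative.

```python
# Minimal sketch of a heavy-tailed q-Gaussian policy via the Student's t
# correspondence. Assumption: for 1 < q < 3, a q-Gaussian is a Student's t with
# nu = (3 - q) / (q - 1) degrees of freedom (standard Tsallis-statistics identity).
import torch
from torch.distributions import StudentT, Independent

def heavy_tailed_policy(mean: torch.Tensor, scale: torch.Tensor, q: float = 1.5):
    """Return a heavy-tailed action distribution for q in (1, 3)."""
    assert 1.0 < q < 3.0, "heavy-tailed regime of the q-Gaussian"
    nu = (3.0 - q) / (q - 1.0)      # degrees of freedom implied by q
    base = StudentT(df=nu, loc=mean, scale=scale)
    return Independent(base, 1)     # treat action dimensions as one event

# Illustrative usage: a 4-dimensional continuous action space
mean = torch.zeros(4)
scale = 0.5 * torch.ones(4)
pi = heavy_tailed_policy(mean, scale, q=1.5)
action = pi.sample()
log_prob = pi.log_prob(action)      # usable in an actor-critic loss
```

As q approaches 1 the degrees of freedom grow and the distribution approaches the Gaussian, so q directly controls how heavy the policy's tails are.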
