
DSAC: Distributional Soft Actor-Critic for Risk-Sensitive Reinforcement Learning

Journal of Artificial Intelligence Research (JAIR), 2020
Main: 16 pages · Bibliography: 4 pages · Appendix: 8 pages · 12 figures · 7 tables
Abstract

We present Distributional Soft Actor-Critic (DSAC), a distributional reinforcement learning (RL) algorithm that combines distributional information about accumulated rewards with the entropy-driven exploration of the Soft Actor-Critic (SAC) algorithm. DSAC models the randomness in both actions and rewards, surpassing baseline performance on various continuous control tasks. Unlike standard approaches that solely maximize expected rewards, we propose a unified framework for risk-sensitive learning, one that optimizes a risk-related objective while balancing entropy to encourage exploration. Extensive experiments demonstrate DSAC's effectiveness in enhancing agent performance on both risk-neutral and risk-sensitive control tasks.
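The idea of optimizing a risk-related objective over the return distribution, balanced by an entropy term, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the choice of CVaR as the risk measure, and the scalar entropy bonus are illustrative assumptions; a real agent would learn the return quantiles with a distributional critic.

```python
import numpy as np

def risk_objective(quantiles, alpha=1.0, entropy=0.0, temperature=0.2):
    """Hypothetical sketch of an entropy-regularized, risk-sensitive objective.

    quantiles: estimated quantiles of the return distribution (e.g. from a
               distributional critic).
    alpha:     risk level; alpha=1.0 recovers the risk-neutral mean, while
               alpha<1 gives CVaR over the worst alpha-fraction of outcomes.
    entropy:   policy entropy estimate, weighted by `temperature` as in
               entropy-regularized RL.
    """
    q = np.sort(np.asarray(quantiles, dtype=float))
    k = max(1, int(np.ceil(alpha * len(q))))
    risk_value = q[:k].mean()  # CVaR_alpha: mean of the worst k quantiles
    return risk_value + temperature * entropy

# Risk-neutral vs. risk-averse evaluation of the same return distribution:
# alpha=1.0 averages all quantiles; alpha=0.5 averages only the worst half,
# so a heavy lower tail lowers the objective and discourages risky actions.
```

With quantiles `[-2, 0, 1, 3]` and no entropy bonus, `alpha=1.0` yields the mean return 0.5, while `alpha=0.5` averages only the worst two outcomes and yields -1.0, illustrating how the same distribution scores differently under risk-neutral and risk-averse objectives.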
