arXiv:2005.11885
Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications

25 May 2020
Jiaye Lin, Y. Zou, Xiaoru Dong, Shimin Gong, D. Hoang, Dusit Niyato
Abstract

Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver. In this paper, we minimize the AP's transmit power by jointly optimizing the AP's active beamforming and the IRS's passive beamforming. Due to uncertain channel conditions, we formulate a robust power minimization problem subject to the receiver's signal-to-noise ratio (SNR) requirement and the IRS's power budget constraint. We propose a deep reinforcement learning (DRL) approach that adapts the beamforming strategies based on past experiences. To improve the learning performance, we derive a convex approximation as a lower bound on the robust problem and integrate it into the DRL framework, yielding a novel optimization-driven deep deterministic policy gradient (DDPG) approach. In particular, when the DDPG algorithm generates one part of the action (e.g., the passive beamforming), we can use the model-based convex approximation to optimize the other part (e.g., the active beamforming) more efficiently. Our simulation results demonstrate that the optimization-driven DDPG algorithm significantly improves both the learning rate and the reward performance compared to the conventional model-free DDPG algorithm.
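To make the hybrid action split concrete, the sketch below shows one plausible form of the model-based step under simplifying assumptions: the DDPG actor proposes the IRS phase shifts, and the active beamformer is then recovered in closed form via maximum-ratio transmission scaled to meet the SNR target. This assumes perfect channel knowledge (the paper's robust formulation instead bounds the channel uncertainty with a convex approximation); the function name, variable names, and the MRT closed form are illustrative, not the paper's exact procedure.

```python
import numpy as np

def hybrid_action(theta, h_d, h_r, G, snr_min, noise_pow):
    """Optimization-driven action split (illustrative sketch).

    theta     : IRS phase shifts proposed by the DDPG actor, shape (N,)
    h_d       : direct AP-to-receiver channel, shape (M,)
    h_r       : IRS-to-receiver channel, shape (N,)
    G         : AP-to-IRS channel matrix, shape (N, M)
    snr_min   : receiver SNR requirement (linear scale)
    noise_pow : receiver noise power

    Returns the active beamformer w, shape (M,), that meets the SNR
    target with minimum transmit power for the given passive beamforming,
    along with that transmit power.
    """
    # Effective cascaded channel under the actor's phase shifts:
    # h_eff = h_d + G^H diag(e^{j*theta}) h_r
    h_eff = h_d + G.conj().T @ (np.exp(1j * theta) * h_r)

    # With theta fixed, the power-minimizing beamformer under a single
    # SNR constraint is maximum-ratio transmission along h_eff, scaled
    # so the constraint holds with equality:
    #   |h_eff^H w|^2 / noise_pow = snr_min
    gain = np.linalg.norm(h_eff) ** 2
    p_tx = snr_min * noise_pow / gain            # minimum transmit power
    w = np.sqrt(p_tx) * h_eff / np.linalg.norm(h_eff)
    return w, p_tx
```

In this division of labor, the learned policy only has to search the passive-beamforming subspace, while the active beamformer is always set optimally for whatever phase shifts the actor proposes, which is consistent with the faster learning the abstract reports for the optimization-driven variant.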
