A Hybrid Reinforcement Learning Framework for Hard Latency Constrained Resource Scheduling

Abstract

In the forthcoming 6G era, extended reality (XR) is regarded as an emerging application for ultra-reliable and low-latency communications (URLLC), with new traffic characteristics and more stringent requirements. Beyond the quasi-periodic traffic of XR, burst traffic with both large frame sizes and random arrivals in some real-world low-latency communication scenarios has become a leading cause of network congestion or even collapse, and an efficient algorithm for the resource scheduling problem under burst traffic with hard latency constraints is still lacking. We propose a novel hybrid reinforcement learning framework for resource scheduling with hard latency constraints (HRL-RSHLC), which reuses both old policies learned in other similar environments and domain-knowledge-based (DK) policies constructed from expert knowledge to improve performance. The joint optimization of the policy reuse probabilities and the new policy is formulated as a Markov Decision Process (MDP), which maximizes the hard-latency-constrained effective throughput (HLC-ET) of users. We prove that the proposed HRL-RSHLC converges to KKT points from an arbitrary initial point. Simulations show that HRL-RSHLC achieves superior performance with faster convergence compared to baseline algorithms.

@article{zhang2025_2504.03721,
  title={A Hybrid Reinforcement Learning Framework for Hard Latency Constrained Resource Scheduling},
  author={Luyuan Zhang and An Liu and Kexuan Wang},
  journal={arXiv preprint arXiv:2504.03721},
  year={2025}
}