Learning Power Control Protocol for In-Factory 6G Subnetworks

Abstract

In-X Subnetworks are envisioned to meet the stringent demands of short-range communication in diverse 6G use cases. In In-Factory scenarios, effective power control is critical to mitigating interference caused by potentially high subnetwork density. Existing approaches to power control in this domain have predominantly emphasized the data plane, often overlooking the impact of signaling overhead. Furthermore, prior work has typically adopted a network-centric perspective, assuming that complete and up-to-date channel state information (CSI) is readily available at a central controller. This paper introduces a multi-agent reinforcement learning (MARL) framework that enables access points to autonomously learn both signaling and power control protocols in an In-Factory Subnetwork environment. The problem is formulated as a partially observable Markov decision process (POMDP) and solved with multi-agent proximal policy optimization (MAPPO), so that each access point jointly decides when to exchange signaling messages and at what power to transmit. Simulation results demonstrate that the learning-based method reduces signaling overhead by a factor of 8 while maintaining a buffer flush rate that lags the ideal "Genie" approach by only 5%.
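To make the formulation concrete, the sketch below illustrates (in plain Python, not the paper's implementation) the two ingredients the abstract names: a joint action space in which each access-point agent picks both a transmit power level and a signaling decision, and the clipped surrogate objective that PPO/MAPPO optimizes per agent. The power levels and action encoding are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch only (not the paper's implementation). Each
# access-point agent in a MAPPO setup selects a joint action:
# (transmit power level, whether to send a signaling message).
# The power levels below are assumed for illustration.

POWER_LEVELS_DBM = [-10, 0, 10, 20]   # discrete power actions (assumed)
SIGNAL_ACTIONS = [0, 1]               # 1 = exchange a signaling message

def joint_action_space():
    """Enumerate the joint (power, signaling) actions one agent can take."""
    return [(p, s) for p in POWER_LEVELS_DBM for s in SIGNAL_ACTIONS]

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective used by PPO/MAPPO:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)

if __name__ == "__main__":
    print(len(joint_action_space()))      # 4 power levels x 2 signaling = 8
    print(ppo_clip_objective(1.5, 1.0))   # ratio clipped down to 1.2
    print(ppo_clip_objective(0.5, -1.0))  # ratio clipped up to 0.8 -> -0.8
```

The clipping keeps each policy update close to the previous policy, which is what lets every access point learn its signaling and power decisions concurrently without destabilizing the other agents' training.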

@article{uyoata2025_2505.05967,
  title={Learning Power Control Protocol for In-Factory 6G Subnetworks},
  author={Uyoata E. Uyoata and Gilberto Berardinelli and Ramoni Adeogun},
  journal={arXiv preprint arXiv:2505.05967},
  year={2025}
}