ResearchTrend.AI
arXiv:2203.14339
Distributed Link Sparsification for Scalable Scheduling Using Graph Neural Networks

27 March 2022
Zhongyuan Zhao
A. Swami
Santiago Segarra
Topics: GNN
Abstract

Distributed scheduling algorithms for throughput or utility maximization in dense wireless multi-hop networks can have overwhelmingly high overhead, causing increased congestion, energy consumption, radio footprint, and security vulnerability. For wireless networks with dense connectivity, we propose a distributed scheme for link sparsification with graph convolutional networks (GCNs), which can reduce the scheduling overhead while keeping most of the network capacity. In a nutshell, a trainable GCN module generates node embeddings as topology-aware and reusable parameters for a local decision mechanism, based on which a link can withdraw itself from the scheduling contention if it is not likely to win. In medium-sized wireless networks, our proposed sparse scheduler beats classical threshold-based sparsification policies by retaining almost 70% of the total capacity achieved by a distributed greedy max-weight scheduler with 0.4% of the point-to-point message complexity and 2.6% of the average number of interfering neighbors per link.
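The mechanism described in the abstract — GCN-produced, topology-aware node embeddings feeding a purely local withdraw-or-contend decision per link — can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: the conflict-graph size, the single untrained GCN layer, the per-link queue weights, and the embedding-derived threshold rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, feats, weight):
    """One graph-convolution step with symmetric normalization:
    tanh(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    return np.tanh(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight)

# Toy conflict graph: nodes are links of the wireless network,
# edges connect mutually interfering links (illustrative topology).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)

queue_w = np.array([5.0, 1.0, 0.5, 4.0])          # assumed per-link weights
feats = queue_w[:, None]                          # 1-d input feature per node
W = rng.normal(size=(1, 4))                       # random (untrained) GCN weights

emb = gcn_layer(adj, feats, W)                    # topology-aware embeddings

# Local withdrawal rule (assumption): each link compares its own weight
# against a threshold derived from its own embedding, with no further
# message exchange; links below threshold drop out of the contention.
threshold = np.abs(emb).mean(axis=1) * queue_w.mean()
stay = queue_w > threshold
print("links kept in contention:", np.flatnonzero(stay))
```

The key property being sketched is locality: once the embeddings are computed, each link decides to withdraw using only its own state, which is what reduces the point-to-point message complexity of the subsequent max-weight contention.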
