Scalable Policy Maximization Under Network Interference

23 May 2025
Aidan Gleich, Eric B. Laber, Alexander Volfovsky
Main: 10 pages · 5 figures · Bibliography: 4 pages · Appendix: 1 page
Abstract

Many interventions, such as vaccines in clinical trials or coupons in online marketplaces, must be assigned sequentially without full knowledge of their effects. Multi-armed bandit algorithms have proven successful in such settings. However, standard independence assumptions fail when the treatment status of one individual impacts the outcomes of others, a phenomenon known as interference. We study optimal-policy learning under interference on a dynamic network. Existing approaches to this problem require repeated observations of the same fixed network and struggle to scale beyond sample sizes of as few as fifteen connected units; both constraints limit applications. We show that under common assumptions on the structure of interference, rewards become linear. This enables us to develop a scalable Thompson sampling algorithm that maximizes policy impact when a new n-node network is observed each round. We prove a Bayesian regret bound that is sublinear in n and the number of rounds. Simulation experiments show that our algorithm learns quickly and outperforms existing methods. These results close a key scalability gap between causal inference methods for interference and practical bandit algorithms, enabling policy optimization in large-scale networked systems.
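The abstract's key step is that interference assumptions make each unit's reward linear in a low-dimensional exposure feature, which reduces each round to a conjugate Bayesian linear-regression update. The sketch below is not the paper's algorithm; it is a minimal, generic linear Thompson sampling loop under assumed choices: a three-feature exposure mapping (intercept, own treatment, treated-neighbor fraction), a coordinate-ascent allocation heuristic in place of the exact combinatorial argmax, a ridge prior, and illustrative names throughout.

    import numpy as np

    rng = np.random.default_rng(0)

    def exposure_features(treat, adj):
        """Hypothetical exposure mapping: each unit's features are
        [intercept, own treatment, fraction of treated neighbors]."""
        deg = np.maximum(adj.sum(axis=1), 1.0)
        frac = (adj @ treat) / deg
        return np.column_stack([np.ones(len(treat)), treat, frac])

    def thompson_round(adj, B, f, sigma2=1.0, sweeps=3):
        """One round on a fresh n-node network: draw a parameter from the
        Gaussian posterior N(B^-1 f, sigma2 B^-1), then choose treatments
        that maximize the sampled linear reward."""
        n = adj.shape[0]
        theta = rng.multivariate_normal(np.linalg.solve(B, f),
                                        sigma2 * np.linalg.inv(B))
        # Coordinate ascent over binary treatments; the exact argmax over
        # {0,1}^n is combinatorial, so this is an illustrative heuristic.
        treat = rng.integers(0, 2, n).astype(float)
        for _ in range(sweeps):
            for i in range(n):
                totals = []
                for a in (0.0, 1.0):
                    treat[i] = a
                    totals.append((exposure_features(treat, adj) @ theta).sum())
                treat[i] = float(np.argmax(totals))
        return treat

    def update_posterior(B, f, treat, adj, rewards):
        """Conjugate Bayesian linear-regression update from observed rewards."""
        X = exposure_features(treat, adj)
        return B + X.T @ X, f + X.T @ rewards

    # Toy usage: ridge prior, a fresh random 20-node network each round.
    d = 3
    B, f = np.eye(d), np.zeros(d)
    for t in range(50):
        adj = (rng.random((20, 20)) < 0.1).astype(float)
        adj = np.triu(adj, 1); adj = adj + adj.T        # undirected graph
        treat = thompson_round(adj, B, f)
        X = exposure_features(treat, adj)
        rewards = X @ np.array([0.1, 0.5, 0.8]) + rng.normal(0, 1, 20)  # toy truth
        B, f = update_posterior(B, f, treat, adj, rewards)

Because the posterior statistics (B, f) live in the feature dimension rather than the network size, each update costs the same regardless of n; this is the sense in which linearity buys scalability, though the allocation step here remains heuristic.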

@article{gleich2025_2505.18118,
  title={Scalable Policy Maximization Under Network Interference},
  author={Aidan Gleich and Eric Laber and Alexander Volfovsky},
  journal={arXiv preprint arXiv:2505.18118},
  year={2025}
}