Learn-and-Adapt Stochastic Dual Gradients for Network Resource Allocation

5 March 2017
Tianyi Chen, Qing Ling, G. Giannakis
arXiv: 1703.01673
Abstract

Network resource allocation has regained popularity in the era of data deluge and information explosion. Existing stochastic optimization approaches fall short of attaining a desirable cost-delay tradeoff. Recognizing the central role of Lagrange multipliers in network resource allocation, a novel learn-and-adapt stochastic dual gradient (LA-SDG) method is developed in this paper to learn the empirically optimal Lagrange multiplier from historical data and adapt the upcoming resource allocation strategy accordingly. Remarkably, it requires only one more sample (gradient) evaluation than the celebrated stochastic dual gradient (SDG) method. LA-SDG can be interpreted either as a foresighted learning approach that anticipates future states, or, from an optimization viewpoint, as a modified heavy-ball method. It is established, both theoretically and empirically, that LA-SDG markedly improves the cost-delay tradeoff over state-of-the-art allocation schemes.
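To make the learn-and-adapt idea concrete, the sketch below runs a toy single-link allocation problem in Python. The problem setup (uniform arrivals and prices), the step sizes mu and eta, and the additive combination gamma = lam + theta are illustrative assumptions, not the paper's exact LA-SDG update rules; the sketch only mirrors the structure described in the abstract: a plain SDG multiplier update plus one extra sample per slot used to learn a statistical multiplier from data.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 20_000      # number of time slots
x_max = 2.0     # per-slot service capacity
mu = 0.05       # step size for the instantaneous (queue-like) multiplier
eta = 0.01      # step size for the learned statistical multiplier

def sample_state():
    """Draw one random network state: (arrival, unit price)."""
    a = rng.uniform(0.0, 1.0)   # stochastic arrival
    p = rng.uniform(0.5, 1.5)   # stochastic unit cost of service
    return a, p

def allocate(price, dual):
    """Per-slot primal step: minimize price*x + dual*(a - x) over x in [0, x_max].
    The minimizer is bang-bang: serve at full rate when the dual exceeds the price."""
    return x_max if dual > price else 0.0

lam = 0.0     # instantaneous multiplier, updated as in plain SDG
theta = 0.0   # statistical multiplier learned from extra samples

total_cost, total_backlog = 0.0, 0.0
for t in range(T):
    a_t, p_t = sample_state()

    # "Adapt": allocate using an effective multiplier that combines the
    # instantaneous and learned components (a simple sum here; the paper's
    # exact combination and weighting may differ).
    gamma = lam + theta
    x_t = allocate(p_t, gamma)

    # SDG-style update of the instantaneous multiplier, projected onto
    # the nonnegative orthant (a_t - x_t is the constraint violation).
    lam = max(lam + mu * (a_t - x_t), 0.0)

    # "Learn": one extra sample (the single additional gradient evaluation
    # mentioned in the abstract) drives stochastic dual ascent on theta.
    a_s, p_s = sample_state()
    x_s = allocate(p_s, theta)
    theta = max(theta + eta * (a_s - x_s), 0.0)

    total_cost += p_t * x_t
    total_backlog += max(a_t - x_t, 0.0)

print(f"avg cost    : {total_cost / T:.3f}")
print(f"avg backlog : {total_backlog / T:.3f}")
```

The split into lam and theta is the key design idea: lam reacts to the instantaneous constraint violation as in SDG, while theta slowly converges toward a near-optimal multiplier learned from sampled data, so the effective multiplier reaches a good operating point with far less queue buildup than SDG alone.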
