
Nash Convergence of Mean-Based Learning Algorithms in First-Price Auctions

The Web Conference (WWW), 2021
8 October 2021
Xiaotie Deng
Xinyan Hu
Tao Lin
Weiqiang Zheng
arXiv: 2110.03906 (abs | PDF | HTML)
Main: 21 pages · 6 figures · Bibliography: 5 pages · Appendix: 14 pages
Abstract

The convergence properties of learning dynamics in repeated auctions are a timely and important question, with numerous applications in, e.g., online advertising markets. This work focuses on repeated first-price auctions where bidders with fixed values learn to bid using mean-based algorithms, a large class of online learning algorithms that includes popular no-regret algorithms such as Multiplicative Weights Update and Follow the Perturbed Leader. We completely characterize the learning dynamics of mean-based algorithms under two notions of convergence: (1) time-average: the fraction of rounds in which bidders play a Nash equilibrium converges to 1; (2) last-iterate: the mixed strategy profile of bidders converges to a Nash equilibrium. Specifically, the results depend on the number of bidders with the highest value:

- If the number is at least three, the dynamics almost surely converges to a Nash equilibrium of the auction, in both time-average and last-iterate.
- If the number is two, the dynamics almost surely converges to a Nash equilibrium in time-average but not necessarily in last-iterate.
- If the number is one, the dynamics may fail to converge to a Nash equilibrium in either time-average or last-iterate.

Our discovery opens up new possibilities in the study of the convergence of learning dynamics.
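The setting above can be made concrete with a small simulation. The sketch below, which is illustrative and not taken from the paper, has each bidder run Multiplicative Weights Update (one of the mean-based algorithms named in the abstract) over a discrete bid grid in a repeated first-price auction. It assumes full-information feedback (each bidder sees all bids and updates every bid by its counterfactual utility) and a two-way tie for the counterfactual tie case; the function name and parameters are invented for this example.

```python
import random

def simulate_fpa(values, eta=0.1, rounds=1000, seed=0):
    """Repeated first-price auction where each bidder runs Multiplicative
    Weights Update (MWU) over the discrete bid grid {0, 1, ..., value}.
    A bidder's utility is (value - bid) on a win, 0 otherwise; ties are
    broken uniformly at random."""
    rng = random.Random(seed)
    weights = [[1.0] * (v + 1) for v in values]  # one weight per feasible bid
    win_counts = [0] * len(values)
    for _ in range(rounds):
        # each bidder samples a bid with probability proportional to its weights
        bids = [rng.choices(range(len(w)), weights=w)[0] for w in weights]
        top = max(bids)
        winner = rng.choice([i for i, b in enumerate(bids) if b == top])
        win_counts[winner] += 1
        # full-information MWU update on counterfactual utilities
        for i, (v, w) in enumerate(zip(values, weights)):
            others_top = max(b for j, b in enumerate(bids) if j != i)
            for b in range(len(w)):
                if b > others_top:
                    u = v - b            # this bid would outbid everyone else
                elif b == others_top:
                    u = (v - b) / 2.0    # expected utility in a two-way tie
                else:
                    u = 0.0
                w[b] *= (1.0 + eta) ** u
            mx = max(w)                  # renormalize to avoid overflow
            for b in range(len(w)):
                w[b] /= mx
    return weights, win_counts
```

Running this with, say, three bidders of equal value lets one inspect whether the empirical play concentrates on an equilibrium bid, which is the kind of time-average and last-iterate behavior the paper characterizes analytically.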
