ResearchTrend.AI
Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search

8 February 2025
Ziyad Benomar
Lorenzo Croissant
Vianney Perchet
Spyros Angelopoulos
Abstract

One-max search is a classic problem in online decision-making, in which a trader acts on a sequence of revealed prices and accepts one of them irrevocably to maximise its profit. The problem has been studied both in probabilistic and in worst-case settings, notably through competitive analysis, and more recently in learning-augmented settings in which the trader has access to a prediction on the sequence. However, existing approaches either lack smoothness, or do not achieve optimal worst-case guarantees: they do not attain the best possible trade-off between the consistency and the robustness of the algorithm. We close this gap by presenting the first algorithm that simultaneously achieves both of these important objectives. Furthermore, we show how to leverage the obtained smoothness to provide an analysis of one-max search in stochastic learning-augmented settings which capture randomness in both the observed prices and the prediction.
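The setting described in the abstract can be made concrete with a small sketch. Below is a minimal, illustrative implementation of threshold-based one-max search: the trader accepts the first price that clears a reservation threshold, and is forced to take the last price otherwise. The classic worst-case threshold for prices in [m, M] is sqrt(m·M); the prediction-aware threshold shown here is a simple interpolation for illustration only, not the Pareto-optimal algorithm of this paper.

```python
import math

def one_max_search(prices, threshold):
    """Accept the first price >= threshold; if none clears it,
    the trader is forced to take the final revealed price."""
    for p in prices[:-1]:
        if p >= threshold:
            return p
    return prices[-1]

# Worst-case setting: prices lie in [m, M]. The classic reservation
# price sqrt(m*M) guarantees competitive ratio sqrt(M/m).
m, M = 1.0, 100.0
classic_threshold = math.sqrt(m * M)  # = 10.0

def augmented_threshold(pred, lam, m, M):
    """Illustrative learning-augmented rule (an assumption, not the
    paper's): blend a prediction `pred` of the maximum price with the
    worst-case threshold via a trust parameter lam in [0, 1]."""
    return lam * pred + (1 - lam) * math.sqrt(m * M)

prices = [4.0, 12.0, 7.0, 30.0, 2.0]
print(one_max_search(prices, classic_threshold))  # accepts 12.0
```

With full trust (lam = 1) the rule is maximally consistent with the prediction; with lam = 0 it falls back to the robust worst-case threshold — the consistency–robustness trade-off the paper studies.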

@article{benomar2025_2502.05720,
  title={Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search},
  author={Ziyad Benomar and Lorenzo Croissant and Vianney Perchet and Spyros Angelopoulos},
  journal={arXiv preprint arXiv:2502.05720},
  year={2025}
}