ResearchTrend.AI



Adam with Bandit Sampling for Deep Learning

24 October 2020
Rui Liu
Tianyi Wu
Barzan Mozafari
Abstract

Adam is a widely used optimization method for training deep learning models. It computes individual adaptive learning rates for different parameters. In this paper, we propose a generalization of Adam, called Adambs, that allows us to also adapt to different training examples based on their importance in the model's convergence. To achieve this, we maintain a distribution over all examples, selecting a mini-batch in each iteration by sampling according to this distribution, which we update using a multi-armed bandit algorithm. This ensures that examples that are more beneficial to the model training are sampled with higher probabilities. We theoretically show that Adambs improves the convergence rate of Adam: $O(\sqrt{\log n / T})$ instead of $O(\sqrt{n / T})$ in some cases. Experiments on various models and datasets demonstrate Adambs's fast convergence in practice.
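The abstract describes the core loop: keep a distribution over training examples, sample each mini-batch from it, run an Adam step, and update the distribution with a multi-armed bandit rule so that higher-gradient examples are drawn more often. The sketch below is a minimal illustration of that idea on a toy least-squares problem, not the paper's actual Adambs algorithm: the EXP3-style weight update, the per-example reward (importance-weighted gradient magnitude), and all hyperparameter names here are assumptions chosen for clarity.

```python
import numpy as np

def adam_with_bandit_sampling(X, y, steps=500, batch=8, lr=0.1,
                              beta1=0.9, beta2=0.999, eps=1e-8, eta=0.05):
    """Toy sketch: Adam on least squares with an EXP3-style example
    distribution (a hypothetical stand-in for the paper's Adambs)."""
    n, d = X.shape
    w = np.zeros(d)
    m = np.zeros(d)                   # Adam first-moment estimate
    v = np.zeros(d)                   # Adam second-moment estimate
    log_weights = np.zeros(n)         # bandit log-weights over examples
    rng = np.random.default_rng(0)
    for t in range(1, steps + 1):
        # Sampling distribution over all n examples (softmax of weights)
        p = np.exp(log_weights - log_weights.max())
        p /= p.sum()
        idx = rng.choice(n, size=batch, p=p)       # draw a mini-batch
        residual = X[idx] @ w - y[idx]
        grad = X[idx].T @ residual / batch         # stochastic gradient
        # Standard Adam update with bias correction
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
        # Bandit feedback (assumed form): reward each sampled example by
        # its gradient magnitude, importance-weighted by 1 / (n * p_i)
        per_ex_gnorm = np.abs(residual) * np.linalg.norm(X[idx], axis=1)
        log_weights[idx] += eta * per_ex_gnorm / (p[idx] * n)
    return w

# Usage: recover weights of a noiseless linear model
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = adam_with_bandit_sampling(X, y)
final_loss = float(np.mean((X @ w - y) ** 2))
```

The importance weight `1 / (n * p_i)` keeps the reward estimate unbiased despite non-uniform sampling, which is the standard device in EXP3-style bandit algorithms.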
