ResearchTrend.AI


Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games

13 June 2020
A. Suggala
Praneeth Netrapalli
arXiv: 2006.07541 (abs / PDF / HTML)
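As background for the paper's topic, here is a minimal sketch of the classic Follow-the-Perturbed-Leader (FTPL) rule over a finite set of experts: at each round, play the expert whose cumulative loss, offset by a fresh random perturbation, is smallest. The function name and the exponential perturbation with scale `eta` are illustrative assumptions; this is not the paper's optimistic or parallel variant.

```python
import numpy as np

def ftpl_play(loss_matrix, eta=1.0, rng=None):
    """Sketch of Follow-the-Perturbed-Leader over d experts.

    loss_matrix: (T, d) array; row t holds each expert's loss at round t.
    At round t, play argmin over experts of (cumulative loss so far
    minus a fresh exponential perturbation of scale eta).
    Returns the list of expert indices played.
    """
    rng = np.random.default_rng(rng)
    T, d = loss_matrix.shape
    cum = np.zeros(d)          # cumulative losses seen before round t
    picks = []
    for t in range(T):
        noise = rng.exponential(scale=eta, size=d)
        picks.append(int(np.argmin(cum - noise)))
        cum += loss_matrix[t]  # reveal round-t losses after playing
    return picks
```

The perturbation randomizes the leader choice just enough to avoid the instability of plain Follow-the-Leader; after many rounds a clearly best expert dominates the noise and is played almost always.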

Papers citing "Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games"

8 citing papers shown.

1. Revisiting Follow-the-Perturbed-Leader with Unbounded Perturbations in Bandit Problems
   Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh (26 Aug 2025)

2. Hybrid Real- and Complex-valued Neural Network Architecture
   Alex Young, L. V. Fiorio, Bo Yang, B. Karanov, Wim J. van Houtum, Ronald M. Aarts (04 Apr 2025)

3. Efficient Learning in Polyhedral Games via Best Response Oracles
   Darshan Chakrabarti, Gabriele Farina, Christian Kroer (06 Dec 2023)

4. An Improved Relaxation for Oracle-Efficient Adversarial Contextual Bandits
   Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Suho Shin, Max Springer (29 Oct 2023)

5. Towards Optimal Randomized Strategies in Adversarial Example Game [AAML]
   Jiahao Xie, Chao Zhang, Weijie Liu, Wensong Bai, Hui Qian (29 Jun 2023)

6. Optimistic No-regret Algorithms for Discrete Caching
   N. Mhaisen, Abhishek Sinha, G. Paschos, Georgios Iosifidis (15 Aug 2022)

7. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data
   Xijin Chen, K. M. Lee, S. Villar, D. Robertson (08 May 2022)

8. Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time [AI4TS]
   Anshul Nasery, Soumyadeep Thakur, Vihari Piratla, A. De, Sunita Sarawagi (15 Aug 2021)