Oracle-Efficient Online Learning for Beyond Worst-Case Adversaries

In this paper, we study oracle-efficient algorithms for beyond worst-case analysis of online learning. We focus on two settings. First, the smoothed analysis setting of [RST11, HRS22], where the adversary is constrained to generating samples from distributions whose density is upper bounded by $1/\sigma$ times the uniform density. Second, the setting of $K$-hint transductive learning, where the learner is given access to $K$ hints per time step that are guaranteed to include the true instance. We give the first known oracle-efficient algorithms for both settings that depend only on the pseudo (or VC) dimension $d$ of the class and on the parameters $\sigma$ and $K$ that capture the power of the adversary. In particular, we achieve oracle-efficient regret bounds that scale as $\sqrt{T}$, with polynomial dependence on $d$ and on $1/\sigma$ (respectively $K$), both for learning real-valued functions and for learning binary-valued functions. For the smoothed analysis setting, our results give the first oracle-efficient algorithm for online learning with smoothed adversaries [HRS22]. This contrasts with the computational separation between online learning with worst-case adversaries and offline learning established by [HK16]. Our algorithms also achieve improved bounds for the worst-case setting with small domains. In particular, we give an oracle-efficient algorithm whose regret bound refines the earlier bound of [DS16].
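For reference, here is a minimal LaTeX sketch of the two adversary models described above, using standard notation from the smoothed-analysis literature; the symbols $\mathcal{X}$, $\mu$, $\mathcal{D}_t$, $H_t$, $\mathcal{F}$, and $\ell$ are illustrative assumptions and may differ from the paper's exact notation.

% A distribution D_t over the instance space X is sigma-smooth (with respect to
% a base measure mu, e.g. the uniform measure) if its density never exceeds
% 1/sigma times that of mu:
\[
  \frac{\mathrm{d}\mathcal{D}_t}{\mathrm{d}\mu}(x) \;\le\; \frac{1}{\sigma}
  \qquad \text{for all } x \in \mathcal{X}.
\]

% In K-hint transductive learning, before predicting at round t the learner
% receives a hint set H_t with |H_t| = K and the guarantee that x_t \in H_t.

% In both models the learner selects f_t (possibly at random) and seeks to
% minimize regret against the best fixed function in the class F under a loss ell:
\[
  \mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \ell\bigl(f_t(x_t), y_t\bigr)
  \;-\; \min_{f \in \mathcal{F}} \sum_{t=1}^{T} \ell\bigl(f(x_t), y_t\bigr).
\]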