
Oracle-Efficient Online Learning for Beyond Worst-Case Adversaries

Abstract

In this paper, we study oracle-efficient algorithms for beyond worst-case analysis of online learning. We focus on two settings. First, the smoothed analysis setting of [RST11, HRS22], where an adversary is constrained to generating samples from distributions whose density is upper bounded by $1/\sigma$ times the uniform density. Second, the setting of $K$-hint transductive learning, where the learner is given access to $K$ hints per time step that are guaranteed to include the true instance. We give the first known oracle-efficient algorithms for both settings that depend only on the pseudo (or VC) dimension of the class and the parameters $\sigma$ and $K$ that capture the power of the adversary. In particular, we achieve oracle-efficient regret bounds of $\widetilde{O}(\sqrt{Td\sigma^{-1}})$ and $\widetilde{O}(\sqrt{TdK})$ for learning real-valued functions and $O(\sqrt{Td\sigma^{-\frac{1}{2}}})$ for learning binary-valued functions. For the smoothed analysis setting, our results give the first oracle-efficient algorithm for online learning with smoothed adversaries [HRS22]. This contrasts with the computational separation between online learning with worst-case adversaries and offline learning established by [HK16]. Our algorithms also achieve improved bounds for the worst-case setting with small domains. In particular, we give an oracle-efficient algorithm with regret $O(\sqrt{T(d|\mathcal{X}|)^{1/2}})$, which is a refinement of the earlier $O(\sqrt{T|\mathcal{X}|})$ bound by [DS16].
