
Delaytron: Efficient Learning of Multiclass Classifiers with Delayed Bandit Feedbacks

Abstract

In this paper, we present an online algorithm called {\it Delaytron} for learning multiclass classifiers using delayed bandit feedback. The sequence of feedback delays $\{d_t\}_{t=1}^T$ is unknown to the algorithm. At the $t$-th round, the algorithm observes an example $\mathbf{x}_t$, predicts a label $\tilde{y}_t$, and receives the bandit feedback $\mathbb{I}[\tilde{y}_t=y_t]$ only $d_t$ rounds later. When $t+d_t>T$, we consider the feedback for the $t$-th round to be missing. We show that the proposed algorithm achieves a regret of $\mathcal{O}\left(\sqrt{\frac{2K}{\gamma}\left[\frac{T}{2}+\left(2+\frac{L^2}{R^2\Vert \mathbf{W}\Vert_F^2}\right)\sum_{t=1}^T d_t\right]}\right)$ when the loss for each missing sample is upper bounded by $L$. When the loss for missing samples is not upper bounded, the regret achieved by Delaytron is $\mathcal{O}\left(\sqrt{\frac{2K}{\gamma}\left[\frac{T}{2}+2\sum_{t=1}^T d_t+\vert\mathcal{M}\vert T\right]}\right)$, where $\mathcal{M}$ is the set of missing samples over the $T$ rounds. These bounds are achieved with a constant step size, which requires knowledge of $T$ and $\sum_{t=1}^T d_t$. For the case when $T$ and $\sum_{t=1}^T d_t$ are unknown, we use a doubling trick for online learning and propose Adaptive Delaytron. We show that Adaptive Delaytron achieves a regret bound of $\mathcal{O}\left(\sqrt{T+\sum_{t=1}^T d_t}\right)$. We show the effectiveness of our approach by experimenting on various datasets and comparing with state-of-the-art approaches.
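The interaction protocol described above can be made concrete with a short sketch. The snippet below is only an illustration of the delayed bandit feedback setting, not the Delaytron update rule from the paper: the environment, delay sequence, linear predictor, step size, and the perceptron-style placeholder update are all assumptions introduced for illustration.

```python
import numpy as np

# Sketch of online multiclass learning with delayed bandit feedback.
# Assumed setup for illustration only; the actual Delaytron update is in the paper.
rng = np.random.default_rng(0)
T, K, d = 1000, 5, 20            # rounds, number of classes, feature dimension
W = np.zeros((K, d))             # learner's weight matrix
eta = 0.1                        # constant step size (illustrative value)

X = rng.standard_normal((T, d))
true_W = rng.standard_normal((K, d))
y = np.argmax(X @ true_W.T, axis=1)      # hidden true labels y_t
delays = rng.integers(0, 50, size=T)     # delay sequence {d_t}, unknown to the learner

pending = {}                             # round s -> (x_s, predicted label), awaiting feedback
for t in range(T):
    x_t = X[t]
    y_hat = int(np.argmax(W @ x_t))      # predict a label for round t
    pending[t] = (x_t, y_hat)

    # Feedback for round s arrives only at round s + d_s; feedback with
    # s + d_s > T never arrives, so that sample is "missing".
    arrived = [s for s in pending if s + delays[s] == t]
    for s in arrived:
        x_s, y_hat_s = pending.pop(s)
        correct = (y_hat_s == y[s])      # bandit feedback I[y~_s = y_s]
        if not correct:
            # Placeholder update penalizing the mispredicted class;
            # NOT the Delaytron update from the paper.
            W[y_hat_s] -= eta * x_s
```

Note that at the end of the horizon, any round still in `pending` corresponds to the missing set $\mathcal{M}$ appearing in the second regret bound above.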
