
Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback

Qiwei Di
Jiafan He
Quanquan Gu
Abstract

Learning from human feedback plays an important role in aligning generative models, such as large language models (LLMs). However, the effectiveness of this approach can be undermined by adversaries, who may intentionally provide misleading preferences to steer the output in an undesirable or harmful direction. To tackle this challenge, we study a specific model within this problem domain: contextual dueling bandits with adversarial feedback, where the true preference label can be flipped by an adversary. We propose an algorithm, robust contextual dueling bandits (RCDB), which is based on uncertainty-weighted maximum likelihood estimation. Our algorithm achieves an $\tilde{O}(d\sqrt{T}/\kappa + dC/\kappa)$ regret bound, where $T$ is the number of rounds, $d$ is the dimension of the context, $\kappa$ is the lower bound of the derivative of the link function, and $0 \le C \le T$ is the total number of adversarially flipped labels. We also prove a lower bound showing that our regret bound is nearly optimal, both with ($C>0$) and without ($C=0$) adversarial feedback. Our work is the first to achieve nearly minimax optimal regret for dueling bandits in the presence of adversarial preference feedback. Additionally, for the sigmoid link function, we develop a novel algorithm that incorporates the effect of local derivatives into the maximum likelihood estimation (MLE) analysis through a refined method for estimating the link function's derivative. This method eliminates the $\kappa$ dependence in the leading term with respect to $T$ and reduces the exponential dependence on the parameter radius $B$ to a polynomial dependence.
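The abstract does not spell out the estimator, so the following is a minimal sketch of what uncertainty-weighted MLE for a linear contextual dueling bandit could look like. It assumes a linear utility model with a sigmoid link and per-sample weights of the form $\min\{1, \alpha/\|z\|_{\Sigma^{-1}}\}$; the function names, the specific weighting rule, and the toy simulation are illustrative assumptions, not the exact RCDB procedure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weighted_mle(Z, y, w, lam=0.01, n_steps=2000, lr=0.5):
    """Regularized, uncertainty-weighted MLE for a linear utility model with sigmoid link.

    Z : (n, d) feature differences phi(x, a) - phi(x, b) of the dueling pairs
    y : (n,) binary preference labels (possibly adversarially flipped)
    w : (n,) per-sample weights in (0, 1]
    """
    n, d = Z.shape
    theta = np.zeros(d)
    for _ in range(n_steps):
        p = sigmoid(Z @ theta)
        # Gradient of the averaged weighted negative log-likelihood plus L2 regularization.
        grad = Z.T @ (w * (p - y)) / n + lam * theta
        theta -= lr * grad
    return theta

def uncertainty_weight(z, Sigma_inv, alpha=1.0):
    """Weight min{1, alpha / ||z||_{Sigma^{-1}}}: downweight samples whose features
    are uncertain under the current (weighted) design, capping the influence any
    single corrupted label can have on the estimate."""
    bonus = np.sqrt(z @ Sigma_inv @ z)
    return min(1.0, alpha / max(bonus, 1e-12))

# Toy simulation: T rounds, the first C preference labels are flipped by an adversary.
# In the actual bandit, z_t comes from the algorithm's arm selection; here it is random.
rng = np.random.default_rng(0)
d, T, C, lam = 5, 2000, 100, 1.0
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

Sigma_inv = np.eye(d) / lam
Z_hist, y_hist, w_hist = [], [], []
for t in range(T):
    z = rng.normal(size=d)
    w = uncertainty_weight(z, Sigma_inv, alpha=1.0)
    y = float(rng.random() < sigmoid(z @ theta_star))
    if t < C:
        y = 1.0 - y                      # adversarially flipped label
    Z_hist.append(z); y_hist.append(y); w_hist.append(w)
    # Sherman-Morrison update of (lam*I + sum_s w_s z_s z_s^T)^{-1}.
    Sz = Sigma_inv @ z
    Sigma_inv -= w * np.outer(Sz, Sz) / (1.0 + w * z @ Sz)

theta_hat = weighted_mle(np.array(Z_hist), np.array(y_hist), np.array(w_hist))
print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```

Intuitively, capping each sample's weighted uncertainty bounds how far a single flipped label can move the estimator, which is consistent with the additive $dC/\kappa$ corruption term in the stated regret bound.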

@article{di2025_2404.10776,
  title={Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback},
  author={Qiwei Di and Jiafan He and Quanquan Gu},
  journal={arXiv preprint arXiv:2404.10776},
  year={2025}
}