
arXiv:2006.00701
Locally Differentially Private (Contextual) Bandits Learning

1 June 2020
Kai Zheng
Tianle Cai
Weiran Huang
Zhenguo Li
Liwei Wang
Abstract

We study locally differentially private (LDP) bandits learning in this paper. First, we propose simple black-box reduction frameworks that can solve a large family of context-free bandits learning problems with an LDP guarantee. Based on these frameworks, we improve the previous best results for private bandits learning with one-point feedback, such as private Bandits Convex Optimization (BCO), and obtain the first result for BCO with multi-point feedback under LDP. The LDP guarantee and black-box nature make our frameworks more attractive in real applications than previous, specifically designed, and relatively weaker differentially private (DP) context-free bandits algorithms. Further, we extend our $(\varepsilon, \delta)$-LDP algorithm to Generalized Linear Bandits, which enjoys a sub-linear regret $\tilde{O}(T^{3/4}/\varepsilon)$ and is conjectured to be nearly optimal. Note that given the existing $\Omega(T)$ lower bound for DP contextual linear bandits (Shariff & Sheffet, 2018), our result shows a fundamental difference between LDP and DP contextual bandits learning.
