Robust Nonparametric Regression under Poisoning Attack

26 May 2023
Puning Zhao
Z. Wan
Abstract

This paper studies robust nonparametric regression, in which an adversarial attacker can modify the values of up to q samples from a training dataset of size N. Our initial solution is an M-estimator based on Huber loss minimization. Compared with simple kernel regression, i.e. the Nadaraya-Watson estimator, this method can significantly weaken the impact of malicious samples on the regression performance. We provide the convergence rate as well as the corresponding minimax lower bound. The result shows that, with proper bandwidth selection, the ℓ∞ error is minimax optimal. The ℓ2 error is optimal for relatively small q, but suboptimal for larger q. The reason is that this estimator is vulnerable when many attacked samples concentrate in a small region. To address this issue, we propose a correction method that projects the initial estimate onto the space of Lipschitz functions. The final estimate is nearly minimax optimal for arbitrary q, up to a ln N factor.
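For intuition, the sketch below (Python, illustrative only) contrasts plain Nadaraya-Watson averaging with a local Huber-loss M-estimator of the kind the abstract describes. The box kernel, the Huber parameter delta, the IRLS solver, and the toy data are assumptions made for this example, not the authors' construction, and the Lipschitz-projection correction step is not shown.

import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """Plain kernel regression (Nadaraya-Watson) with a box kernel.
    It averages y-values directly, so one corrupted sample can shift
    the local estimate by an arbitrary amount."""
    in_window = np.abs(x_train - x_query) <= h
    if not np.any(in_window):
        return 0.0
    return float(np.mean(y_train[in_window]))

def huber_kernel_m_estimator(x_train, y_train, x_query, h, delta=1.0, n_iter=50):
    """Local M-estimate: minimize the sum of Huber losses huber_delta(y_i - theta)
    over the window, solved by iteratively reweighted least squares (IRLS).
    The Huber psi function is bounded by delta, so each poisoned sample
    has bounded influence on the estimate."""
    in_window = np.abs(x_train - x_query) <= h
    y_local = y_train[in_window]
    if y_local.size == 0:
        return 0.0
    theta = float(np.median(y_local))   # robust starting point
    for _ in range(n_iter):
        r = y_local - theta
        # IRLS weights psi(r)/r: 1 inside [-delta, delta], delta/|r| outside
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))
        theta = float(np.sum(w * y_local) / np.sum(w))
    return theta

# Toy usage: a smooth target with a cluster of adversarially modified labels.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, size=500))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(500)
y[:10] += 50.0                                       # 10 poisoned samples near x = 0
print(nadaraya_watson(x, y, 0.02, h=0.05))           # dragged far toward the +50 outliers
print(huber_kernel_m_estimator(x, y, 0.02, h=0.05))  # shift bounded, roughly delta * 10 / (clean count)

Because the Huber psi function is bounded by delta, q poisoned labels in a window of n samples can move the local estimate by at most on the order of delta·q/(n−q), whereas the Nadaraya-Watson average can be pulled arbitrarily far by even a single modified sample.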
