The Teaching Dimension of Kernel Perceptron

27 October 2020
Akash Kumar, Hanqi Zhang, Adish Singla, Yuxin Chen
arXiv: 2010.14043
Abstract

Algorithmic machine teaching has been studied under the linear setting, where exact teaching is possible. However, little is known about teaching nonlinear learners. Here, we establish the sample complexity of teaching, a.k.a. the teaching dimension, of kernelized perceptrons for different families of feature maps. As a warm-up, we show that the teaching complexity is $\Theta(d)$ for the exact teaching of linear perceptrons in $\mathbb{R}^d$, and $\Theta(d^k)$ for kernel perceptrons with a polynomial kernel of order $k$. Furthermore, under certain smoothness assumptions on the data distribution, we establish a rigorous bound on the complexity of approximately teaching a Gaussian kernel perceptron. We provide numerical examples of the optimal (approximate) teaching sets under several canonical settings for linear, polynomial, and Gaussian kernel perceptrons.

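The learner in this setting is the standard dual-form kernel perceptron, which increments a per-example coefficient on each mistake. Below is a minimal sketch of that learner; the helper names (`poly_kernel`, `kernel_perceptron`) and the two-point teaching set for a hyperplane in $\mathbb{R}^2$ are illustrative assumptions, not the paper's optimal teaching construction.

```python
import numpy as np

def poly_kernel(x, z, k=2):
    # Homogeneous polynomial kernel of order k: (x . z)^k.
    return (x @ z) ** k

def kernel_perceptron(X, y, kernel, epochs=100):
    # Dual-form perceptron: on a mistake at example i, increment alpha[i].
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            # Kernelized score of example i; a zero score counts as a mistake.
            score = sum(alpha[j] * y[j] * kernel(X[j], X[i]) for j in range(n))
            if y[i] * score <= 0:
                alpha[i] += 1.0
                mistakes += 1
        if mistakes == 0:  # the teaching set is classified correctly
            break
    return alpha

# Illustrative teaching set for a linear perceptron in R^2 (order-1 kernel):
# two points placed symmetrically about the target hyperplane x_1 = 0.
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1, -1])
alpha = kernel_perceptron(X, y, lambda a, b: poly_kernel(a, b, k=1))

# Recover the primal weight vector w = sum_i alpha_i * y_i * x_i.
w = (alpha * y) @ X
print(w)  # points along e_1, the normal of the target hyperplane
```

With the order-1 (linear) kernel, the two symmetric points suffice to drive the learner to a weight vector along the target direction, a toy illustration of why a teaching set whose size scales with $d$ can pin down a linear perceptron in $\mathbb{R}^d$.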