On Rate-Optimal Partitioning Classification from Observable and from Privatised Data

22 December 2023
Balázs Csanád Csáji
László Györfi
Ambrus Tamás
Harro Walk
arXiv: 2312.14889
Abstract

In this paper we revisit the classical method of partitioning classification and study its convergence rate under relaxed conditions, both for observable (non-privatised) and for privatised data. Let the feature vector $X$ take values in $\mathbb{R}^d$ and denote its label by $Y$. Previous results on the partitioning classifier worked with the strong density assumption, which is restrictive, as we demonstrate through simple examples. We assume that the distribution of $X$ is a mixture of an absolutely continuous and a discrete distribution, such that the absolutely continuous component is concentrated on a $d_a$-dimensional subspace. Here, we study the problem under much milder assumptions: in addition to the standard Lipschitz and margin conditions, a novel characteristic of the absolutely continuous component is introduced, by which the exact convergence rate of the classification error probability is calculated, both for the binary and for the multi-label cases. Interestingly, this rate of convergence depends only on the intrinsic dimension $d_a$. The privacy constraints mean that the data $(X_1, Y_1), \dots, (X_n, Y_n)$ cannot be directly observed, and the classifiers are functions of the randomised outcome of a suitable local differential privacy mechanism. The statistician is free to choose the form of this privacy mechanism, and here we add Laplace-distributed noise to the discontinuations of all possible locations of the feature vector $X_i$ and to its label $Y_i$. Again, tight upper bounds on the rate of convergence of the classification error probability are derived, without the strong density assumption, such that this rate depends on $2d_a$.
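
To make the classification rule concrete, here is a minimal illustrative sketch (not the authors' implementation) of a partitioning classifier on observable data: $\mathbb{R}^d$ is split into cubes of side length $h$, and a query point is labelled by a majority vote over the training labels falling in its cube. The class name, the toy data, and the fixed choice $h = 0.2$ are assumptions for illustration; how $h$ should scale with the sample size under the Lipschitz and margin conditions is exactly what the paper's rate analysis addresses.

```python
import numpy as np
from collections import defaultdict

class PartitioningClassifier:
    """Majority-vote classifier over a cubic partition of R^d with side length h.

    Illustrative sketch only: the paper analyses the convergence rate of such
    rules; the choice of h as a function of the sample size is not shown here.
    """

    def __init__(self, h):
        self.h = h                                   # side length of the cubes
        self.cell_votes = defaultdict(lambda: defaultdict(int))

    def _cell(self, x):
        # Index of the cube containing x (componentwise floor of x / h).
        return tuple(np.floor(np.asarray(x) / self.h).astype(int))

    def fit(self, X, Y):
        # Count the labels observed in each occupied cell.
        for x, y in zip(X, Y):
            self.cell_votes[self._cell(x)][y] += 1
        return self

    def predict_one(self, x):
        votes = self.cell_votes.get(self._cell(x))
        if not votes:                                # empty cell: arbitrary fallback label
            return 0
        return max(votes, key=votes.get)             # majority vote in the cell

    def predict(self, X):
        return np.array([self.predict_one(x) for x in X])


# Toy usage on synthetic 2-dimensional data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))
Y = (X[:, 0] + X[:, 1] > 0).astype(int)              # linearly separable toy labels
clf = PartitioningClassifier(h=0.2).fit(X, Y)
print("training error:", np.mean(clf.predict(X) != Y))
```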
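
The local differential privacy step can be illustrated with a similar sketch. Below, each record is released only through Laplace-noised one-hot indicators of the partition cells and of the labels, and the statistician aggregates these randomised outcomes into per-cell majority votes without ever observing $(X_i, Y_i)$. The function names, the finite enumeration of cells and labels, and the noise scale $4/\varepsilon$ (a simple calibration from the joint $L_1$ sensitivity of the two indicator vectors) are illustrative assumptions, not the paper's exact mechanism or calibration.

```python
import numpy as np

def cell_index(x, h):
    # Cube of side length h containing x (componentwise floor of x / h).
    return tuple(np.floor(np.asarray(x) / h).astype(int))

def privatise_record(x, y, cells, labels, h, eps, rng):
    """One data holder's release: Laplace-noised indicators of every cell and label."""
    cell_ind = np.array([1.0 if c == cell_index(x, h) else 0.0 for c in cells])
    label_ind = np.array([1.0 if lab == y else 0.0 for lab in labels])
    scale = 4.0 / eps   # joint L1 sensitivity of the two one-hot vectors is at most 4
    return (cell_ind + rng.laplace(0.0, scale, size=cell_ind.shape),
            label_ind + rng.laplace(0.0, scale, size=label_ind.shape))

def majority_votes_from_privatised(noisy_cells, noisy_labels):
    """Per-cell majority vote computed from the randomised outcomes only.

    The Laplace noise is independent and zero-mean, so the product of the two
    noisy indicator vectors is an unbiased estimate of the event "record i lies
    in cell j and has label k"; summing over records estimates the label counts
    in each cell without access to the raw data.
    """
    counts = noisy_cells.T @ noisy_labels            # shape: (num_cells, num_labels)
    return counts.argmax(axis=1)                     # estimated majority label per cell
```

A query point is then classified by the estimated majority label of its cell; the paper's results quantify how fast the error probability of such privatised rules can converge, with the rate governed by $2d_a$.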
