Monotone Classification with Relative Approximations

12 June 2025
Yufei Tao
arXiv (abs) · PDF · HTML
Main: 38 pages · Bibliography: 3 pages · 6 figures · 2 tables
Abstract

In monotone classification, the input is a multi-set $P$ of points in $\mathbb{R}^d$, each associated with a hidden label from $\{-1, 1\}$. The goal is to identify a monotone function $h$, which acts as a classifier, mapping from $\mathbb{R}^d$ to $\{-1, 1\}$ with a small *error*, measured as the number of points $p \in P$ whose labels differ from the function values $h(p)$. The cost of an algorithm is defined as the number of points having their labels revealed. This article presents the first study on the lowest cost required to find a monotone classifier whose error is at most $(1 + \epsilon) \cdot k^*$, where $\epsilon \ge 0$ and $k^*$ is the minimum error achieved by an optimal monotone classifier; in other words, the error is allowed to exceed the optimal by at most a relative factor. Nearly matching upper and lower bounds are presented for the full range of $\epsilon$. All previous work on the problem could only achieve an error higher than the optimal by an absolute amount.
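To make the error measure concrete, here is a minimal sketch (not from the paper; the point set, the threshold classifier, and all function names are illustrative) that counts mismatches between a monotone classifier and the hidden labels of a 2-D point set:

```python
# Illustrative sketch: the "error" of a classifier h on a labeled
# multi-set P is the number of points p whose hidden label differs
# from h(p). A threshold rule under coordinate-wise dominance is one
# simple example of a monotone classifier.

def dominates(p, q):
    """Coordinate-wise partial order on R^d: p >= q in every dimension."""
    return all(pi >= qi for pi, qi in zip(p, q))

def monotone_threshold(t):
    """A monotone classifier: label +1 iff the point dominates threshold t."""
    return lambda p: 1 if dominates(p, t) else -1

def error(h, labeled_points):
    """Number of points whose hidden label disagrees with h."""
    return sum(1 for p, label in labeled_points if h(p) != label)

# A tiny 2-D example with hidden labels.
P = [((0, 0), -1), ((1, 2), 1), ((2, 1), 1), ((3, 3), 1)]
h = monotone_threshold((2, 1))
print(error(h, P))  # the point (1, 2) is misclassified, so this prints 1
```

In the paper's setting the labels are hidden and each evaluation of a label costs one query; the quantity of interest is how few labels must be revealed before a classifier with error at most $(1 + \epsilon) \cdot k^*$ can be guaranteed.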

@article{tao2025_2506.10775,
  title={Monotone Classification with Relative Approximations},
  author={Yufei Tao},
  journal={arXiv preprint arXiv:2506.10775},
  year={2025}
}