
arXiv:2010.12087 (v3, latest)

Recovery of sparse linear classifiers from mixture of responses

22 October 2020
V. Gandikota
A. Mazumdar
S. Pal
Abstract

In the problem of learning a mixture of linear classifiers, the aim is to learn a collection of hyperplanes from a sequence of binary responses. Each response is a result of querying with a vector and indicates the side of a randomly chosen hyperplane from the collection the query vector belongs to. This model provides a rich representation of heterogeneous data with categorical labels and has only been studied in some special settings. We look at a hitherto unstudied problem of query complexity upper bound of recovering all the hyperplanes, especially for the case when the hyperplanes are sparse. This setting is a natural generalization of the extreme quantization problem known as 1-bit compressed sensing. Suppose we have a set of ℓ unknown k-sparse vectors. We can query the set with another vector a, to obtain the sign of the inner product of a and a randomly chosen vector from the ℓ-set. How many queries are sufficient to identify all the ℓ unknown vectors? This question is significantly more challenging than both the basic 1-bit compressed sensing problem (i.e., the ℓ = 1 case) and the analogous regression problem (where the value instead of the sign is provided). We provide rigorous query complexity results (with efficient algorithms) for this problem.
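The query model from the abstract can be made concrete with a small simulation. The sketch below is purely illustrative and not the paper's algorithm: it sets up ℓ hidden k-sparse vectors and an oracle that, for a query vector a, returns the sign of the inner product of a with one of the hidden vectors chosen uniformly at random. All names (`make_sparse_vectors`, `query`) and the Gaussian choice of entries are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sparse_vectors(ell, dim, k):
    """Draw ell hidden vectors of length dim, each with exactly k nonzero
    entries (supports and values are arbitrary modeling choices here)."""
    vectors = np.zeros((ell, dim))
    for i in range(ell):
        support = rng.choice(dim, size=k, replace=False)
        vectors[i, support] = rng.standard_normal(k)
    return vectors

def query(a, vectors):
    """One oracle call: sign of <a, beta_j> for a uniformly random index j.
    The learner sees only this single bit, not which vector was chosen."""
    j = rng.integers(len(vectors))
    return int(np.sign(vectors[j] @ a))

# ell = 2 hidden 3-sparse vectors in dimension 10; make a few queries.
betas = make_sparse_vectors(ell=2, dim=10, k=3)
responses = [query(rng.standard_normal(10), betas) for _ in range(5)]
print(responses)
```

Note that the learner never observes which of the ℓ vectors answered a given query; recovering all of them from such unlabeled sign responses is exactly what makes the problem harder than 1-bit compressed sensing (the ℓ = 1 case) or mixed linear regression, where real-valued responses are available.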
