One-Bit Quantization and Sparsification for Multiclass Linear Classification via Regularized Regression

16 February 2024
Reza Ghane, D. Akhtiamov, Babak Hassibi
arXiv:2402.10474
Abstract

We study the use of linear regression for multiclass classification in the over-parametrized regime where some of the training data is mislabeled. In such scenarios it is necessary to add an explicit regularization term, $\lambda f(w)$, for some convex function $f(\cdot)$, to avoid overfitting the mislabeled data. In our analysis, we assume that the data is sampled from a Gaussian Mixture Model with equal class sizes, and that a proportion $c$ of the training labels is corrupted for each class. Under these assumptions, we prove that the best classification performance is achieved when $f(\cdot) = \|\cdot\|_2^2$ and $\lambda \to \infty$. We then proceed to analyze the classification errors for $f(\cdot) = \|\cdot\|_1$ and $f(\cdot) = \|\cdot\|_\infty$ in the large-$\lambda$ regime and notice that it is often possible to find sparse and one-bit solutions, respectively, that perform almost as well as the one corresponding to $f(\cdot) = \|\cdot\|_2^2$.
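As a concrete illustration of the setting the abstract describes, the following Python sketch fits a one-hot regression classifier by solving min_W ||XW - Y||_F^2 + lambda*f(W) for the three regularizers studied in the paper. This is not the authors' code: the data-generation parameters (n, d, k, c, the class-mean scaling), the value of lambda, the use of cvxpy, and the choice to measure training error against the clean labels are all illustrative assumptions.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, k, c, lam = 200, 400, 3, 0.1, 1e3   # over-parametrized regime: d > n

# Gaussian mixture with k equal-size classes; a fraction c of labels flipped.
means = 3.0 * rng.normal(size=(k, d)) / np.sqrt(d)  # moderate class separation
y = rng.integers(0, k, size=n)
X = means[y] + rng.normal(size=(n, d))
flip = rng.random(n) < c
y_noisy = np.where(flip, rng.integers(0, k, size=n), y)
Y = np.eye(k)[y_noisy]                    # one-hot regression targets

def fit(reg):
    """Solve min_W ||X W - Y||_F^2 + lam * reg(W)."""
    W = cp.Variable((d, k))
    cp.Problem(cp.Minimize(cp.sum_squares(X @ W - Y) + lam * reg(W))).solve()
    return W.value

regs = {"l2^2":  lambda W: cp.sum_squares(W),        # squared l2 over all entries
        "l1":    lambda W: cp.sum(cp.abs(W)),        # entrywise l1 norm
        "l_inf": lambda W: cp.max(cp.abs(W))}        # entrywise l-infinity norm
for name, reg in regs.items():
    W = fit(reg)
    err = np.mean(np.argmax(X @ W, axis=1) != y)     # error on the clean labels
    print(f"f = {name}: training error vs. clean labels = {err:.3f}")

In line with the abstract's claim, inspecting the fitted W is instructive: the l1 solution tends to have many entries near zero (sparse), while the l-infinity solution tends to have entries clustered at plus or minus its maximum magnitude, i.e. one-bit up to a common scale.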
