Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning

15 October 2024
Jingyang Li
Jiachun Pan
Vincent Y. F. Tan
Kim-Chuan Toh
Pan Zhou
Abstract

Semi-supervised learning (SSL), exemplified by FixMatch (Sohn et al., 2020), has shown significant generalization advantages over supervised learning (SL), particularly in the context of deep neural networks (DNNs). However, it is still unclear, from a theoretical standpoint, why FixMatch-like SSL algorithms generalize better than SL on DNNs. In this work, we present the first theoretical justification for the enhanced test accuracy observed in FixMatch-like SSL applied to DNNs by taking convolutional neural networks (CNNs) on classification tasks as an example. Our theoretical analysis reveals that the semantic feature learning processes in FixMatch and SL are rather different. In particular, FixMatch learns all the discriminative features of each semantic class, while SL only randomly captures a subset of features due to the well-known lottery ticket hypothesis. Furthermore, we show that our analysis framework can be applied to other FixMatch-like SSL methods, e.g., FlexMatch, FreeMatch, Dash, and SoftMatch. Inspired by our theoretical analysis, we develop an improved variant of FixMatch, termed Semantic-Aware FixMatch (SA-FixMatch). Experimental results corroborate our theoretical findings and the enhanced generalization capability of SA-FixMatch.
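For context on the algorithm the abstract analyzes, here is a minimal NumPy sketch of the FixMatch unlabeled-data loss (Sohn et al., 2020): hard pseudo-labels are taken from the weakly-augmented view, and a confidence-gated cross-entropy is applied to the strongly-augmented view. This is an illustrative reconstruction of the standard FixMatch objective, not code from this paper; the threshold value and array shapes are assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fixmatch_unlabeled_loss(weak_logits, strong_logits, tau=0.95):
    """FixMatch consistency loss on a batch of unlabeled examples.

    weak_logits, strong_logits: (B, C) model outputs for the weakly-
    and strongly-augmented views of the same B examples.
    tau: confidence threshold (0.95 in the original paper).
    """
    weak_probs = softmax(weak_logits)            # (B, C)
    pseudo = weak_probs.argmax(axis=1)           # hard pseudo-labels
    mask = weak_probs.max(axis=1) >= tau         # keep only confident ones
    strong_probs = softmax(strong_logits)
    # Cross-entropy of the strong view against the pseudo-label.
    ce = -np.log(strong_probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    # Average over the whole batch; masked-out examples contribute zero.
    return float((ce * mask).mean())
```

Low-confidence examples are zeroed out by the mask but still counted in the batch average, matching the paper's normalization by the full unlabeled batch size.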

@article{li2025_2410.11206,
  title={Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning},
  author={Jingyang Li and Jiachun Pan and Vincent Y. F. Tan and Kim-Chuan Toh and Pan Zhou},
  journal={arXiv preprint arXiv:2410.11206},
  year={2025}
}