ResearchTrend.AI
Do ImageNet-trained models learn shortcuts? The impact of frequency shortcuts on generalization

5 March 2025
Shunxin Wang
Raymond N. J. Veldhuis
N. Strisciuglio
Abstract

Frequency shortcuts are specific frequency patterns that models rely heavily on for correct classification. Previous studies have shown that models trained on small image datasets often exploit such shortcuts, potentially impairing their generalization performance. However, existing methods for identifying frequency shortcuts require expensive computations and become impractical for analyzing models trained on large datasets. In this work, we propose the first approach to efficiently analyze frequency shortcuts at a large scale. We show that both CNN and transformer models learn frequency shortcuts on ImageNet. We also find that frequency-shortcut solutions can yield good performance on out-of-distribution (OOD) test sets that largely retain texture information. However, these shortcuts, which mostly align with texture patterns, hinder model generalization on rendition-based OOD test sets. These observations suggest that current OOD evaluations often overlook the impact of frequency shortcuts on model generalization. Future benchmarks could thus benefit from explicitly assessing and accounting for these shortcuts, to build models that generalize across a broader range of OOD scenarios.
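To make the notion of "frequency patterns" concrete: shortcut analyses of this kind typically probe a model by filtering images in the Fourier domain and checking whether classification survives when only certain frequency bands are kept. The sketch below is an illustrative band-pass utility, not the paper's method; the radial-cutoff parameterization and function names are assumptions for illustration.

```python
import numpy as np

def radial_frequency_mask(shape, cutoff, keep="low"):
    """Binary mask over a centered 2D spectrum.

    Keeps frequencies whose radial distance from DC is below (keep="low")
    or above (keep="high") `cutoff`, a fraction in (0, 1] of the maximum
    radius of the spectrum.
    """
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h / 2, w / 2
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    r_max = np.sqrt(cy ** 2 + cx ** 2)
    mask = r <= cutoff * r_max
    return mask if keep == "low" else ~mask

def filter_frequencies(img, cutoff, keep="low"):
    """Zero out the frequency components outside the selected band."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))          # DC at center
    filtered = spectrum * radial_frequency_mask(img.shape, cutoff, keep)
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

# Feeding filter_frequencies(x, c, "low") / (..., "high") variants of test
# images to a trained classifier reveals which bands its predictions depend on.
img = np.random.rand(64, 64)
low_passed = filter_frequencies(img, cutoff=0.25, keep="low")
```

A model whose accuracy is largely preserved under such aggressive band filtering is a candidate for relying on a frequency shortcut in that band; texture-aligned shortcuts, as the abstract notes, tend to concentrate in bands that texture-preserving OOD sets also retain.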

@article{wang2025_2503.03519,
  title={Do ImageNet-trained models learn shortcuts? The impact of frequency shortcuts on generalization},
  author={Shunxin Wang and Raymond Veldhuis and Nicola Strisciuglio},
  journal={arXiv preprint arXiv:2503.03519},
  year={2025}
}