Foundation Model Insights and a Multi-Model Approach for Superior Fine-Grained One-shot Subset Selection

17 June 2025
Zhijing Wan
Zhixiang Wang
Zheng Wang
Xin Xu
Shin'ichi Satoh
Main: 7 pages · Bibliography: 3 pages · Appendix: 8 pages · 10 figures · 11 tables
Abstract

One-shot subset selection serves as an effective tool to reduce deep learning training costs by identifying an informative data subset based on the information extracted by an information extractor (IE). Traditional IEs, typically pre-trained on the target dataset, are inherently dataset-dependent. Foundation models (FMs) offer a promising alternative, potentially mitigating this limitation. This work investigates two key questions: (1) Can FM-based subset selection outperform traditional IE-based methods across diverse datasets? (2) Do all FMs perform equally well as IEs for subset selection? Extensive experiments uncover surprising insights: FMs consistently outperform traditional IEs on fine-grained datasets, whereas their advantage diminishes on coarse-grained datasets with noisy labels. Motivated by these findings, we propose RAM-APL (RAnking Mean-Accuracy of Pseudo-class Labels), a method tailored for fine-grained image datasets. RAM-APL leverages multiple FMs to enhance subset selection by exploiting their complementary strengths. Our approach achieves state-of-the-art performance on fine-grained datasets, including Oxford-IIIT Pet, Food-101, and Caltech-UCSD Birds-200-2011.
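
The abstract does not spell out RAM-APL's scoring rule, but the general one-shot pipeline it builds on can be sketched: embed the unlabeled pool with one or more foundation models, derive pseudo-class structure, score each example, and keep the top-scoring examples up to the budget. The sketch below is an illustrative assumption of that pipeline using k-means pseudo-labels and a distance-to-centroid score; it is not the authors' RAM-APL ranking, and the function name, score definition, and embedding sources are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

def select_subset(fm_embeddings, budget, n_pseudo_classes=50, seed=0):
    """Illustrative one-shot selection: average per-FM informativeness scores.

    fm_embeddings: list of (n_samples, d_i) arrays, one per foundation model.
    budget: number of examples to keep.
    """
    n_samples = fm_embeddings[0].shape[0]
    scores = np.zeros(n_samples)
    for emb in fm_embeddings:
        # Pseudo-class labels from k-means over this FM's embedding space.
        km = KMeans(n_clusters=n_pseudo_classes, n_init=10, random_state=seed)
        labels = km.fit_predict(emb)
        # Hypothetical score: distance to the assigned centroid
        # (boundary-like examples score higher).
        dists = np.linalg.norm(emb - km.cluster_centers_[labels], axis=1)
        # Standardize so FMs with different embedding scales contribute equally.
        scores += (dists - dists.mean()) / (dists.std() + 1e-8)
    # Keep the highest-scoring examples as the training subset.
    return np.argsort(-scores)[:budget]

# Example (hypothetical embeddings for a 1,000-image pool):
# emb_a = fm_a.encode(images); emb_b = fm_b.encode(images)
# subset_idx = select_subset([emb_a, emb_b], budget=100)

Averaging standardized scores across models is one simple way to combine complementary FMs; the paper's RAM-APL instead ranks models by the mean accuracy of their pseudo-class labels, which this sketch does not implement.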

@article{wan2025_2506.14473,
  title={Foundation Model Insights and a Multi-Model Approach for Superior Fine-Grained One-shot Subset Selection},
  author={Zhijing Wan and Zhixiang Wang and Zheng Wang and Xin Xu and Shin'ichi Satoh},
  journal={arXiv preprint arXiv:2506.14473},
  year={2025}
}