Coreset selection addresses the challenge of finding a small, representative subset of a large dataset that preserves the essential patterns needed for effective machine learning. Although several surveys have examined data reduction strategies before, most focus narrowly on either classical geometry-based methods or active learning techniques. In contrast, this survey presents a more comprehensive view by unifying three major lines of coreset research, namely training-free, training-oriented, and label-free approaches, into a single taxonomy. We present subfields often overlooked by existing work, including submodular formulations, bilevel optimization, and recent progress in pseudo-labeling for unlabeled datasets. Additionally, we examine how pruning strategies influence generalization and neural scaling laws, offering new insights that are absent from prior reviews. Finally, we compare these methods under varying computational, robustness, and performance demands, and we highlight open challenges for future research, such as outlier filtering, robustness guarantees, and adapting coreset selection to foundation models.
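To make the training-free, geometry-based line of work concrete, the following is a minimal sketch of greedy k-center selection, a classical strategy in this family that iteratively picks the point farthest from the subset chosen so far. The function name and feature representation are illustrative assumptions, not part of the survey itself.

```python
import numpy as np

def k_center_greedy(features, k, seed=0):
    """Training-free coreset selection (greedy k-center sketch):
    repeatedly add the point farthest from the current subset,
    so the coreset covers the feature space as evenly as possible."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]  # random first center
    # distance from every point to its nearest selected center
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))    # farthest remaining point
        selected.append(idx)
        dists = np.minimum(
            dists, np.linalg.norm(features - features[idx], axis=1)
        )
    return selected

# toy usage: keep a 10-point coreset of 100 random 2-D points
X = np.random.default_rng(1).normal(size=(100, 2))
coreset = k_center_greedy(X, k=10)
```

Training-oriented and label-free methods replace this purely geometric criterion with signals from model training or pseudo-labels, but the greedy selection loop above is a common backbone.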
@article{moser2025_2505.17799,
  title={A Coreset Selection of Coreset Selection Literature: Introduction and Recent Advances},
  author={Brian B. Moser and Arundhati S. Shanbhag and Stanislav Frolov and Federico Raue and Joachim Folz and Andreas Dengel},
  journal={arXiv preprint arXiv:2505.17799},
  year={2025}
}