Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios

22 October 2024
Kai Wang, Zekai Li, Zhi-Qi Cheng, Samir Khaki, Ahmad Sajedi, Ramakrishna Vedantam, Konstantinos N. Plataniotis, Alexander G. Hauptmann, Yang You
Abstract

Dataset distillation has demonstrated strong performance on simple datasets like CIFAR, MNIST, and TinyImageNet but struggles to achieve similar results in more complex scenarios. In this paper, we propose EDF (Emphasize Discriminative Features), a dataset distillation method that enhances key discriminative regions in synthetic images using Grad-CAM activation maps. Our approach is inspired by a key observation: in simple datasets, high-activation areas typically occupy most of the image, whereas in complex scenarios these areas are much smaller. Unlike previous methods that treat all pixels equally when synthesizing images, EDF uses Grad-CAM activation maps to enhance high-activation areas. From a supervision perspective, we downplay supervision signals that have lower losses, as they contain common patterns. Additionally, to help the DD community better explore complex scenarios, we build the Complex Dataset Distillation (Comp-DD) benchmark by meticulously selecting sixteen subsets, eight easy and eight hard, from ImageNet-1K. EDF consistently outperforms SOTA results in complex scenarios such as ImageNet-1K subsets. We hope this work inspires and encourages more researchers to improve the practicality and efficacy of DD. Our code and benchmark will be made public at this https URL.

@article{wang2025_2410.17193,
  title={Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios},
  author={Kai Wang and Zekai Li and Zhi-Qi Cheng and Samir Khaki and Ahmad Sajedi and Ramakrishna Vedantam and Konstantinos N. Plataniotis and Alexander G. Hauptmann and Yang You},
  journal={arXiv preprint arXiv:2410.17193},
  year={2025}
}