Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation (arXiv:2211.11004)
20 November 2022
Authors: Jiawei Du, Yiding Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li
Tags: DD
Papers citing "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (18 of 18 papers shown)
Title | Authors | Tags | Metrics | Date
When Dynamic Data Selection Meets Data Augmentation | S. M. I. Simon X. Yang, Peng Ye, F. Shen, Dongzhan Zhou | | 22 / 0 / 0 | 02 May 2025
Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios | Kai Wang, Zekai Li, Zhi-Qi Cheng, Samir Khaki, A. Sajedi, Ramakrishna Vedantam, Konstantinos N. Plataniotis, Alexander G. Hauptmann, Yang You | DD | 62 / 4 / 0 | 22 Oct 2024
Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching | Ruonan Yu, Songhua Liu, Jingwen Ye, Xinchao Wang | DD | 18 / 4 / 0 | 10 Oct 2024
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks | S. Joshi, Jiayi Ni, Baharan Mirzasoleiman | DD | 65 / 2 / 0 | 03 Oct 2024
Distilling Long-tailed Datasets | Zhenghao Zhao, Haoxuan Wang, Yuzhang Shang, Kai Wang, Yan Yan | DD | 46 / 2 / 0 | 24 Aug 2024
Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator | Xin Zhang, Jiawei Du, Ping Liu, Joey Tianyi Zhou | DD | 42 / 2 / 0 | 13 Aug 2024
A Label is Worth a Thousand Images in Dataset Distillation | Tian Qin, Zhiwei Deng, David Alvarez-Melis | DD | 84 / 10 / 0 | 15 Jun 2024
ATOM: Attention Mixer for Efficient Dataset Distillation | Samir Khaki, A. Sajedi, Kai Wang, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis | | 31 / 3 / 0 | 02 May 2024
Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching | Yuchen Zhang, Tianle Zhang, Kai Wang, Ziyao Guo, Yuxuan Liang, Xavier Bresson, Wei Jin, Yang You | | 28 / 23 / 0 | 07 Feb 2024
Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning | Xin Zhang, Jiawei Du, Yunsong Li, Weiying Xie, Joey Tianyi Zhou | | 25 / 6 / 0 | 22 Nov 2023
AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories | Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam | DD | 15 / 1 / 0 | 16 Oct 2023
Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching | Tao Feng, Jie Zhang, Peizheng Wang, Zhijie Wang, Shengyuan Pang | DD | 46 / 0 / 0 | 29 May 2023
Dataset Distillation: A Comprehensive Review | Ruonan Yu, Songhua Liu, Xinchao Wang | DD | 27 / 121 / 0 | 17 Jan 2023
A Comprehensive Survey of Dataset Distillation | Shiye Lei, Dacheng Tao | DD | 31 / 87 / 0 | 13 Jan 2023
Dataset Distillation Using Parameter Pruning | Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama | DD | 31 / 20 / 0 | 29 Sep 2022
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks | Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, Vincent Y. F. Tan | AAML | 102 / 132 / 0 | 07 Oct 2021
Dataset Condensation with Differentiable Siamese Augmentation | Bo-Lu Zhao, Hakan Bilen | DD | 189 / 288 / 0 | 16 Feb 2021
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang | ODL | 273 / 2,878 / 0 | 15 Sep 2016