Flexible Dataset Distillation: Learn Labels Instead of Images (arXiv:2006.08572)
15 June 2020
Ondrej Bohdal
Yongxin Yang
Timothy M. Hospedales
Tags: DD
Papers citing "Flexible Dataset Distillation: Learn Labels Instead of Images" (33 papers shown)
Transferable text data distillation by trajectory matching
Rong Yao, Hailin Hu, Yifei Fu, Hanting Chen, Wenyi Fang, Fanyi Du, Kai Han, Yunhe Wang. 14 Apr 2025.

Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching
Ruonan Yu, Songhua Liu, Jingwen Ye, Xinchao Wang. 10 Oct 2024. Tags: DD.

A Label is Worth a Thousand Images in Dataset Distillation
Tian Qin, Zhiwei Deng, David Alvarez-Melis. 15 Jun 2024. Tags: DD.

GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost
Xinyi Shang, Peng Sun, Tao Lin. 23 May 2024.

ATOM: Attention Mixer for Efficient Dataset Distillation
Samir Khaki, A. Sajedi, Kai Wang, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis. 02 May 2024.

DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation
Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura. 30 Mar 2024. Tags: DD.

DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation
Yifan Wu, Jiawei Du, Ping Liu, Yuewei Lin, Wenqing Cheng, Wei-ping Xu. 20 Mar 2024. Tags: DD, AAML.

AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories
Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam. 16 Oct 2023. Tags: DD.

DataDAM: Efficient Dataset Distillation with Attention Matching
A. Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis. 29 Sep 2023. Tags: DD.

Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching
Tao Feng, Jie Zhang, Peizheng Wang, Zhijie Wang, Shengyuan Pang. 29 May 2023. Tags: DD.

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
Patrik Okanovic, R. Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas. 28 May 2023. Tags: DD.

Provable Data Subset Selection For Efficient Neural Network Training
M. Tukan, Samson Zhou, Alaa Maalouf, Daniela Rus, Vladimir Braverman, Dan Feldman. 09 Mar 2023. Tags: MLT.

Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang. 17 Jan 2023. Tags: DD.

A Comprehensive Survey of Dataset Distillation
Shiye Lei, Dacheng Tao. 13 Jan 2023. Tags: DD.

Data Distillation: A Survey
Noveen Sachdeva, Julian McAuley. 11 Jan 2023. Tags: DD.

Accelerating Dataset Distillation via Model Augmentation
Lei Zhang, Jie M. Zhang, Bowen Lei, Subhabrata Mukherjee, Xiang Pan, Bo Zhao, Caiwen Ding, Heng Chang, Dongkuan Xu. 12 Dec 2022. Tags: DD.

Towards Robust Dataset Learning
Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang R. Zhang. 19 Nov 2022. Tags: DD, OOD.

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh. 19 Nov 2022. Tags: DD.

Dataset Distillation via Factorization
Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang. 30 Oct 2022. Tags: DD.

Dataset Distillation using Neural Feature Regression
Yongchao Zhou, E. Nezhadarya, Jimmy Ba. 01 Jun 2022. Tags: DD, FedML.

Privacy for Free: How does Dataset Condensation Help Privacy?
Tian Dong, Bo Zhao, Lingjuan Lyu. 01 Jun 2022. Tags: DD.

Synthesizing Informative Training Samples with GAN
Bo Zhao, Hakan Bilen. 15 Apr 2022. Tags: DD.

Dataset Distillation by Matching Training Trajectories
George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu. 22 Mar 2022. Tags: FedML, DD.

The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano, Aaqib Saeed. 01 Dec 2021.

Graph Condensation for Graph Neural Networks
Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, Neil Shah. 14 Oct 2021. Tags: DD, AI4CE.

Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee. 27 Jul 2021. Tags: DD.

Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error
Ondrej Bohdal, Yongxin Yang, Timothy M. Hospedales. 17 Jun 2021. Tags: UQCV, OOD.

Data Distillation for Text Classification
Yongqi Li, Wenjie Li. 17 Apr 2021. Tags: DD.
Dataset Meta-Learning from Kernel Ridge-Regression
Timothy Nguyen, Zhourong Chen, Jaehoon Lee. 30 Oct 2020. Tags: DD.

Dataset Condensation with Gradient Matching
Bo Zhao, Konda Reddy Mopuri, Hakan Bilen. 10 Jun 2020. Tags: DD.

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. 09 Jun 2020. Tags: VLM.

Meta-Learning in Neural Networks: A Survey
Timothy M. Hospedales, Antreas Antoniou, P. Micaelli, Amos Storkey. 11 Apr 2020. Tags: OOD.

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine. 09 Mar 2017. Tags: OOD.