Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation

arXiv:2211.11004 · 20 November 2022
Jiawei Du, Yiding Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li
DD

Papers citing "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation"

18 / 18 papers shown

When Dynamic Data Selection Meets Data Augmentation
Simon X. Yang, Peng Ye, F. Shen, Dongzhan Zhou
22 · 0 · 0
02 May 2025

Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios
Kai Wang, Zekai Li, Zhi-Qi Cheng, Samir Khaki, A. Sajedi, Ramakrishna Vedantam, Konstantinos N. Plataniotis, Alexander G. Hauptmann, Yang You
DD
62 · 4 · 0
22 Oct 2024

Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching
Ruonan Yu, Songhua Liu, Jingwen Ye, Xinchao Wang
DD
18 · 4 · 0
10 Oct 2024

Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
S. Joshi, Jiayi Ni, Baharan Mirzasoleiman
DD
65 · 2 · 0
03 Oct 2024

Distilling Long-tailed Datasets
Zhenghao Zhao, Haoxuan Wang, Yuzhang Shang, Kai Wang, Yan Yan
DD
46 · 2 · 0
24 Aug 2024

Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator
Xin Zhang, Jiawei Du, Ping Liu, Joey Tianyi Zhou
DD
42 · 2 · 0
13 Aug 2024

A Label is Worth a Thousand Images in Dataset Distillation
Tian Qin, Zhiwei Deng, David Alvarez-Melis
DD
81 · 10 · 0
15 Jun 2024

ATOM: Attention Mixer for Efficient Dataset Distillation
Samir Khaki, A. Sajedi, Kai Wang, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis
28 · 3 · 0
02 May 2024

Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching
Yuchen Zhang, Tianle Zhang, Kai Wang, Ziyao Guo, Yuxuan Liang, Xavier Bresson, Wei Jin, Yang You
28 · 23 · 0
07 Feb 2024

Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning
Xin Zhang, Jiawei Du, Yunsong Li, Weiying Xie, Joey Tianyi Zhou
25 · 6 · 0
22 Nov 2023

AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories
Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam
DD
15 · 1 · 0
16 Oct 2023

Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching
Tao Feng, Jie Zhang, Peizheng Wang, Zhijie Wang, Shengyuan Pang
DD
46 · 0 · 0
29 May 2023

Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang
DD
27 · 121 · 0
17 Jan 2023

A Comprehensive Survey of Dataset Distillation
Shiye Lei, Dacheng Tao
DD
31 · 87 · 0
13 Jan 2023

Dataset Distillation Using Parameter Pruning
Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
DD
25 · 20 · 0
29 Sep 2022

Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, Vincent Y. F. Tan
AAML
99 · 132 · 0
07 Oct 2021

Dataset Condensation with Differentiable Siamese Augmentation
Bo-Lu Zhao, Hakan Bilen
DD
189 · 288 · 0
16 Feb 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
273 · 2,878 · 0
15 Sep 2016