ResearchTrend.AI

arXiv:2310.10541
AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories
16 October 2023 · Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam · DD

Papers citing "AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories"

6 / 6 papers shown

  • Label-Augmented Dataset Distillation
    Seoungyoon Kang, Youngsun Lim, Hyunjung Shim
    DD · 30 · 2 · 0 · 24 Sep 2024
  • Generalizing Dataset Distillation via Deep Generative Prior
    George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
    DD · 91 · 84 · 0 · 02 May 2023
  • Dataset Distillation via Factorization
    Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang
    DD · 124 · 137 · 0 · 30 Oct 2022
  • Dataset Condensation via Efficient Synthetic-Data Parameterization
    Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, Hyun Oh Song
    DD · 378 · 155 · 0 · 30 May 2022
  • Dataset Condensation with Differentiable Siamese Augmentation
    Bo-Lu Zhao, Hakan Bilen
    DD · 189 · 288 · 0 · 16 Feb 2021
  • Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    Chelsea Finn, Pieter Abbeel, Sergey Levine
    OOD · 243 · 11,568 · 0 · 09 Mar 2017