Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee
DD · 27 July 2021

Papers citing "Dataset Distillation with Infinitely Wide Convolutional Networks"

50 of 161 citing papers shown
Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection
Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang
DD · 28 May 2023

Summarizing Stream Data for Memory-Constrained Online Continual Learning
Jianyang Gu, Kai Wang, Wei Jiang, Yang You
DD · 26 May 2023

On the Size and Approximation Error of Distilled Sets
Alaa Maalouf, M. Tukan, Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
DD · 23 May 2023

A Survey on Dataset Distillation: Approaches, Applications and Future Directions
Jiahui Geng, Zongxiong Chen, Yuandou Wang, Herbert Woisetschlaeger, Sonja Schimmler, Ruben Mayer, Zhiming Zhao, Chunming Rong
DD · 03 May 2023

Generalizing Dataset Distillation via Deep Generative Prior
George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
DD · 02 May 2023

TRAK: Attributing Model Behavior at Scale
Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, A. Madry
TDI · 24 Mar 2023

Kernel Regression with Infinite-Width Neural Networks on Millions of Examples
Ben Adlam, Jaehoon Lee, Shreyas Padhy, Zachary Nado, Jasper Snoek
09 Mar 2023

Provable Data Subset Selection For Efficient Neural Network Training
M. Tukan, Samson Zhou, Alaa Maalouf, Daniela Rus, Vladimir Braverman, Dan Feldman
MLT · 09 Mar 2023

InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning
Ziheng Qin, K. Wang, Zangwei Zheng, Jianyang Gu, Xiang Peng, ..., Daquan Zhou, Lei Shang, Baigui Sun, Xuansong Xie, Yang You
08 Mar 2023

DiM: Distilling Dataset into Generative Model
Kai Wang, Jianyang Gu, Daquan Zhou, Zheng Hua Zhu, Wei Jiang, Yang You
DD · 08 Mar 2023

DREAM: Efficient Dataset Distillation by Representative Matching
Yanqing Liu, Jianyang Gu, Kai Wang, Zheng Hua Zhu, Wei Jiang, Yang You
DD · 28 Feb 2023

Dataset Distillation with Convexified Implicit Gradients
Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
DD · 13 Feb 2023

Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation
Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus
DD · 02 Feb 2023

Differentially Private Kernel Inducing Points using features from ScatterNets (DP-KIP-ScatterNet) for Privacy Preserving Data Distillation
Margarita Vinaroz, M. Park
DD · 31 Jan 2023

Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang
DD · 17 Jan 2023

A Comprehensive Survey of Dataset Distillation
Shiye Lei, Dacheng Tao
DD · 13 Jan 2023

Data Distillation: A Survey
Noveen Sachdeva, Julian McAuley
DD · 11 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
DD · 03 Jan 2023

Accelerating Dataset Distillation via Model Augmentation
Lei Zhang, Jie M. Zhang, Bowen Lei, Subhabrata Mukherjee, Xiang Pan, Bo-Lu Zhao, Caiwen Ding, Y. Li, Dongkuan Xu
DD · 12 Dec 2022

Decentralized Learning with Multi-Headed Distillation
A. Zhmoginov, Mark Sandler, Nolan Miller, Gus Kristiansen, Max Vladymyrov
FedML · 28 Nov 2022

Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation
Jiawei Du, Yiding Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li
DD · 20 Nov 2022

Towards Robust Dataset Learning
Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang R. Zhang
DD, OOD · 19 Nov 2022

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
DD · 19 Nov 2022

Black-box Coreset Variational Inference
Dionysis Manousakas, H. Ritter, Theofanis Karaletsos
BDL · 04 Nov 2022

Dataset Distillation via Factorization
Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang
DD · 30 Oct 2022

Efficient Dataset Distillation Using Random Feature Approximation
Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus
DD · 21 Oct 2022

Efficient Bi-Level Optimization for Recommendation Denoising
Zongwei Wang, Min Gao, Wentao Li, Junliang Yu, Linxin Guo, Hongzhi Yin
19 Oct 2022

Compute-Efficient Deep Learning: Algorithmic Trends and Opportunities
Brian Bartoldson, B. Kailkhura, Davis W. Blalock
13 Oct 2022

On Divergence Measures for Bayesian Pseudocoresets
Balhae Kim, J. Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, Juho Lee
DD · 12 Oct 2022

Few-shot Backdoor Attacks via Neural Tangent Kernels
J. Hayase, Sewoong Oh
12 Oct 2022

What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness?
Nikolaos Tsilivis, Julia Kempe
AAML · 11 Oct 2022

Dataset Distillation Using Parameter Pruning
Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
DD · 29 Sep 2022

Fast Neural Kernel Embeddings for General Activations
Insu Han, A. Zandieh, Jaehoon Lee, Roman Novak, Lechao Xiao, Amin Karbasi
09 Sep 2022

Federated Learning via Decentralized Dataset Distillation in Resource-Constrained Edge Environments
Rui Song, Dai Liu, Da Chen, Andreas Festag, Carsten Trinitis, Martin Schulz, Alois C. Knoll
DD, FedML · 24 Aug 2022

Dataset Condensation with Latent Space Knowledge Factorization and Sharing
Haebeom Lee, Dong Bok Lee, Sung Ju Hwang
DD · 21 Aug 2022

Open Source Vizier: Distributed Infrastructure and API for Reliable and Flexible Blackbox Optimization
Xingyou Song, Sagi Perel, Chansoo Lee, Greg Kochanski, Daniel Golovin
27 Jul 2022

Can we achieve robustness from data alone?
Nikolaos Tsilivis, Jingtong Su, Julia Kempe
OOD, DD · 24 Jul 2022

FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix X. Yu, Cho-Jui Hsieh
FedML, DD · 20 Jul 2022

DC-BENCH: Dataset Condensation Benchmark
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
DD · 20 Jul 2022

A Fast, Well-Founded Approximation to the Empirical Neural Tangent Kernel
Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland
AAML · 25 Jun 2022

Fast Finite Width Neural Tangent Kernel
Roman Novak, Jascha Narain Sohl-Dickstein, S. Schoenholz
AAML · 17 Jun 2022

Condensing Graphs via One-Step Gradient Matching
Wei Jin, Xianfeng Tang, Haoming Jiang, Zheng Li, Danqing Zhang, Jiliang Tang, Bin Ying
DD · 15 Jun 2022

Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks
Zhiwei Deng, Olga Russakovsky
FedML, DD · 06 Jun 2022

Infinite Recommendation Networks: A Data-Centric Approach
Noveen Sachdeva, Mehak Preet Dhaliwal, Carole-Jean Wu, Julian McAuley
DD · 03 Jun 2022

Dataset Distillation using Neural Feature Regression
Yongchao Zhou, E. Nezhadarya, Jimmy Ba
DD, FedML · 01 Jun 2022

Privacy for Free: How does Dataset Condensation Help Privacy?
Tian Dong, Bo-Lu Zhao, Lingjuan Lyu
DD · 01 Jun 2022

Dataset Condensation via Efficient Synthetic-Data Parameterization
Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, Hyun Oh Song
DD · 30 May 2022

Dataset Pruning: Reducing Training Data by Examining Generalization Influence
Shuo Yang, Zeke Xie, Hanyu Peng, Minjing Xu, Mingming Sun, P. Li
DD · 19 May 2022

Synthesizing Informative Training Samples with GAN
Bo-Lu Zhao, Hakan Bilen
DD · 15 Apr 2022

Information-theoretic Online Memory Selection for Continual Learning
Shengyang Sun, Daniele Calandriello, Huiyi Hu, Ang Li, Michalis K. Titsias
CLL · 10 Apr 2022