
Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation

International Conference on Learning Representations (ICLR), 2023
2 February 2023
Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus

Papers citing "Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation"

10 papers
A Law of Data Reconstruction for Random Features (and Beyond)
Leonardo Iurada, Simone Bombari, Tatiana Tommasi, Marco Mondelli
26 Sep 2025

No Prior, No Leakage: Revisiting Reconstruction Attacks in Trained Neural Networks
Yehonatan Refael, Guy Smorodinsky, Ofir Lindenbaum, Itay Safran
25 Sep 2025

On Reconstructing Training Data From Bayesian Posteriors and Trained Models
George Wynne
24 Jul 2025

Querying Kernel Methods Suffices for Reconstructing their Training Data
Daniel Barzilai, Yuval Margalit, Eitan Gronich, Gilad Yehudai, Meirav Galun, Ronen Basri
25 May 2025

FairDD: Fair Dataset Distillation
Qihang Zhou, Shenhao Fang, Shibo He, Wenchao Meng, Jiming Chen
29 Nov 2024

Slowing Down Forgetting in Continual Learning
Pascal Janetzky, Tobias Schlagenhauf, Stefan Feuerriegel
11 Nov 2024

Not All Samples Should Be Utilized Equally: Towards Understanding and Improving Dataset Distillation
Shaobo Wang, Yantai Yang, Qilong Wang, Kaixin Li, Linfeng Zhang, Junchi Yan
22 Aug 2024

State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey
Chaoyu Zhang, Shaoyu Li
25 Feb 2024

How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features
International Conference on Machine Learning (ICML), 2023
Simone Bombari, Marco Mondelli
20 May 2023

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Neural Information Processing Systems (NeurIPS), 2020
Vitaly Feldman, Chiyuan Zhang
09 Aug 2020