
MAGIC: Near-Optimal Data Attribution for Deep Learning

Abstract

The goal of predictive data attribution is to estimate how adding or removing a given set of training datapoints will affect model predictions. In convex settings, this goal can be met directly (e.g., via the infinitesimal jackknife). In large-scale, non-convex settings, however, existing methods are far less successful: their estimates often correlate only weakly with ground truth. In this work, we present a new data attribution method (MAGIC) that combines classical methods with recent advances in metadifferentiation to (nearly) optimally estimate the effect of adding or removing training data on model predictions.
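As background for the convex case mentioned in the abstract, the sketch below illustrates a first-order (infinitesimal-jackknife / influence-function) estimate of leave-one-out effects for an L2-regularized logistic regression. It is only an illustration of the classical convex baseline, not the MAGIC method; the function name leave_one_out_effects, the reg parameter, and the NumPy setup are assumptions made for this example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def leave_one_out_effects(X, y, theta, x_test, y_test, reg=1e-3):
    """First-order (infinitesimal-jackknife) estimate of how removing each
    training point changes the loss on a test example, for L2-regularized
    logistic regression with fitted parameters `theta` and labels in {0, 1}.

    Returns s of shape (n,), where s[i] is the predicted increase in the
    test loss if training point i were removed and the model refit."""
    n, d = X.shape
    p = sigmoid(X @ theta)

    # Per-example training-loss gradients: (p_i - y_i) * x_i, shape (n, d).
    grads = (p - y)[:, None] * X

    # Hessian of the average regularized training loss at theta, shape (d, d).
    w = p * (1.0 - p)
    H = (X.T * w) @ X / n + reg * np.eye(d)

    # Gradient of the test-example loss at theta, shape (d,).
    g_test = (sigmoid(x_test @ theta) - y_test) * x_test

    # Predicted effect of removing point i: (1/n) * g_test^T H^{-1} grad_i.
    return grads @ np.linalg.solve(H, g_test) / n

In the large-scale, non-convex regime, the abstract notes that estimates of this kind often correlate only weakly with ground truth, which is the gap MAGIC targets.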

@article{ilyas2025_2504.16430,
  title={MAGIC: Near-Optimal Data Attribution for Deep Learning},
  author={Andrew Ilyas and Logan Engstrom},
  journal={arXiv preprint arXiv:2504.16430},
  year={2025}
}