GraSS: Scalable Influence Function with Sparse Gradient Compression

Abstract

Gradient-based data attribution methods, such as influence functions, are critical for understanding the impact of individual training samples without requiring repeated model retraining. However, their scalability is often limited by the high computational and memory costs of per-sample gradient computation. In this work, we propose GraSS, a novel gradient compression algorithm, and its variant FactGraSS, designed specifically for linear layers, which explicitly leverage the inherent sparsity of per-sample gradients to achieve sub-linear space and time complexity. Extensive experiments demonstrate the effectiveness of our approach, achieving substantial speedups while preserving data influence fidelity. In particular, FactGraSS achieves up to 165% faster throughput on billion-scale models compared to the previous state-of-the-art baselines. Our code is publicly available at this https URL.
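The abstract does not detail GraSS's compression scheme itself, but the general idea it names, exploiting the sparsity of per-sample gradients to store and process them cheaply, can be illustrated with a generic top-k sparsification sketch. This is a hypothetical illustration of sparsity-based gradient compression in general, not the paper's algorithm; `topk_compress` and `topk_decompress` are made-up helper names.

```python
import numpy as np

def topk_compress(grad, k):
    """Hypothetical sketch: keep only the k largest-magnitude entries
    of a per-sample gradient, storing (indices, values, shape)."""
    flat = grad.ravel()
    # argpartition finds the k largest-|.| entries in O(n) time
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], grad.shape

def topk_decompress(idx, vals, shape):
    """Reconstruct a dense gradient with zeros everywhere except
    the retained top-k entries."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = vals
    return out.reshape(shape)

# Toy per-sample gradient: most mass concentrated in two entries
grad = np.array([[0.01, -2.0],
                 [3.0,  0.001]])
idx, vals, shape = topk_compress(grad, k=2)
dense = topk_decompress(idx, vals, shape)
```

Storing only `k` index/value pairs instead of the full gradient is what makes downstream influence computations (inner products between compressed gradients) cheap; the actual GraSS/FactGraSS construction achieves sub-linear complexity through its own compression design described in the paper.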

@article{hu2025_2505.18976,
  title={GraSS: Scalable Influence Function with Sparse Gradient Compression},
  author={Pingbang Hu and Joseph Melkonian and Weijing Tang and Han Zhao and Jiaqi W. Ma},
  journal={arXiv preprint arXiv:2505.18976},
  year={2025}
}