LoRIF: Low-Rank Influence Functions for Scalable Training Data Attribution
Training data attribution (TDA) identifies which training examples most influenced a model's prediction. The best-performing TDA methods exploit gradients to define an influence function. To overcome the scalability challenge arising from gradient computation, the most popular strategy is random projection (e.g., TRAK, LoGRA). However, this still faces two bottlenecks when scaling to large training sets and high-quality attribution: \emph{(i)} storing and loading projected per-example gradients for all training examples, where query latency is dominated by I/O; and \emph{(ii)} forming the inverse Hessian approximation, which is memory-intensive. Both bottlenecks scale with the projection dimension $k$ (per layer, so each projected per-example gradient is a $k \times k$ matrix), yet increasing $k$ is necessary for attribution quality -- creating a quality--scalability tradeoff. We introduce \textbf{LoRIF (Low-Rank Influence Functions)}, which exploits the low-rank structure of gradients to address both bottlenecks. First, we store rank-$r$ factors of the projected per-example gradients rather than full matrices, reducing storage and query-time I/O from $O(k^2)$ to $O(kr)$ per layer per sample. Second, we use truncated SVD with the Woodbury identity to approximate the Hessian term in an $r$-dimensional subspace, reducing memory from $O(k^4)$ to $O(k^2 r)$. On models from 0.1B to 70B parameters trained on datasets with millions of examples, LoRIF achieves up to 20$\times$ storage reduction and query-time speedup compared to LoGRA, while matching or exceeding its attribution quality. LoRIF makes gradient-based TDA practical at frontier scale.
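The two ideas in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical toy construction, not the authors' implementation: the dimensions, damping value, and eigenvalue model are illustrative assumptions. It shows (i) replacing a $k \times k$ projected gradient with its rank-$r$ SVD factors, and (ii) applying a damped low-rank inverse-Hessian approximation via the Woodbury identity, so that only $r \times r$-scale algebra is needed at query time.

```python
# Hypothetical sketch of the two LoRIF ideas on toy data (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
k, r = 64, 4  # projection dimension and truncation rank (illustrative values)

# (i) Low-rank storage: a projected per-layer gradient G (k x k) is replaced
# by its top-r SVD factors, shrinking storage from O(k^2) to O(kr).
G = rng.standard_normal((k, r)) @ rng.standard_normal((r, k))  # rank <= r
U, s, Vt = np.linalg.svd(G, full_matrices=False)
A, B = U[:, :r] * s[:r], Vt[:r, :]   # rank-r factors: 2*k*r numbers, not k*k
G_approx = A @ B                     # reconstruction from the stored factors
print(np.allclose(G, G_approx))      # lossless here since rank(G) <= r

# (ii) Woodbury: approximate the damped Hessian H ~= lam*I + W diag(e) W^T,
# with W an orthonormal basis of top-r eigenvectors; then H^{-1} v needs
# only the r-dimensional subspace, never the full p x p matrix.
p = k * k                                          # vectorized gradient dim
W = np.linalg.qr(rng.standard_normal((p, r)))[0]   # orthonormal top-r basis
e = rng.uniform(1.0, 2.0, r)                       # top-r eigenvalues
lam = 0.1                                          # damping term
v = rng.standard_normal(p)

# Woodbury: (lam*I + W E W^T)^{-1} v = v/lam - W diag(e/(lam*(lam+e))) W^T v
x_woodbury = v / lam - W @ ((e / (lam * (lam + e))) * (W.T @ v))

# Check against the dense solve (feasible only at this toy scale).
H = lam * np.eye(p) + W @ np.diag(e) @ W.T
print(np.allclose(np.linalg.solve(H, v), x_woodbury))
```

The Woodbury step is what removes the memory bottleneck: the dense Hessian above exists only to verify the toy example, while the low-rank path touches nothing larger than $W$ ($p \times r$) and length-$r$ vectors.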