Infusion: Shaping Model Behavior by Editing Training Data via Influence Functions

J Rosser
Robert Kirk
Edward Grefenstette
Jakob Foerster
Laura Ruis
Main: 9 pages · 17 figures · 2 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

Influence functions are commonly used to attribute model behavior to training documents. We explore the reverse: crafting training data that induces model behavior. Our framework, Infusion, uses scalable influence-function approximations to compute small perturbations to training documents that induce targeted changes in model behavior through parameter shifts. We evaluate Infusion on data-poisoning tasks across vision and language domains. On CIFAR-10, we show that subtle Infusion edits to just 0.2% (100/45,000) of the training documents can be competitive with the baseline of inserting a small number of explicit behavior examples. We also find that Infusion transfers across architectures (ResNet ↔ CNN), suggesting a single poisoned corpus can affect multiple independently trained models. In preliminary language experiments, we characterize when our approach increases the probability of target behaviors and when it fails, finding it most effective at amplifying behaviors the model has already learned. Taken together, these results show that small, subtle edits to training data can systematically shape model behavior, underscoring the importance of training-data interpretability for adversaries and defenders alike. Code: this https URL.
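The core idea, editing a training input so that training on it nudges the model toward a target behavior, can be illustrated in a toy setting. The sketch below is entirely hypothetical and is not the paper's method: it uses a one-gradient-step logistic-regression "training run" and a finite-difference gradient in place of the scalable influence-function approximation, but it shows the same loop of perturbing a training point to reduce a target example's post-training loss.

```python
import numpy as np

# Toy Infusion-style data edit (hypothetical setup, not the paper's code):
# perturb one training input so that after a single gradient step of
# "training" on it, the loss on a chosen target-behavior example drops.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_one_step(w, x, y, lr=0.1):
    # One SGD step of logistic loss on a single (x, y) training example.
    p = sigmoid(x @ w)
    return w - lr * (p - y) * x

def target_loss(w, x_t, y_t):
    # Cross-entropy loss on the target-behavior example.
    p = sigmoid(x_t @ w)
    return -(y_t * np.log(p) + (1.0 - y_t) * np.log(1.0 - p))

rng = np.random.default_rng(0)
w0 = rng.normal(size=3)                       # initial parameters
x_train, y_train = rng.normal(size=3), 1.0    # the document we may edit
x_t, y_t = rng.normal(size=3), 1.0            # target behavior example

# Finite-difference gradient of the post-training target loss w.r.t. the
# training input (standing in for an influence-function approximation).
eps = 1e-5
grad = np.zeros_like(x_train)
for i in range(x_train.size):
    xp, xm = x_train.copy(), x_train.copy()
    xp[i] += eps
    xm[i] -= eps
    grad[i] = (target_loss(train_one_step(w0, xp, y_train), x_t, y_t)
               - target_loss(train_one_step(w0, xm, y_train), x_t, y_t)) / (2 * eps)

# Small edit in the descent direction: the "poisoned" training document.
x_poisoned = x_train - 0.1 * grad

loss_before = target_loss(train_one_step(w0, x_train, y_train), x_t, y_t)
loss_after = target_loss(train_one_step(w0, x_poisoned, y_train), x_t, y_t)
print(f"target loss before edit: {loss_before:.6f}, after edit: {loss_after:.6f}")
```

A small step along the negative gradient of the post-training target loss decreases that loss, which mirrors the abstract's claim that subtle edits to a handful of training documents can steer downstream behavior; the real method replaces the inner retraining with influence-function approximations so it scales to large models and corpora.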
