
Meta Learning not to Learn: Robustly Informing Meta-Learning under Nuisance-Varying Families

Abstract

In settings where both spurious and causal predictors are available, standard neural networks trained by empirical risk minimization (ERM) with no additional inductive biases tend to rely on the spurious features. Additional inductive biases are therefore needed to guide the network toward generalizable hypotheses. These spurious features are often shared across related tasks, such as estimating disease prognoses from image scans collected at different hospitals, which makes generalization harder still. In such settings, methods must integrate the right inductive biases to generalize across both nuisance-varying families and task families. Motivated by this setting, we present RIME (Robustly Informed Meta lEarning), a new method for meta-learning in the presence of both positive and negative inductive biases (what to learn and what not to learn). We first develop a theoretical causal framework showing why existing approaches to knowledge integration can degrade performance on distributionally robust objectives. We then show that RIME integrates both kinds of bias simultaneously, achieving state-of-the-art performance under distributionally robust objectives in informed meta-learning settings with nuisance-varying families.
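The gap between average-case and distributionally robust performance described above can be illustrated with a toy simulation. This is not the paper's method, only a hedged sketch of the underlying phenomenon: a classifier that shortcuts onto a spurious feature scores well on average accuracy (the ERM objective) but poorly on worst-group accuracy (a common distributionally robust objective), while a classifier using the causal feature does well on both. The group construction and feature names here are illustrative assumptions.

```python
import random

random.seed(0)


def make_group(n, spurious_agree):
    """Generate n examples (x_causal, x_spur, y).

    The causal feature always equals the label; the spurious feature
    agrees with the label with probability `spurious_agree`.
    """
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x_causal = y
        x_spur = y if random.random() < spurious_agree else 1 - y
        data.append((x_causal, x_spur, y))
    return data


# Majority group: spurious feature is almost perfectly predictive.
# Minority group: spurious feature is anti-correlated with the label.
groups = {
    "majority": make_group(900, 0.95),
    "minority": make_group(100, 0.05),
}
total = sum(len(d) for d in groups.values())


def accuracy(predict, data):
    return sum(predict(xc, xs) == y for xc, xs, y in data) / len(data)


# Two hard-coded classifiers standing in for what ERM might learn
# (the spurious shortcut) vs. an invariant, causally informed predictor.
classifiers = {
    "spurious": lambda xc, xs: xs,
    "causal": lambda xc, xs: xc,
}

results = {}
for name, clf in classifiers.items():
    accs = {g: accuracy(clf, d) for g, d in groups.items()}
    avg = sum(len(d) * accs[g] for g, d in groups.items()) / total
    worst = min(accs.values())  # worst-group (robust) accuracy
    results[name] = (avg, worst)
    print(f"{name}: average={avg:.2f}, worst-group={worst:.2f}")
```

Running this shows the spurious classifier's average accuracy is high while its worst-group accuracy collapses on the minority group; the causal classifier is perfect on both metrics, which is the behavior a robust objective rewards.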

@article{mcconnell2025_2503.04570,
  title={Meta Learning not to Learn: Robustly Informing Meta-Learning under Nuisance-Varying Families},
  author={Louis McConnell},
  journal={arXiv preprint arXiv:2503.04570},
  year={2025}
}