Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance Analysis

18 April 2024
Yufan Li, Subhabrata Sen, Ben Adlam
Abstract

In the transfer learning paradigm, models learn useful representations (or features) during a data-rich pretraining stage, and then use the pretrained representation to improve model performance on data-scarce downstream tasks. In this work, we explore transfer learning with the goal of optimizing downstream performance. We introduce a simple linear model that takes as input an arbitrary pretrained feature transform. We derive exact asymptotics of the downstream risk and its fine-grained bias-variance decomposition. We then identify the pretrained representation that optimizes the asymptotic downstream bias and variance averaged over an ensemble of downstream tasks. Our theoretical and empirical analysis uncovers the surprising phenomenon that the optimal featurization is naturally sparse, even in the absence of explicit sparsity-inducing priors or penalties. Additionally, we identify a phase transition in which the optimal pretrained representation shifts from hard selection to soft selection of relevant features.
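
The setup the abstract describes admits a simple numerical illustration. The sketch below is a Monte Carlo estimate of the classical pointwise decomposition risk(x) = (E_D[yhat(x)] - f(x))^2 + Var_D[yhat(x)] + noise, averaged over test inputs and an ensemble of downstream tasks, for a ridge regression fit on a fixed linear feature map. The Gaussian data model, the random-projection features, the ridge fit, and all dimensions are illustrative assumptions, not the paper's exact model or its closed-form asymptotics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's regime):
d, p = 50, 20               # ambient input dim, pretrained feature dim
n_train, n_test = 30, 500   # data-scarce downstream set, test points
n_tasks, n_resample = 20, 50
noise_std, lam = 0.1, 1e-2

# Hypothetical "pretrained" feature transform: a fixed linear map x -> W x.
# A random projection stands in for whatever pretraining produced.
W = rng.normal(size=(p, d)) / np.sqrt(d)

def fit_ridge(Z, y, lam):
    """Downstream linear model: ridge regression on pretrained features."""
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

X_test = rng.normal(size=(n_test, d))
Z_test = X_test @ W.T

bias2, variance = [], []
for _ in range(n_tasks):                      # ensemble of downstream tasks
    beta = rng.normal(size=d) / np.sqrt(d)    # task-specific linear target
    f_test = X_test @ beta                    # noiseless labels on test inputs
    preds = np.empty((n_resample, n_test))
    for r in range(n_resample):               # resample the scarce training set
        X = rng.normal(size=(n_train, d))
        y = X @ beta + noise_std * rng.normal(size=n_train)
        preds[r] = Z_test @ fit_ridge(X @ W.T, y, lam)
    mean_pred = preds.mean(axis=0)
    bias2.append(np.mean((mean_pred - f_test) ** 2))  # squared bias, avg over x
    variance.append(np.mean(preds.var(axis=0)))       # variance over train draws

print(f"avg bias^2 over tasks:   {np.mean(bias2):.4f}")
print(f"avg variance over tasks: {np.mean(variance):.4f}")
```

Replacing the random projection W with a matrix that keeps only a subset of coordinates (hard selection) or shrinks them continuously (soft selection) gives a rough way to probe the two regimes the abstract contrasts.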

@article{li2025_2404.12481,
  title={Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance Analysis},
  author={Yufan Li and Subhabrata Sen and Ben Adlam},
  journal={arXiv preprint arXiv:2404.12481},
  year={2025}
}