ResearchTrend.AI
arXiv:2006.14845
Transfer Learning via ℓ1 Regularization

26 June 2020
Masaaki Takada
Hironori Fujisawa
Abstract

Machine learning algorithms typically require abundant data from a stationary environment, yet many real-world environments are nonstationary, and a critical question is how to adapt models effectively as the environment changes. We propose a method for transferring knowledge from a source domain to a target domain via ℓ1 regularization: in addition to an ordinary ℓ1 penalty on the target parameters, we impose an ℓ1 penalty on the differences between the source and target parameters. The method therefore yields sparsity both in the estimates themselves and in their changes from the source. The proposed estimator has a tight estimation error bound under a stationary environment, and it remains unchanged from the source estimate when the residuals are small. Moreover, the estimate is consistent with the underlying function even when the source estimate is wrong due to nonstationarity. Empirical results demonstrate that the proposed method effectively balances stability and plasticity.
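The objective described in the abstract can be sketched as a least-squares problem with two ℓ1 penalties, one on the coefficients and one on their deviation from the source coefficients. Below is a minimal illustrative implementation via proximal gradient descent; the squared-error loss, the function names, and the solver choice are my assumptions, not details taken from the paper. The key observation is that the proximal operator of the combined penalty separates over coordinates, and its scalar minimizer is always either a kink point (0 or the source value) or a stationary point of one of the smooth pieces, so six candidates suffice.

```python
import numpy as np

def prox_double_l1(z, s, l1, l2):
    """Elementwise prox of b -> l1*|b| + l2*|b - s| evaluated at z.
    The convex scalar objective 0.5*(b-z)^2 + l1*|b| + l2*|b-s| attains its
    minimum at a kink (0 or s) or at a stationary point of a smooth piece,
    so checking six candidates is exact."""
    cands = np.stack([np.zeros_like(z), s,
                      z + l1 + l2, z + l1 - l2,
                      z - l1 + l2, z - l1 - l2])
    obj = 0.5 * (cands - z) ** 2 + l1 * np.abs(cands) + l2 * np.abs(cands - s)
    return cands[np.argmin(obj, axis=0), np.arange(z.size)]

def transfer_lasso(X, y, beta_src, l1=0.1, l2=0.1, n_iter=500):
    """Sketch of the transfer objective (assumed squared-error loss):
    minimize (1/2n)*||y - X b||^2 + l1*||b||_1 + l2*||b - beta_src||_1,
    solved by proximal gradient descent with a fixed step 1/L."""
    n, _ = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    beta = beta_src.copy()
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = prox_double_l1(beta - grad / L, beta_src, l1 / L, l2 / L)
    return beta
```

Setting `l2 = 0` recovers an ordinary lasso, while a large `l2` pins the estimate to `beta_src`; this is one way the stability/plasticity trade-off mentioned in the abstract can be tuned.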
