Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement

3 October 2024
Gaurav Patel
Christopher Sandino
Behrooz Mahasseni
Ellen L. Zippi
Erdrin Azemi
Ali Moin
Juri Minxha
Topics: TTA, AI4TS
Abstract

In this paper, we propose a framework for efficient Source-Free Domain Adaptation (SFDA) in the context of time series, focusing on enhancing both parameter efficiency and data-sample utilization. Our approach introduces an improved paradigm for source-model preparation and target-side adaptation, aiming to improve training efficiency during target adaptation. Specifically, we reparameterize the source model's weights with a Tucker-style decomposition, factorizing the model into a compact form during the source-model preparation phase. During target-side adaptation, only a subset of these decomposed factors is fine-tuned, leading to significant improvements in training efficiency. We demonstrate via PAC-Bayesian analysis that this selective fine-tuning strategy implicitly regularizes the adaptation process by constraining the model's learning capacity. Furthermore, this reparameterization reduces the overall model size and enhances inference efficiency, making the approach particularly well-suited for resource-constrained devices. Additionally, we demonstrate that our framework is compatible with various SFDA methods and achieves significant computational efficiency, reducing the number of fine-tuned parameters and inference overhead in terms of MACs by over 90% while maintaining model performance.

@article{patel2025_2410.02147,
  title={Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement},
  author={Gaurav Patel and Christopher Sandino and Behrooz Mahasseni and Ellen L. Zippi and Erdrin Azemi and Ali Moin and Juri Minxha},
  journal={arXiv preprint arXiv:2410.02147},
  year={2025}
}