Model-free Online Learning for the Kalman Filter: Forgetting Factor and Logarithmic Regret

13 May 2025
Jiachen Qian
Yang Zheng
Abstract

We consider the problem of online prediction for an unknown, non-explosive linear stochastic system. With a known system model, the optimal predictor is the celebrated Kalman filter. In the case of unknown systems, existing approaches based on recursive least squares and its variants may suffer from degraded performance due to the highly imbalanced nature of the regression model. This imbalance can easily lead to overfitting and thus degrade prediction accuracy. We tackle this problem by injecting an inductive bias into the regression model via exponential forgetting. While exponential forgetting is a common technique in online learning, it is typically used for re-weighting data. In contrast, our approach uses it to balance the regression model. This achieves a better trade-off between regression and regularization errors, and simultaneously reduces the accumulation error. With new proof techniques, we also provide a sharper logarithmic regret bound of $O(\log^3 N)$, where $N$ is the number of observations.
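For intuition, the following is a minimal sketch of recursive least squares with an exponential forgetting factor for one-step-ahead prediction from past observations. The window length `p`, forgetting factor `lam`, and initialization `delta` are illustrative assumptions; this is not the paper's exact predictor or its regret analysis.

```python
import numpy as np

def forgetting_rls_predict(y, p=5, lam=0.98, delta=1e3):
    """One-step-ahead prediction of a scalar series via recursive least
    squares with exponential forgetting factor `lam` in (0, 1].

    Regresses y[t] on the previous `p` observations (model-free setting).
    Parameter names and values are illustrative assumptions only.
    """
    theta = np.zeros(p)           # regression coefficients
    P = delta * np.eye(p)         # inverse (weighted) covariance
    preds = np.full(len(y), np.nan)
    for t in range(p, len(y)):
        phi = y[t - p:t][::-1]    # regressor: most recent p observations
        preds[t] = phi @ theta    # predict before observing y[t]
        # standard RLS update with forgetting factor lam
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * (y[t] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return preds, theta

# Toy usage: noisy autoregressive sequence (illustrative only).
rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.1 * rng.standard_normal()
preds, _ = forgetting_rls_predict(y)
```

Here the forgetting factor down-weights older data in the covariance update; the paper's point is that such forgetting can also be viewed as an inductive bias that balances the regression model rather than merely re-weighting samples.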

@article{qian2025_2505.08982,
  title={Model-free Online Learning for the Kalman Filter: Forgetting Factor and Logarithmic Regret},
  author={Jiachen Qian and Yang Zheng},
  journal={arXiv preprint arXiv:2505.08982},
  year={2025}
}