Learning the Kalman Filter with Fine-Grained Sample Complexity

arXiv:2301.12624, 30 January 2023
Xiangyuan Zhang, Bin Hu, Tamer Başar
Abstract

We develop the first end-to-end sample complexity analysis of model-free policy gradient (PG) methods for discrete-time infinite-horizon Kalman filtering. Specifically, we introduce the receding-horizon policy gradient (RHPG-KF) framework and establish an $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity for RHPG-KF in learning a stabilizing filter that is $\epsilon$-close to the optimal Kalman filter. Notably, the proposed RHPG-KF framework does not require the system to be open-loop stable, nor does it assume any prior knowledge of a stabilizing filter. Our results shed light on applying model-free PG methods to control linear dynamical systems whose state measurements may be corrupted by statistical noise and other (possibly adversarial) disturbances.
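The abstract does not spell out the RHPG-KF updates, but the model-free PG idea it builds on can be illustrated with a minimal zeroth-order sketch: learn a steady-state filter gain by perturbing it and comparing sampled estimation costs, with no access to the system matrices inside the learner. Everything below, including the matrices `A` and `C`, noise levels, horizon `T`, smoothing radius `r`, and step size, is an illustrative assumption and not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state linear system (illustrative, not from the paper).
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
sigma_w, sigma_v = 0.1, 0.1  # process / measurement noise levels

def filtering_cost(L, T=200):
    """Empirical mean-squared estimation error of the predictor-form filter
    xhat_{t+1} = A @ xhat_t + L @ (y_t - C @ xhat_t) over a rollout of length T."""
    x = np.zeros(2)
    xhat = np.zeros(2)
    cost = 0.0
    for _ in range(T):
        y = C @ x + sigma_v * rng.standard_normal(1)   # noisy measurement
        xhat = A @ xhat + L @ (y - C @ xhat)           # filter update
        x = A @ x + sigma_w * rng.standard_normal(2)   # true dynamics
        cost += np.sum((x - xhat) ** 2)
    return cost / T

def zo_gradient(L, r=0.05):
    """Two-point zeroth-order gradient estimate: model-free in the sense that
    it only queries sampled costs (fresh noisy rollouts), never A or C."""
    U = rng.standard_normal(L.shape)
    U /= np.linalg.norm(U)                             # random unit direction
    delta = filtering_cost(L + r * U) - filtering_cost(L - r * U)
    return (L.size / (2.0 * r)) * delta * U

L = np.zeros((2, 1))          # filter gain to be learned
for _ in range(500):
    L -= 1e-3 * zo_gradient(L)

print("learned gain:", L.ravel(), " cost:", filtering_cost(L))
```

The two-point estimator keeps the learner model-free: it compares sampled costs at perturbed gains rather than differentiating the filtering objective, which matches the setting the abstract describes, where only noisy state measurements are available.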
