ResearchTrend.AI
arXiv:2303.15121 (v3, latest)

Learning linear dynamical systems under convex constraints

27 March 2023
Hemant Tyagi
D. Efimov
Abstract

We consider the problem of finite-time identification of linear dynamical systems from $T$ samples of a single trajectory. Recent results have predominantly focused on the setup where no structural assumption is made on the system matrix $A^* \in \mathbb{R}^{n \times n}$, and have consequently analyzed the ordinary least squares (OLS) estimator in detail. We assume prior structural information on $A^*$ is available, which can be captured in the form of a convex set $\mathcal{K}$ containing $A^*$. For the solution of the ensuing constrained least squares estimator, we derive non-asymptotic error bounds in the Frobenius norm that depend on the local size of $\mathcal{K}$ at $A^*$. To illustrate the usefulness of these results, we instantiate them for four examples, namely when (i) $A^*$ is sparse and $\mathcal{K}$ is a suitably scaled $\ell_1$ ball; (ii) $\mathcal{K}$ is a subspace; (iii) $\mathcal{K}$ consists of matrices each of which is formed by sampling a bivariate convex function on a uniform $n \times n$ grid (convex regression); (iv) $\mathcal{K}$ consists of matrices each row of which is formed by uniform sampling (with step size $1/T$) of a univariate Lipschitz function. In all these situations, we show that $A^*$ can be reliably estimated for values of $T$ much smaller than what is needed for the unconstrained setting.
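As a rough illustration of example (i), the sketch below solves the constrained least squares problem over a scaled $\ell_1$ ball via projected gradient descent. This is only a minimal numerical sketch, not the paper's analysis: the solver, the sorting-based $\ell_1$ projection, and all numeric choices ($n = 20$, $T = 40$, a diagonal $A^* = 0.9 I$ as a simple sparse stable system) are assumptions made for the demo.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of a flat vector onto {x : ||x||_1 <= radius}."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]            # magnitudes, descending
    css = np.cumsum(u)
    idx = np.arange(1, u.size + 1)
    rho = np.nonzero(u * idx > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def constrained_ls(X, Y, radius, steps=500):
    """Projected gradient descent for min_A ||Y - A X||_F^2 s.t. ||vec(A)||_1 <= radius."""
    n = X.shape[0]
    A = np.zeros((n, n))
    lr = 1.0 / (2.0 * np.linalg.norm(X @ X.T, 2) + 1e-12)  # 1/L step size
    for _ in range(steps):
        grad = 2.0 * (A @ X - Y) @ X.T
        A = project_l1_ball((A - lr * grad).ravel(), radius).reshape(n, n)
    return A

# Simulate a single trajectory x_{t+1} = A* x_t + w_t from a sparse, stable A*.
rng = np.random.default_rng(0)
n, T = 20, 40                               # T only twice n: a data-poor regime
A_star = 0.9 * np.eye(n)                    # illustrative sparse, stable choice
states = np.zeros((n, T + 1))
for t in range(T):
    states[:, t + 1] = A_star @ states[:, t] + rng.standard_normal(n)

X, Y = states[:, :-1], states[:, 1:]        # regressors and responses
radius = np.abs(A_star).sum()               # l1 ball scaled so that A* lies in K
A_hat = constrained_ls(X, Y, radius)
err = np.linalg.norm(A_hat - A_star)        # Frobenius-norm estimation error
```

The ball radius is set to $\|A^*\|_1$ so that $A^* \in \mathcal{K}$, matching the "suitably scaled" condition in the abstract; in practice this radius would itself have to be chosen or tuned.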
