Structured Linear CDEs: Maximally Expressive and Parallel-in-Time Sequence Models

23 May 2025
Benjamin Walker
Lingyi Yang
Nicola Muca Cirone
Cristopher Salvi
Terry Lyons
arXiv: https://arxiv.org/abs/2505.17761
Abstract

Structured Linear Controlled Differential Equations (SLiCEs) provide a unifying framework for sequence models with structured, input-dependent state-transition matrices that retain the maximal expressivity of dense matrices whilst being cheaper to compute. The framework encompasses existing architectures, such as input-dependent block-diagonal linear recurrent neural networks and DeltaNet's diagonal-plus-low-rank structure, as well as two novel variants based on sparsity and the Walsh–Hadamard transform. We prove that, unlike the diagonal state-transition matrices of S4 and Mamba, SLiCEs employing block-diagonal, sparse, or Walsh–Hadamard matrices match the maximal expressivity of dense matrices. Empirically, SLiCEs solve the $A_5$ state-tracking benchmark with a single layer, achieve best-in-class length generalisation on regular language tasks among parallel-in-time models, and match the state-of-the-art performance of log neural controlled differential equations on six multivariate time-series classification datasets while cutting the average time per training step by a factor of twenty.
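
As a rough illustration (not the authors' implementation), the sketch below shows how an input-dependent block-diagonal linear recurrence h_t = A_t h_{t-1} + b_t, the kind of structured state transition the abstract describes, can be evaluated parallel-in-time with an associative scan in JAX. The function name, shapes, and random inputs are all illustrative assumptions.

# Hypothetical sketch: parallel-in-time evaluation of a block-diagonal
# linear recurrence h_t = A_t @ h_{t-1} + b_t via an associative scan.
# Not the paper's code; shapes and names are illustrative.
import jax
import jax.numpy as jnp

def block_diag_scan(As, bs):
    # As: (T, n_blocks, k, k) input-dependent transitions, stored as their
    #     k x k diagonal blocks rather than dense (n_blocks*k, n_blocks*k)
    #     matrices, which is what keeps each step cheap.
    # bs: (T, n_blocks, k) input terms, split per block.
    # Returns hs: (T, n_blocks, k) hidden states, taking h_0 = 0.
    def combine(left, right):
        A_l, b_l = left
        A_r, b_r = right
        # Compose the affine maps x -> A_l x + b_l, then x -> A_r x + b_r:
        # the result is x -> (A_r A_l) x + (A_r b_l + b_r).
        A = jnp.einsum('...ij,...jk->...ik', A_r, A_l)
        b = jnp.einsum('...ij,...j->...i', A_r, b_l) + b_r
        return A, b

    # With h_0 = 0, the state at step t is the accumulated offset term.
    _, hs = jax.lax.associative_scan(combine, (As, bs))
    return hs

# Tiny usage example: T = 8 steps, 4 blocks of size 3 (state dimension 12).
As = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (8, 4, 3, 3))
bs = jax.random.normal(jax.random.PRNGKey(1), (8, 4, 3))
print(block_diag_scan(As, bs).shape)  # (8, 4, 3)

Composing two block-diagonal steps costs O(n_blocks * k^3) rather than the O((n_blocks * k)^3) a dense transition would require, which is the kind of saving the abstract alludes to; under the same assumptions, the sparse and Walsh–Hadamard variants would slot into the same scan by changing how A is parameterised and composed.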

@article{walker2025_2505.17761,
  title={Structured Linear CDEs: Maximally Expressive and Parallel-in-Time Sequence Models},
  author={Benjamin Walker and Lingyi Yang and Nicola Muca Cirone and Cristopher Salvi and Terry Lyons},
  journal={arXiv preprint arXiv:2505.17761},
  year={2025}
}