arXiv:1911.03725

Tensor Regression Using Low-rank and Sparse Tucker Decompositions

9 November 2019
Talal Ahmed
Haroon Raja
W. Bajwa
Abstract

This paper studies a tensor-structured linear regression model with a scalar response variable and tensor-structured predictors, such that the regression parameters form a tensor of order $d$ (i.e., a $d$-fold multiway array) in $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$. It focuses on the task of estimating the regression tensor from $m$ realizations of the response variable and the predictors, where $m \ll n = \prod_i n_i$. Despite the apparent ill-posedness of this problem, it can still be solved if the parameter tensor belongs to the space of sparse, low Tucker-rank tensors. Accordingly, the estimation procedure is posed as a non-convex optimization program over the space of sparse, low Tucker-rank tensors, and a tensor variant of projected gradient descent is proposed to solve the resulting non-convex problem. In addition, mathematical guarantees are provided that establish that the proposed method converges linearly to an appropriate solution under a certain set of conditions. Further, an upper bound on the sample complexity of tensor parameter estimation for the model under consideration is characterized for the special case in which the individual (scalar) predictors independently draw values from a sub-Gaussian distribution. The sample complexity bound is shown to have a polylogarithmic dependence on $\bar{n} = \max\{n_i : i \in \{1, 2, \ldots, d\}\}$ and, order-wise, it matches the bound one can obtain from a heuristic parameter-counting argument. Finally, numerical experiments demonstrate the efficacy of the proposed tensor model and estimation method on a synthetic dataset and a collection of neuroimaging datasets pertaining to attention deficit hyperactivity disorder. Specifically, the proposed method exhibits better sample complexities on both synthetic and real datasets, demonstrating the usefulness of the model and the method in settings where $n \gg m$.
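The tensor variant of projected gradient descent described in the abstract alternates a gradient step on the least-squares loss with a projection toward the set of sparse, low Tucker-rank tensors. The sketch below illustrates this idea on a toy problem with noiseless responses $y_i = \langle X_i, B \rangle$; it is a minimal illustration, not the paper's algorithm: the sequential projection (hard thresholding followed by a truncated HOSVD), the step size, and all function names are simplifying assumptions made here for demonstration.

```python
import numpy as np

def hard_threshold(T, s):
    """Projection onto s-sparse tensors: keep the s largest-magnitude entries."""
    flat = T.ravel().copy()
    if s < flat.size:
        drop = np.argpartition(np.abs(flat), -s)[:-s]  # indices of all but the top s
        flat[drop] = 0.0
    return flat.reshape(T.shape)

def tucker_truncate(T, ranks):
    """Approximate projection onto low Tucker rank via truncated HOSVD:
    project each mode-k unfolding onto its top singular subspace."""
    out = T
    for mode, r in enumerate(ranks):
        unf = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)  # mode-k unfolding
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        P = U[:, :r] @ U[:, :r].T                                 # rank-r mode projector
        out = np.moveaxis(np.tensordot(P, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

def tensor_pgd(X, y, shape, s, ranks, step=0.5, iters=200):
    """Projected gradient descent for the model y_i = <X_i, B>, where B is a
    sparse, low Tucker-rank parameter tensor (illustrative sketch)."""
    m = len(y)
    Xmat = X.reshape(m, -1)                 # vectorize each tensor predictor
    b = np.zeros(int(np.prod(shape)))
    for _ in range(iters):
        b = b - step * Xmat.T @ (Xmat @ b - y) / m        # least-squares gradient step
        B = tucker_truncate(hard_threshold(b.reshape(shape), s), ranks)
        b = B.ravel()
    return b.reshape(shape)
```

As a usage example, one can draw i.i.d. Gaussian predictors, build a sparse rank-one ground-truth tensor such as `np.einsum('i,j,k->ijk', u, v, w)` with sparse vectors `u, v, w`, and check that `tensor_pgd` recovers it to small relative error when `m` exceeds the intrinsic number of degrees of freedom, even though `m` can be far below `n`.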
