Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions

15 July 2022
Arvind V. Mahankali
David P. Woodruff
Ziyun Zhang
arXiv:2207.07417
Abstract

We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions, as well as approximations with tree tensor networks and more general tensor networks. For tensor train decomposition, we give a bicriteria $(1+\epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time, up to lower order terms, which improves over the additive error algorithm of \cite{huber2017randomized}. We also show how to convert the algorithm of \cite{huber2017randomized} into a relative error algorithm, but their algorithm necessarily has a running time of $O(qr^2 \cdot \mathrm{nnz}(A)) + n \cdot \mathrm{poly}(qk/\epsilon)$ when converted to a $(1+\epsilon)$-approximation algorithm with bicriteria rank $r$. To the best of our knowledge, our work is the first to achieve polynomial time relative error approximation for tensor train decomposition. Our key technique is a method for obtaining subspace embeddings with a number of rows polynomial in $q$ for a matrix which is the flattening of a tensor train of $q$ tensors. We extend our algorithm to tree tensor networks. In addition, we extend our algorithm to tensor networks with arbitrary graphs (which we refer to as general tensor networks), by using a result of \cite{ms08_simulating_quantum_tensor_contraction} and showing that a general tensor network of rank $k$ can be contracted to a binary tree network of rank $k^{O(\deg(G)\,\mathrm{tw}(G))}$, allowing us to reduce to the case of tree tensor networks. Finally, we give new fixed-parameter tractable algorithms for the tensor train, Tucker, and CP decompositions, which are simpler than those of \cite{swz19_tensor_low_rank} since they do not make use of polynomial system solvers. Our technique of Gaussian subspace embeddings with exactly $k$ rows (and thus exponentially small success probability) may be of independent interest.
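As an informal illustration of the "exactly $k$ rows" idea mentioned at the end of the abstract, the NumPy sketch below applies a $k \times n$ Gaussian map $S$ to a matrix $A$ and projects $A$ onto the row space of the sketched matrix $SA$. This is a minimal single-trial sketch under stated assumptions, not the paper's algorithm: the function name and scaling are illustrative, and the paper's analysis (where each trial succeeds only with exponentially small probability, so the routine would be repeated many times and the best result kept) is not reproduced here.

```python
import numpy as np

def gaussian_sketch_rank_k(A, k, rng=None):
    """One trial of a rank-k approximation of A via a Gaussian
    sketch with exactly k rows (hypothetical illustration)."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    # k x n Gaussian sketching matrix with exactly k rows.
    S = rng.standard_normal((k, n)) / np.sqrt(k)
    SA = S @ A                     # k x d sketched matrix
    # Orthonormal basis for the row space of SA.
    Q, _ = np.linalg.qr(SA.T)      # d x k, orthonormal columns
    # Project the rows of A onto that subspace.
    return (A @ Q) @ Q.T           # rank-at-most-k approximation

# Example: compare one trial's Frobenius error to the best rank-k error.
A = np.random.default_rng(0).standard_normal((200, 50))
A_k = gaussian_sketch_rank_k(A, k=5, rng=1)
print(np.linalg.norm(A - A_k, "fro"))
```

Because the sketch has only $k$ rows (rather than the usual $\mathrm{poly}(k/\epsilon)$), a single trial gives no useful guarantee; the interest lies in the analysis of repeated trials, for which see the paper itself.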
