ResearchTrend.AI
arXiv:2004.02583 (v4, latest)

Efficient Alternating Least Squares Algorithms for Truncated HOSVD of Higher-Order Tensors

6 April 2020
Chuanfu Xiao
Chao Yang
Min Li
Abstract

The truncated Tucker decomposition, also known as the truncated higher-order singular value decomposition (HOSVD), has been widely used as an efficient tool in many applications. Popular direct methods for the truncated HOSVD often suffer from the notorious intermediate data explosion and are not easy to parallelize. In this paper, we propose a class of new truncated HOSVD algorithms based on alternating least squares (ALS). The proposed ALS-based approaches eliminate the redundant computation of singular vectors of intermediate matrices and are therefore free of data explosion. The new methods are also more flexible, with an adjustable convergence tolerance, and are intrinsically parallelizable on high-performance computers. Theoretical analysis shows that the ALS iteration in the proposed algorithms is q-linearly convergent with a relatively wide convergence region. Numerical experiments on both synthetic and real-world tensor data demonstrate that the ALS-based methods can substantially reduce the total cost of the original direct methods and are highly scalable for parallel computing.
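The general ALS idea behind such methods can be illustrated with higher-order orthogonal iteration (HOOI): each factor matrix is updated in turn from the leading singular vectors of a small projected unfolding, while the other factors are held fixed. The NumPy sketch below shows this generic scheme only; it is not the authors' specific algorithms, and the names `mode_n_product` and `hooi` are our own.

```python
import numpy as np

def mode_n_product(T, M, n):
    """Mode-n product: multiply tensor T along mode n by matrix M."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def hooi(X, ranks, n_iter=20):
    """Truncated Tucker decomposition via higher-order orthogonal
    iteration (an ALS-type scheme). Returns core G and factors U."""
    N = X.ndim
    # HOSVD initialization: leading left singular vectors of each
    # mode-n unfolding of X.
    U = []
    for n in range(N):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        u = np.linalg.svd(Xn, full_matrices=False)[0]
        U.append(u[:, :ranks[n]])
    for _ in range(n_iter):
        for n in range(N):
            # Project X onto all current factors except mode n ...
            Y = X
            for m in range(N):
                if m != n:
                    Y = mode_n_product(Y, U[m].T, m)
            # ... then update U[n] from the (now small) unfolding.
            Yn = np.moveaxis(Y, n, 0).reshape(X.shape[n], -1)
            U[n] = np.linalg.svd(Yn, full_matrices=False)[0][:, :ranks[n]]
    # Core tensor G = X x_1 U1^T x_2 U2^T ... x_N UN^T.
    G = X
    for n in range(N):
        G = mode_n_product(G, U[n].T, n)
    return G, U
```

Because each update works with a tensor already compressed in every other mode, the unfoldings stay small, which is the key to avoiding the intermediate data explosion of direct approaches.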
