ResearchTrend.AI


Computational Thresholds in Multi-Modal Learning via the Spiked Matrix-Tensor Model

3 June 2025
Hugo Tabanelli
Pierre Mergny
Lenka Zdeborová
Florent Krzakala
Links: arXiv (abs) · PDF · HTML
Main: 10 pages
Figures: 11
Bibliography: 3 pages
Appendix: 14 pages
Abstract

We study the recovery of multiple high-dimensional signals from two noisy, correlated modalities: a spiked matrix and a spiked tensor sharing a common low-rank structure. This setting generalizes classical spiked matrix and tensor models, unveiling intricate interactions between inference channels and surprising algorithmic behaviors. Notably, while the spiked tensor model is typically intractable at low signal-to-noise ratios, its correlation with the matrix enables efficient recovery via Bayesian Approximate Message Passing, inducing staircase-like phase transitions reminiscent of neural network phenomena. In contrast, empirical risk minimization for joint learning fails: the tensor component obstructs effective matrix recovery, and joint optimization significantly degrades performance, highlighting the limitations of naive multi-modal learning. We show that a simple Sequential Curriculum Learning strategy (first recovering the matrix, then leveraging it to guide tensor recovery) resolves this bottleneck and achieves optimal weak recovery thresholds. This strategy, implementable with spectral methods, emphasizes the critical role of structural correlation and learning order in multi-modal high-dimensional inference.

@article{tabanelli2025_2506.02664,
  title={Computational Thresholds in Multi-Modal Learning via the Spiked Matrix-Tensor Model},
  author={Hugo Tabanelli and Pierre Mergny and Lenka Zdeborova and Florent Krzakala},
  journal={arXiv preprint arXiv:2506.02664},
  year={2025}
}