Sparse random tensors: concentration, regularization and applications

20 November 2019
Zhixin Zhou
Yizhe Zhu
arXiv:1911.09063
Abstract

We prove a non-asymptotic concentration inequality for sparse inhomogeneous random tensors under the spectral norm. For an order-$k$ inhomogeneous random tensor $T$ with sparsity $p_{\max} \geq \frac{c\log n}{n^{k-1}}$, we show that $\|T - \mathbb{E} T\| = O(\sqrt{n p_{\max}})$ with high probability. We also provide a simple way to regularize $T$ such that the $O(\sqrt{n p_{\max}})$ concentration still holds down to sparsity $p_{\max} > \frac{c}{n^{k-1}}$. Our proofs are based on the techniques of Friedman, Kahn and Szemerédi (1989) and Feige and Ofek (2005), combined with the discrepancy theory of random hypergraphs. We also show that our concentration inequality is rate-optimal in the minimax sense. We present our concentration and regularization results with three applications: (i) a randomized construction of hypergraphs of bounded degree with good expander mixing properties, (ii) concentration of the adjacency matrices of sparse random hypergraphs, and (iii) concentration of sparsified tensors under uniform sampling.
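To make the $\sqrt{n p_{\max}}$ scaling concrete, here is a minimal numerical sketch (not the paper's method): it samples an order-3 inhomogeneous Bernoulli tensor $T$ with entrywise probabilities bounded by $p_{\max}$ and compares the deviation $\|T - \mathbb{E} T\|$ against $\sqrt{n p_{\max}}$. Computing the exact tensor spectral norm is NP-hard in general, so the sketch uses alternating rank-1 power iteration, which only gives a lower bound; the function name `tensor_spectral_norm` and the parameter choices `n = 60`, `p_max = 0.05` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tensor_spectral_norm(T, iters=200, seed=0):
    # Heuristic lower bound on sup <T, x (x) y (x) z> over unit vectors,
    # via alternating rank-1 power iteration (higher-order power method).
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    x, y, z = (rng.standard_normal(n) for _ in range(3))
    x, y, z = x / np.linalg.norm(x), y / np.linalg.norm(y), z / np.linalg.norm(z)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
    return np.einsum('ijk,i,j,k->', T, x, y, z)

n, p_max = 60, 0.05  # hypothetical demo parameters
rng = np.random.default_rng(1)
P = p_max * rng.uniform(size=(n, n, n))              # inhomogeneous probabilities, all <= p_max
T = (rng.uniform(size=(n, n, n)) < P).astype(float)  # Bernoulli(P_ijk) entries
deviation = tensor_spectral_norm(T - P)
print(f"||T - ET|| (lower bound): {deviation:.3f}")
print(f"sqrt(n * p_max) scale:    {np.sqrt(n * p_max):.3f}")
```

In the sparser regime $p_{\max} > c/n^{k-1}$, the regularization step in the paper plays a role roughly analogous to the degree pruning used by Feige and Ofek for sparse random matrices: portions of the tensor with atypically many nonzero entries are zeroed out before the spectral norm is taken.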
