
Randomized and Deterministic Attention Sparsification Algorithms for Over-parameterized Feature Dimension

10 April 2023
Yichuan Deng
Sridhar Mahadevan
Zhao Song
arXiv:2304.04397 [abs] [PDF] [HTML]
Abstract

Large language models (LLMs) have shown their power in many different areas. Attention computation, as an important subroutine of LLMs, has also attracted interest from the theory community. Recently, the static computation and the dynamic maintenance of the attention matrix were studied by [Alman and Song 2023] and [Brand, Song and Zhou 2023], from both the algorithmic and the hardness perspectives. In this work, we consider the sparsification of the attention problem, under one simplifying assumption: the logit matrix is symmetric. Let $n$ denote the sentence length and $d$ the embedding dimension. Given a matrix $X \in \mathbb{R}^{n \times d}$ with $d \gg n$ and $\| X X^\top \|_{\infty} < r$ for some $r \in (0, 0.1)$, we aim to find $Y \in \mathbb{R}^{n \times m}$ (where $m \ll d$) such that
\begin{align*} \| D(Y)^{-1} \exp( Y Y^\top ) - D(X)^{-1} \exp( X X^\top ) \|_{\infty} \leq O(r). \end{align*}
We provide two results for this problem.

• Our first result is a randomized algorithm. It runs in $\widetilde{O}(\mathrm{nnz}(X) + n^{\omega})$ time, succeeds with probability $1 - \delta$, and chooses $m = O(n \log(n/\delta))$. Here $\mathrm{nnz}(X)$ denotes the number of non-zero entries in $X$, and $\omega$ denotes the exponent of matrix multiplication; currently $\omega \approx 2.373$.

• Our second result is a deterministic algorithm. It runs in $\widetilde{O}(\min\{\sum_{i \in [d]} \mathrm{nnz}(X_i)^2, \, d n^{\omega-1}\} + n^{\omega+1})$ time and chooses $m = O(n)$. Here $X_i$ denotes the $i$-th column of the matrix $X$.

Our main findings have the following implication for applied LLM tasks: any super-large feature dimension can be reduced to a size nearly linear in the sentence length.
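The guarantee above is easy to exercise numerically. Below is a minimal sketch in which a plain Gaussian random projection stands in for the dimension-reduction step and $D(X)$ is assumed to be the diagonal matrix of row sums of $\exp(X X^\top)$; both are assumptions beyond what the abstract states, and the paper's actual randomized algorithm is a faster, more refined procedure (running in $\widetilde{O}(\mathrm{nnz}(X) + n^{\omega})$ time). The sketch only illustrates the problem statement, with $m \approx n \log n$ as in the first result.

```python
import numpy as np

def attention_matrix(X):
    """D(X)^{-1} exp(X X^T) under the symmetric-logit simplification,
    where D(X) is taken to be the diagonal matrix of row sums of
    exp(X X^T) (an assumption; the abstract does not define D)."""
    A = np.exp(X @ X.T)                      # entrywise exponential, n x n
    return A / A.sum(axis=1, keepdims=True)  # row-wise softmax normalization

def reduce_features(X, m, rng):
    """Hypothetical stand-in for the sketching step: a scaled Gaussian
    projection S with E[S S^T] = I_d, so that Y Y^T approximates X X^T.
    Not the paper's algorithm, which runs in O~(nnz(X) + n^omega) time."""
    d = X.shape[1]
    S = rng.standard_normal((d, m)) / np.sqrt(m)
    return X @ S                             # Y in R^{n x m}, with m << d

rng = np.random.default_rng(0)
n, d = 32, 10_000                            # over-parameterized: d >> n
X = rng.standard_normal((n, d))
X *= np.sqrt(0.05 / np.abs(X @ X.T).max())   # enforce ||X X^T||_inf < r = 0.05

m = int(n * np.log(n))                       # m = O(n log n), as in result one
Y = reduce_features(X, m, rng)

err = np.abs(attention_matrix(Y) - attention_matrix(X)).max()
print(f"d={d} -> m={m}, entrywise attention error = {err:.4f}")  # should be O(r)
```

Because $\|X X^\top\|_{\infty} < r$ bounds every $\|x_i\|^2$ by $r$, the projection's entrywise error on $Y Y^\top$ concentrates at scale roughly $r/\sqrt{m}$, which the row-normalized softmax inherits up to $O(r)$.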
