
Solving Attention Kernel Regression Problem via Pre-conditioner

Abstract

The attention mechanism is the key to large language models, and the attention matrix is an algorithmic and computational bottleneck for such schemes. In this paper, we define two problems, motivated by designing fast algorithms for proxies of the attention matrix and solving regressions against them. Given an input matrix $A\in \mathbb{R}^{n\times d}$ with $n\gg d$ and a response vector $b$, we first consider the matrix exponential of $A^\top A$ as a proxy, and in turn design algorithms for two types of regression problems: $\min_{x\in \mathbb{R}^d}\|(A^\top A)^j x - b\|_2$ and $\min_{x\in \mathbb{R}^d}\|A(A^\top A)^j x - b\|_2$ for any positive integer $j$. Studying algorithms for these regressions is essential, as the matrix exponential can be approximated term by term via these smaller problems. The second proxy applies the exponential entrywise to the Gram matrix, denoted $\exp(AA^\top)$, and solves the regression $\min_{x\in \mathbb{R}^n}\|\exp(AA^\top)x - b\|_2$. We call this the attention kernel regression problem, as the matrix $\exp(AA^\top)$ can be viewed as a kernel function with respect to $A$. We design fast algorithms for these regression problems based on sketching and preconditioning. We hope these efforts will provide an alternative perspective on efficient approximation of attention matrices.
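The algorithmic backbone referenced above is the sketch-and-precondition paradigm for overdetermined least squares. The following is a minimal Python sketch of that paradigm, not the paper's exact construction: it assumes a dense Gaussian sketch of $A$, a QR-based preconditioner, and plain conjugate gradients on the preconditioned normal equations; the sketch size `4 * d` and the solver choice are illustrative assumptions.

```python
import numpy as np

def sketch_precondition_lstsq(A, b, sketch_size=None, tol=1e-10, max_iter=200):
    """Solve min_x ||Ax - b||_2 for tall A (n >> d) via sketch-and-precondition.

    Illustrative sketch: a dense Gaussian sketch S A is used only to build a
    preconditioner R from a QR factorization; the preconditioned normal
    equations are then solved with plain conjugate gradients.
    """
    n, d = A.shape
    m = sketch_size or 4 * d                   # oversampled sketch size (assumption)
    S = np.random.randn(m, n) / np.sqrt(m)     # dense Gaussian sketch, for illustration
    _, R = np.linalg.qr(S @ A)                 # SA = QR, R is d x d
    # Change of variables x = R^{-1} y makes A R^{-1} well conditioned w.h.p.
    def apply(y):                              # computes (A R^{-1})^T (A R^{-1}) y
        z = np.linalg.solve(R, y)
        return np.linalg.solve(R.T, A.T @ (A @ z))
    rhs = np.linalg.solve(R.T, A.T @ b)        # (A R^{-1})^T b
    y = np.zeros(d); r = rhs.copy(); p = r.copy(); rs = r @ r
    for _ in range(max_iter):                  # standard conjugate-gradient iterations
        Ap = apply(p)
        alpha = rs / (p @ Ap)
        y += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return np.linalg.solve(R, y)               # undo the change of variables
```

Under this view, the higher-order problem $\min_x\|(A^\top A)^j x - b\|_2$ can plausibly be handled by chaining $j$ such solves, one per factor of $A^\top A$; the paper's precise reductions and guarantees are in the full text.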
