Minimizing Quadratic Functions in Constant Time

Abstract

A sampling-based optimization method for quadratic functions is proposed. Our method approximately solves the following $n$-dimensional quadratic minimization problem in constant time, i.e., independent of $n$: $z^* = \min_{\mathbf{v} \in \mathbb{R}^n} \langle \mathbf{v}, A\mathbf{v} \rangle + n \langle \mathbf{v}, \mathrm{diag}(\mathbf{d})\mathbf{v} \rangle + n \langle \mathbf{b}, \mathbf{v} \rangle$, where $A \in \mathbb{R}^{n \times n}$ is a matrix and $\mathbf{d}, \mathbf{b} \in \mathbb{R}^n$ are vectors. Our theoretical analysis specifies the number of samples $k(\delta, \epsilon)$ such that the approximated solution $z$ satisfies $|z - z^*| = O(\epsilon n^2)$ with probability $1 - \delta$. The empirical performance (accuracy and runtime) of the method is confirmed by numerical experiments.
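To make the sampling idea concrete, the following is a minimal sketch, not the paper's implementation: it solves the problem exactly via the stationarity condition, and approximates it by solving the induced $k \times k$ subproblem on a uniformly sampled index set and rescaling by $(n/k)^2$. The function names `quad_min` and `approx_quad_min` are hypothetical, and the sketch assumes the symmetrized quadratic form is positive definite so the minimizer exists and is unique.

```python
import numpy as np

def quad_min(A, d, b):
    """Exactly minimize <v, A v> + n <v, diag(d) v> + n <b, v> over v in R^n.

    Assumes the symmetrized quadratic-form matrix is positive definite.
    """
    n = len(b)
    # Symmetric quadratic-form matrix M, so the objective is v^T M v + n b^T v.
    M = (A + A.T) / 2 + n * np.diag(d)
    # Stationarity condition: 2 M v + n b = 0.
    v = np.linalg.solve(2 * M, -n * b)
    return float(v @ M @ v + n * b @ v)

def approx_quad_min(A, d, b, k, rng):
    """Constant-time sketch: solve the k x k subproblem induced by a
    uniformly sampled index set S, then rescale the value by (n/k)^2."""
    n = len(b)
    S = rng.choice(n, size=k, replace=False)
    z_S = quad_min(A[np.ix_(S, S)], d[S], b[S])  # subproblem with n replaced by k
    return (n / k) ** 2 * z_S
```

Note that for $k = n$ the sampled index set is a permutation of all indices, so the rescaled subproblem value coincides with the exact minimum; the interesting regime is $k \ll n$, where the analysis in the abstract bounds the error by $O(\epsilon n^2)$ with probability $1 - \delta$.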
