
Sequential sampling for optimal weighted least squares approximations in hierarchical spaces

Abstract

We consider the problem of approximating an unknown function $u\in L^2(D,\rho)$ from its evaluations at given sampling points $x^1,\dots,x^n\in D$, where $D\subset \mathbb{R}^d$ is a general domain and $\rho$ is a probability measure. The approximation is picked in a linear space $V_m$, where $m=\dim(V_m)$, and computed by a weighted least squares method. Recent results show the advantages of picking the sampling points at random according to a well-chosen probability measure $\mu$ that depends both on $V_m$ and $\rho$. With such a random design, the weighted least squares approximation is proved to be stable with high probability, and to have accuracy comparable to that of the exact $L^2(D,\rho)$-orthogonal projection onto $V_m$, in a near-linear sampling regime $n\sim m\log m$. The present paper is motivated by the adaptive approximation context, in which one typically generates a nested sequence of spaces $(V_m)_{m\geq 1}$ with increasing dimension. Although the measure $\mu=\mu_m$ changes with $V_m$, it is possible to recycle the previously generated samples by interpreting $\mu_m$ as a mixture between $\mu_{m-1}$ and an update measure $\sigma_m$. Based on this observation, we discuss sequential sampling algorithms that maintain the stability and approximation properties uniformly over all spaces $V_m$. Our main result is that the total number of computed samples at step $m$ remains of the order $m\log m$ with high probability. Numerical experiments confirm this analysis.
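The optimal sampling measure and the mixture update described in the abstract can be sketched in a minimal one-dimensional setting: $\rho$ uniform on $[-1,1]$ with the Legendre orthonormal basis. This is an illustrative sketch under those assumptions, not the paper's implementation; the function names and the rejection-sampling envelopes (based on $|P_j(x)|\leq 1$) are our own choices.

```python
import numpy as np

def onb(x, m):
    """First m L2(rho)-orthonormal Legendre polynomials, rho uniform on [-1,1]."""
    V = np.polynomial.legendre.legvander(np.atleast_1d(x), m - 1)
    return V * np.sqrt(2 * np.arange(m) + 1)  # L_j = sqrt(2j+1) P_j

def sample_mu(m, n, rng):
    """Draw n i.i.d. samples from mu_m, whose density w.r.t. rho is
    k_m(x)/m, with k_m(x) = sum_j |L_j(x)|^2 the inverse Christoffel
    function. Rejection sampling with envelope k_m(x)/m <= m."""
    out = []
    while len(out) < n:
        x = rng.uniform(-1.0, 1.0)
        k = float(np.sum(onb(x, m) ** 2))
        if rng.uniform() < k / m**2:
            out.append(x)
    return np.array(out)

def weighted_lsq(f, m, x):
    """Weighted least squares in V_m with weights w(x) = m / k_m(x),
    so that the weighted design is stable w.r.t. rho in expectation."""
    G = onb(x, m)
    w = m / np.sum(G**2, axis=1)
    A = (G * w[:, None]).T @ G / len(x)
    b = (G * w[:, None]).T @ f(x) / len(x)
    return np.linalg.solve(A, b)  # coefficients in the orthonormal basis

def grow_samples(x_old, m, n_new, rng):
    """Sequential step: draw n_new samples from mu_m, recycling samples
    from mu_{m-1} via the mixture mu_m = ((m-1)/m) mu_{m-1} + (1/m) sigma_m,
    where sigma_m has density |L_{m-1}(x)|^2 <= 2m-1 w.r.t. rho."""
    pool, out = list(x_old), []
    while len(out) < n_new:
        if rng.uniform() < (m - 1) / m:
            # mu_{m-1} branch: reuse an old sample, or draw a fresh one
            out.append(pool.pop() if pool else sample_mu(m - 1, 1, rng)[0])
        else:
            # sigma_m branch: rejection sampling from |L_{m-1}|^2 rho
            while True:
                x = rng.uniform(-1.0, 1.0)
                if rng.uniform() < onb(x, m)[0, -1] ** 2 / (2 * m - 1):
                    out.append(x)
                    break
    return np.array(out)
```

For instance, fitting $f(x)=x^2$ with $m=3$ recovers it exactly (it lies in $V_3$), and `grow_samples` then extends the same sample set to serve $V_4$ without discarding the work already done.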
