
Boosted optimal weighted least-squares

15 December 2019
Cécile Haberstich
A. Nouy
G. Perrin
arXiv:1912.07075 (abs) · PDF · HTML
Abstract

This paper is concerned with the approximation of a function $u$ in a given approximation space $V_m$ of dimension $m$ from evaluations of the function at $n$ suitably chosen points. The aim is to construct an approximation of $u$ in $V_m$ which yields an error close to the best approximation error in $V_m$, using as few evaluations as possible. Classical least-squares regression, which defines a projection in $V_m$ from $n$ random points, usually requires a large $n$ to guarantee a stable approximation and an error close to the best approximation error. This is a major drawback for applications where $u$ is expensive to evaluate. One remedy is to use a weighted least-squares projection using $n$ samples drawn from a properly selected distribution. In this paper, we introduce a boosted weighted least-squares method which almost surely ensures the stability of the weighted least-squares projection with a sample size close to the interpolation regime $n = m$. It consists of sampling according to a measure associated with the optimization of a stability criterion over a collection of independent $n$-samples, and resampling according to this measure until a stability condition is satisfied. A greedy method is then proposed to remove points from the obtained sample. Quasi-optimality properties are obtained for the weighted least-squares projection, with or without the greedy procedure. The proposed method is validated on numerical examples and compared to state-of-the-art interpolation and weighted least-squares methods.
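
To make the sampling-and-resampling idea concrete, here is a minimal Python sketch, not the authors' implementation: it assumes $V_m$ is spanned by Legendre polynomials on $[-1, 1]$ with the uniform reference measure, uses rejection sampling from the optimal measure $d\rho = (m^{-1} \sum_j \varphi_j^2)\, d\mu$, and quantifies stability by the spectral norm of $G - I$, where $G$ is the weighted empirical Gram matrix. Helper names such as `boosted_sample` and `stability_gap` are hypothetical, and the greedy point-removal step is omitted.

```python
import numpy as np
from numpy.polynomial import legendre

def basis(x, m):
    # Orthonormal Legendre basis w.r.t. the uniform measure on [-1, 1]:
    # phi_j = sqrt(2j + 1) * P_j, j = 0..m-1.
    return legendre.legvander(x, m - 1) * np.sqrt(2 * np.arange(m) + 1)

def stability_gap(x, m):
    # Spectral norm ||G - I|| of the weighted empirical Gram matrix
    # G = (1/n) sum_i w(x_i) phi(x_i) phi(x_i)^T, with optimal weights
    # w(x) = m / sum_j phi_j(x)^2.
    Phi = basis(x, m)
    w = m / np.sum(Phi**2, axis=1)
    G = (Phi * w[:, None]).T @ Phi / len(x)
    return np.linalg.norm(G - np.eye(m), 2)

def draw_optimal_sample(n, m, rng):
    # Rejection sampling from the optimal measure; on [-1, 1] the acceptance
    # probability sum_j phi_j(x)^2 / m^2 is at most 1 under a uniform proposal.
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1.0, 1.0, size=4 * n)
        accept = rng.uniform(size=x.size) < np.sum(basis(x, m)**2, axis=1) / m**2
        pts.extend(x[accept])
    return np.array(pts[:n])

def boosted_sample(n, m, rng, n_candidates=100, delta=0.5):
    # "Boosting": keep the best of n_candidates independent n-samples and
    # resample until the stability condition ||G - I|| <= delta is met.
    while True:
        cands = [draw_optimal_sample(n, m, rng) for _ in range(n_candidates)]
        gaps = [stability_gap(c, m) for c in cands]
        best = int(np.argmin(gaps))
        if gaps[best] <= delta:
            return cands[best]

def weighted_ls_fit(u, x, m):
    # Weighted least-squares projection onto V_m from evaluations u(x_i),
    # solved as an ordinary least-squares problem with sqrt(w)-scaled rows.
    Phi = basis(x, m)
    sw = np.sqrt(m / np.sum(Phi**2, axis=1))
    coef, *_ = np.linalg.lstsq(Phi * sw[:, None], u(x) * sw, rcond=None)
    return coef

rng = np.random.default_rng(0)
m = 10
x = boosted_sample(n=int(1.5 * m), m=m, rng=rng)   # sample size close to n = m
c = weighted_ls_fit(np.exp, x, m)                  # approximate u(x) = exp(x) in V_m
```

With these choices, the boosting step simply keeps the best of `n_candidates` independent candidate samples and redraws until the stability condition holds, after which the weighted least-squares solve reduces to a standard scaled `lstsq`.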
