An Algorithm for Unconstrained Quadratically Penalized Convex Optimization

18 November 2008
S. P. Ellis
arXiv:0811.2843 · abs · PDF · HTML
Abstract

A descent algorithm, "Quasi-Quadratic Minimization with Memory" (QQMM), is proposed for unconstrained minimization of the sum, F, of a non-negative convex function, V, and a quadratic form. Such problems arise in regularized estimation in machine learning and statistics. In addition to values of F, QQMM requires the (sub)gradient of V. Two features of QQMM help keep the number of objective-function evaluations low. First, QQMM provides good control over stopping the iterative search. This makes QQMM well suited to statistical problems, where the objective function is based on random data and stopping early is therefore sensible. Second, QQMM uses a complex method for determining trial minimizers of F. After a description of the problem and the algorithm, a simulation study comparing QQMM to the popular BFGS optimization algorithm is presented. The simulation study and other experiments suggest that QQMM is generally substantially faster than BFGS in the problem domain for which it was designed. A QQMM-BFGS hybrid is also generally substantially faster than BFGS, and it does better than QQMM in the cases where QQMM is very slow.
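To make the problem class concrete, below is a minimal sketch in Python of an objective of the form the abstract describes: F(beta) = V(beta) + a quadratic form, with V non-negative and convex. The synthetic data, the logistic choice of V, and the penalty weight lam are illustrative assumptions, not the paper's simulation setup. QQMM itself is not reproduced here; SciPy's off-the-shelf BFGS routine, the comparison method named in the abstract, stands in as the minimizer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical instance of the problem class: minimize
#   F(beta) = V(beta) + lam * ||beta||^2
# where V is a non-negative convex loss. Here V is a mean logistic
# loss on synthetic data (an assumption for illustration only).
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = np.sign(X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n))
lam = 1.0  # quadratic-penalty weight (illustrative value)

def V(beta):
    # Non-negative convex part: mean logistic loss, computed stably.
    return np.mean(np.logaddexp(0.0, -y * (X @ beta)))

def V_grad(beta):
    # Gradient of V; for a non-smooth V, QQMM would instead take a
    # subgradient, which the abstract says it requires alongside F.
    s = expit(-(y * (X @ beta)))  # sigmoid of the negative margin
    return -(X * (y * s)[:, None]).mean(axis=0)

def F(beta):
    # Full objective: convex loss plus quadratic form.
    return V(beta) + lam * beta @ beta

def F_grad(beta):
    return V_grad(beta) + 2.0 * lam * beta

# BFGS baseline, the comparison method named in the abstract.
res = minimize(F, np.zeros(p), jac=F_grad, method="BFGS")
print(f"F* = {res.fun:.4f}, objective evaluations: {res.nfev}")
```

Per the abstract, QQMM would be expected to reach a comparable minimum of F on problems of this shape with substantially fewer objective evaluations, the quantity res.nfev reports for the BFGS baseline.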
