Hybrid Least Squares/Gradient Descent Methods for DeepONets

21 August 2025
Jun Choi
Chang-Ock Lee
Minam Moon
arXiv: 2508.15394 (abs / PDF / HTML)
Main text: 22 pages, 10 figures, 2 tables; bibliography: 3 pages
Abstract

We propose an efficient hybrid least squares/gradient descent method to accelerate DeepONet training. Since the output of a DeepONet is linear with respect to the last-layer parameters of the branch network, these parameters can be optimized by a least squares (LS) solve, while the remaining hidden-layer parameters are updated by gradient descent. However, building the LS system over all possible combinations of branch and trunk inputs yields a prohibitively large linear problem that is infeasible to solve directly. To address this issue, our method decomposes the large LS system into two smaller, more manageable subproblems, one for the branch network and one for the trunk network, and solves them separately. The method generalizes to a broader class of $L^2$ losses with a regularization term for the last-layer parameters, including unsupervised learning with a physics-informed loss.
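
To make the idea concrete, the following is a minimal toy sketch (not the authors' implementation) of a hybrid LS/GD training loop for a scalar-output DeepONet in PyTorch. It alternates a regularized least-squares solve for the branch network's last-layer weights (the output is linear in them) with a gradient step for all remaining parameters. For simplicity it forms the full Kronecker-product LS system over all branch/trunk input pairs, which is only feasible at toy scale; the paper's decomposition into separate branch and trunk subproblems avoids exactly this. All sizes, architectures, and the synthetic data are illustrative assumptions.

# Toy sketch of the hybrid LS/GD idea (not the authors' code).
# Assumptions: PyTorch, scalar-output DeepONet, full-batch training, and a direct
# Kronecker-product LS solve for the branch last layer (the paper instead splits
# this system into branch and trunk subproblems to keep it tractable).
import torch
import torch.nn as nn

torch.manual_seed(0)
m, h, p, n_u, n_y = 32, 64, 16, 20, 25      # sensors, hidden width, basis size, data sizes

branch_hidden = nn.Sequential(nn.Linear(m, h), nn.Tanh(), nn.Linear(h, h), nn.Tanh())
branch_last   = nn.Linear(h, p, bias=False)     # output is linear in this layer's weights
trunk         = nn.Sequential(nn.Linear(1, h), nn.Tanh(), nn.Linear(h, p), nn.Tanh())

U = torch.randn(n_u, m)                          # sampled input functions (synthetic)
Y = torch.linspace(0.0, 1.0, n_y).unsqueeze(-1)  # query locations
S = torch.randn(n_u, n_y)                        # target values G(u)(y) (synthetic)

opt = torch.optim.Adam(list(branch_hidden.parameters()) + list(trunk.parameters()), lr=1e-3)
lam = 1e-6                                       # Tikhonov regularization for the LS step

for step in range(200):
    # LS step: optimal last-layer weights W given frozen branch-hidden and trunk features
    with torch.no_grad():
        Phi = branch_hidden(U)                   # (n_u, h)
        T = trunk(Y)                             # (n_y, p)
        # Model: S ~ Phi W^T T^T, i.e. vec(S) = (T kron Phi) vec(W^T) (column-major vec)
        A = torch.kron(T, Phi)                   # (n_u*n_y, h*p): prohibitively large at real scale
        s = S.t().reshape(-1, 1)
        G = A.t() @ A + lam * torch.eye(A.shape[1])
        w = torch.linalg.solve(G, A.t() @ s)     # regularized normal equations
        branch_last.weight.copy_(w.reshape(p, h))

    # GD step: one Adam update for all remaining (hidden-layer) parameters
    opt.zero_grad()
    pred = branch_last(branch_hidden(U)) @ trunk(Y).t()   # (n_u, n_y)
    loss = ((pred - S) ** 2).mean()
    loss.backward()
    opt.step()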
