Error Estimates for the Variational Training of Neural Networks with Boundary Penalty

Mathematical and Scientific Machine Learning (MSML), 2021
Abstract

We establish estimates on the error made by the Deep Ritz Method for elliptic problems on the space $H^1(\Omega)$ with different boundary conditions. For Dirichlet boundary conditions, we estimate the error when the boundary values are approximately enforced through the boundary penalty method. Our results apply to arbitrary and in general nonlinear classes $V\subseteq H^1(\Omega)$ of ansatz functions and estimate the error in dependence of the optimization accuracy, the approximation capabilities of the ansatz class and, in the case of Dirichlet boundary values, the penalization strength $\lambda$. For non-essential boundary conditions, the error of the Ritz method decays with the same rate as the approximation rate of the ansatz classes. For essential boundary conditions, given an approximation rate of $r$ in $H^1(\Omega)$ and an approximation rate of $s$ in $L^2(\partial\Omega)$ of the ansatz classes, the optimal decay rate of the estimated error is $\min(s/2, r)$ and is achieved by choosing $\lambda_n\sim n^{s}$. We discuss the implications for ansatz classes given by ReLU networks and the relation to existing estimates for finite element functions.
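As a concrete illustration of the setting (a standard model problem, not taken verbatim from the paper, which treats general elliptic problems): for the Poisson equation $-\Delta u = f$ with homogeneous Dirichlet data, the boundary penalty method replaces the exact enforcement of $u|_{\partial\Omega} = 0$ by minimizing a penalized energy over the ansatz class $V$:

```latex
% Penalized Dirichlet energy for the model problem
% -\Delta u = f in \Omega, u = 0 on \partial\Omega (illustrative sketch)
E_\lambda(v)
  \;=\; \int_\Omega \Big( \tfrac{1}{2}\,\lvert \nabla v \rvert^2 - f\,v \Big)\,\mathrm{d}x
  \;+\; \frac{\lambda}{2} \int_{\partial\Omega} v^2 \,\mathrm{d}s,
\qquad v \in V \subseteq H^1(\Omega).
```

The boundary integral penalizes violations of the Dirichlet condition with strength $\lambda$, which is why the error estimates above depend on $\lambda$ in addition to the optimization accuracy and the approximation power of $V$.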
