arXiv:2209.05844
Quasi-optimal hp-finite element refinements towards singularities via deep neural network prediction

13 September 2022
Tomasz Sluzalec
R. Grzeszczuk
Sergio Rojas
W. Dzwinel
Maciej Paszyński
Abstract

We show how to construct a deep neural network (DNN) expert that predicts quasi-optimal hp-refinements for a given computational problem. The main idea is to train the DNN expert while executing the self-adaptive hp-finite element method (hp-FEM) algorithm, and then use it to predict further hp-refinements. For training, we use a two-grid paradigm self-adaptive hp-FEM algorithm, which employs a fine mesh to provide the optimal hp-refinements for coarse mesh elements. We aim to construct a DNN expert that identifies quasi-optimal hp-refinements of the coarse mesh elements. During the training phase, we use a direct solver to obtain the fine-mesh solution that guides the optimal refinements over the coarse mesh elements. After training, we turn off the self-adaptive hp-FEM algorithm and continue with the quasi-optimal refinements proposed by the trained DNN expert. We test our method on the three-dimensional Fichera and two-dimensional L-shaped domain problems. We verify the convergence of the numerical accuracy with respect to the mesh size, and we show that the exponential convergence delivered by the self-adaptive hp-FEM is preserved if we continue the refinements with a properly trained DNN expert. Thus, in this paper we show that the self-adaptive hp-FEM can teach the DNN expert the locations of the singularities, after which the selection of quasi-optimal hp-refinements can continue while preserving the exponential convergence of the method.
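The overall scheme described in the abstract — classify each coarse-mesh element into an hp-refinement choice from per-element features, with training labels supplied by the fine-mesh solve — can be sketched as a small classifier. The following is a minimal NumPy sketch under stated assumptions: the feature encoding (element size, order, error estimate), the four refinement classes, the network size, and the synthetic labels are all illustrative choices, not the paper's actual encoding, architecture, or data.

```python
import numpy as np

# Hedged sketch: a tiny MLP "expert" mapping assumed per-element features of
# the coarse mesh to one of a few hp-refinement choices. In the paper, the
# training labels come from the fine-mesh solve of the two-grid self-adaptive
# hp-FEM; here they are replaced by a synthetic toy rule.

rng = np.random.default_rng(0)

N_FEATURES = 3   # assumed encoding: [log h, polynomial order p, log error]
N_CLASSES = 4    # e.g. {keep, raise p, split h, split h and raise p} (assumed)
HIDDEN = 16

def init_params():
    return {
        "W1": rng.normal(0.0, 0.5, (N_FEATURES, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0.0, 0.5, (HIDDEN, N_CLASSES)),
        "b2": np.zeros(N_CLASSES),
    }

def forward(params, X):
    """One hidden tanh layer, then a numerically stable softmax."""
    h = np.tanh(X @ params["W1"] + params["b1"])
    logits = h @ params["W2"] + params["b2"]
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return h, probs

def train_step(params, X, y, lr=0.1):
    """One full-batch gradient-descent step on mean cross-entropy."""
    h, probs = forward(params, X)
    n = len(X)
    d_logits = probs.copy()
    d_logits[np.arange(n), y] -= 1.0   # gradient of softmax cross-entropy
    d_logits /= n
    dW2 = h.T @ d_logits
    db2 = d_logits.sum(axis=0)
    dh = (d_logits @ params["W2"].T) * (1.0 - h**2)   # tanh derivative
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    params["W1"] -= lr * dW1; params["b1"] -= lr * db1
    params["W2"] -= lr * dW2; params["b2"] -= lr * db2
    return -np.log(probs[np.arange(n), y] + 1e-12).mean()

# Synthetic training set standing in for (coarse element features, optimal
# refinement decided on the fine mesh). The labeling rule is a toy stand-in.
X = rng.normal(size=(512, N_FEATURES))
y = (X[:, 2] > 0).astype(int) + 2 * (X[:, 0] > 0).astype(int)

params = init_params()
for _ in range(300):
    loss = train_step(params, X, y)

# After training, the expert replaces the expensive fine-mesh solve: the
# predicted class for each coarse element is its quasi-optimal refinement.
_, probs = forward(params, X)
accuracy = (probs.argmax(axis=1) == y).mean()
```

In the paper's setting the prediction step is the payoff: once trained, the DNN expert drives further refinements without running the two-grid self-adaptive loop, which is what allows the exponential convergence to be continued cheaply.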
