Faster Uncertainty Quantification for Inverse Problems with Conditional Normalizing Flows

15 July 2020
Ali Siahkoohi
G. Rizzuti
Philipp A. Witte
Felix J. Herrmann
AI4CE
ArXiv (abs) · PDF · HTML
Abstract

In inverse problems, we often have access to data consisting of paired samples $(x,y)\sim p_{X,Y}(x,y)$ where $y$ are partial observations of a physical system, and $x$ represents the unknowns of the problem. Under these circumstances, we can employ supervised training to learn a solution $x$ and its uncertainty from the observations $y$. We refer to this problem as the "supervised" case. However, the data $y\sim p_{Y}(y)$ collected at one point could be distributed differently than observations $y'\sim p_{Y'}(y')$, relevant for a current set of problems. In the context of Bayesian inference, we propose a two-step scheme, which makes use of normalizing flows and joint data to train a conditional generator $q_{\theta}(x|y)$ to approximate the target posterior density $p_{X|Y}(x|y)$. Additionally, this preliminary phase provides a density function $q_{\theta}(x|y)$, which can be recast as a prior for the "unsupervised" problem, e.g. when only the observations $y'\sim p_{Y'}(y')$, a likelihood model $y'|x$, and a prior on $x'$ are known. We then train another invertible generator with output density $q'_{\phi}(x|y')$ specifically for $y'$, allowing us to sample from the posterior $p_{X|Y'}(x|y')$. We present some synthetic results that demonstrate considerable training speedup when reusing the pretrained network $q_{\theta}(x|y')$ as a warm start or preconditioning for approximating $p_{X|Y'}(x|y')$, instead of learning from scratch. This training modality can be interpreted as an instance of transfer learning. This result is particularly relevant for large-scale inverse problems that employ expensive numerical simulations.
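The abstract does not spell out the training losses. In the standard conditional normalizing-flow formulation (an assumption here; the paper's exact objectives may differ in detail), the two steps correspond to

```latex
% Step 1 ("supervised"): amortized maximum likelihood over joint samples
\min_{\theta}\; \mathbb{E}_{(x,y)\sim p_{X,Y}}\bigl[-\log q_{\theta}(x\mid y)\bigr],

% Step 2 ("unsupervised", warm-started at the pretrained \theta):
% reverse KL for a fixed observation y', up to the constant \log p_{Y'}(y')
\min_{\phi}\; \mathbb{E}_{x\sim q'_{\phi}(\cdot\mid y')}
  \bigl[\log q'_{\phi}(x\mid y') - \log p_{Y'\mid X}(y'\mid x) - \log p_{X}(x)\bigr].
```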

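As a concrete illustration of the two-step scheme, here is a minimal PyTorch sketch (not the authors' implementation): a small conditional affine-coupling flow is pretrained on paired samples by maximizing $\log q_{\theta}(x|y)$, and its weights are then reused as a warm start when fitting the posterior for a single new observation $y'$ with the reverse-KL objective above. The toy linear forward operator `A`, the Gaussian noise level `sigma`, the network sizes, and the iteration counts are hypothetical stand-ins, not the paper's experimental setup.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 4                        # hypothetical size of the unknown x (and of y here)
A = torch.eye(dim)             # hypothetical linear forward operator, y = A x + noise
sigma = 0.1                    # hypothetical observation-noise standard deviation

class ConditionalCoupling(nn.Module):
    """Affine coupling layer whose scale and shift are conditioned on y."""
    def __init__(self, dim, flip):
        super().__init__()
        self.flip, self.half = flip, dim // 2
        self.net = nn.Sequential(nn.Linear(self.half + dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * (dim - self.half)))

    def _split(self, v):
        a, b = v[:, :self.half], v[:, self.half:]
        return (b, a) if self.flip else (a, b)

    def _merge(self, keep, new):
        return torch.cat([new, keep], 1) if self.flip else torch.cat([keep, new], 1)

    def forward(self, x, y):                       # normalizing direction x -> z
        x1, x2 = self._split(x)
        s, t = self.net(torch.cat([x1, y], 1)).chunk(2, 1)
        s = torch.tanh(s)                          # keep scales bounded
        return self._merge(x1, x2 * torch.exp(s) + t), s.sum(1)

    def inverse(self, z, y):                       # generative direction z -> x
        z1, z2 = self._split(z)
        s, t = self.net(torch.cat([z1, y], 1)).chunk(2, 1)
        s = torch.tanh(s)
        return self._merge(z1, (z2 - t) * torch.exp(-s)), -s.sum(1)

class ConditionalFlow(nn.Module):
    """Stack of conditional couplings with a standard-normal base density."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(
            [ConditionalCoupling(dim, flip=(i % 2 == 1)) for i in range(n_layers)])

    def log_prob(self, x, y):                      # log q(x | y) at given x
        logdet = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x, y)
            logdet = logdet + ld
        log_base = -0.5 * (x ** 2).sum(1) - 0.5 * self.dim * math.log(2 * math.pi)
        return log_base + logdet

    def sample(self, y, n):                        # draw x ~ q(x | y) with its log-density
        z = torch.randn(n, self.dim)
        log_base = -0.5 * (z ** 2).sum(1) - 0.5 * self.dim * math.log(2 * math.pi)
        logdet = torch.zeros(n)
        for layer in reversed(self.layers):
            z, ld = layer.inverse(z, y.expand(n, -1))
            logdet = logdet + ld
        return z, log_base - logdet

def make_pairs(n):                                 # hypothetical synthetic joint samples
    x = torch.randn(n, dim)
    return x, x @ A.T + sigma * torch.randn(n, dim)

# Step 1 ("supervised"): amortized pretraining of q_theta(x | y) on joint data.
flow = ConditionalFlow(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(2000):
    x, y = make_pairs(256)
    loss = -flow.log_prob(x, y).mean()             # maximize log q_theta(x | y)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2 ("unsupervised"): a single new observation y'; reuse the pretrained
# weights as a warm start and minimize the reverse-KL variational objective.
x_true = torch.randn(1, dim)
y_new = x_true @ A.T + sigma * torch.randn(1, dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-4)
for _ in range(500):
    x, log_q = flow.sample(y_new, 128)
    log_lik = -0.5 * ((y_new - x @ A.T) ** 2).sum(1) / sigma ** 2
    log_prior = -0.5 * (x ** 2).sum(1)             # hypothetical standard-normal prior
    loss = (log_q - log_lik - log_prior).mean()    # reverse KL up to a constant
    opt.zero_grad(); loss.backward(); opt.step()

samples, _ = flow.sample(y_new, 1000)              # approximate posterior samples for UQ
print("posterior mean:", samples.mean(0))
print("posterior std: ", samples.std(0))
```

Because the coupling layers are invertible in closed form, the same network supports both the density evaluation needed in step 1 and the sampling needed in step 2, which is what makes the warm start (transfer learning) straightforward.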