LatticeVision: Image to Image Networks for Modeling Non-Stationary Spatial Data

14 May 2025
Antony Sikorski
Michael I. Ivanitskiy
Nathan Lenssen
Douglas Nychka
Daniel McKenzie
Abstract

In many scientific and industrial applications, we are given a handful of instances (a 'small ensemble') of a spatially distributed quantity (a 'field') but would like to acquire many more. For example, a large ensemble of global temperature sensitivity fields from a climate model can help farmers, insurers, and governments plan appropriately. When acquiring more data is prohibitively expensive -- as is the case with climate models -- statistical emulation offers an efficient alternative for simulating synthetic yet realistic fields. However, parameter inference using maximum likelihood estimation (MLE) is computationally prohibitive, especially for large, non-stationary fields. Thus, many recent works train neural networks to estimate parameters given spatial fields as input, sidestepping MLE completely. In this work, we focus on a popular class of parametric, spatially autoregressive (SAR) models. We make a simple yet impactful observation: because the SAR parameters can be arranged on a regular grid, both inputs (spatial fields) and outputs (model parameters) can be viewed as images. Using this insight, we demonstrate that image-to-image (I2I) networks enable faster and more accurate parameter estimation for a class of non-stationary SAR models with unprecedented complexity.
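
To make the central observation concrete, below is a minimal sketch of the image-to-image idea: a small fully convolutional network that maps a spatial field, treated as an image, to a same-sized grid of SAR parameters, also treated as an image. The architecture, channel counts, and the random synthetic data are illustrative assumptions only; the paper's actual network and SAR parameterization are not reproduced here. The sketch uses PyTorch.

    # Sketch of the I2I idea from the abstract: spatial field in, parameter
    # image out. Everything below (architecture, sizes, toy data) is an
    # illustrative assumption, not the paper's actual setup.
    import torch
    import torch.nn as nn

    class FieldToParams(nn.Module):
        """Map a spatial field image to a per-gridpoint parameter image."""
        def __init__(self, in_channels=1, out_channels=1, width=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(width, width, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(width, out_channels, kernel_size=3, padding=1),
            )

        def forward(self, x):
            # x: (batch, in_channels, H, W) spatial fields
            # returns: (batch, out_channels, H, W) estimated parameter fields
            return self.net(x)

    # Toy training loop on synthetic data (a hypothetical stand-in for
    # fields simulated from a known non-stationary SAR model).
    model = FieldToParams()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    fields = torch.randn(8, 1, 64, 64)  # pretend spatial fields
    params = torch.randn(8, 1, 64, 64)  # pretend "true" parameter images
    for step in range(100):
        loss = nn.functional.mse_loss(model(fields), params)
        opt.zero_grad()
        loss.backward()
        opt.step()

Because the network is fully convolutional, the same trained weights apply to fields on lattices of any size, which is one reason the image-to-image framing is attractive for gridded spatial data.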

@article{sikorski2025_2505.09803,
  title={LatticeVision: Image to Image Networks for Modeling Non-Stationary Spatial Data},
  author={Antony Sikorski and Michael Ivanitskiy and Nathan Lenssen and Douglas Nychka and Daniel McKenzie},
  journal={arXiv preprint arXiv:2505.09803},
  year={2025}
}