
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks

International Conference on Learning Representations (ICLR), 2021
24 September 2020
Keyulu Xu
Mozhi Zhang
Jingling Li
Simon S. Du
Ken-ichi Kawarabayashi
Stefanie Jegelka
arXiv: 2009.11848 (abs · PDF · HTML)
Abstract

We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while multilayer perceptrons (MLPs) do not extrapolate well in simple tasks, Graph Neural Networks (GNNs), structured networks with MLP modules, have had some success in more complex tasks. We provide a theoretical explanation and identify conditions under which MLPs and GNNs extrapolate well. We start by showing that ReLU MLPs trained by gradient descent converge quickly to linear functions along any direction from the origin, which suggests that ReLU MLPs cannot extrapolate well in most non-linear tasks. On the other hand, ReLU MLPs can provably converge to a linear target function when the training distribution is "diverse" enough. These observations lead to a hypothesis: GNNs can extrapolate well in dynamic programming (DP) tasks if we encode appropriate non-linearity in the architecture and input representation. We provide theoretical and empirical support for this hypothesis. Our theory explains previous extrapolation successes and suggests their limitations: successful extrapolation relies on incorporating task-specific non-linearity, which often requires domain knowledge or extensive model search.
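The linear-extrapolation claim is easy to observe empirically. The sketch below is not the authors' code; the architecture, seed, and hyperparameters are illustrative assumptions. It trains a small ReLU MLP on y = x² restricted to [-1, 1], then probes finite-difference slopes well outside that interval: the fitted function's slope flattens to a near-constant value, while the true target's slope keeps growing.

```python
# Illustrative sketch (assumed setup, not the paper's code): a ReLU MLP
# trained on a non-linear target becomes near-linear outside the training support.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target: y = x^2, observed only on [-1, 1].
x_train = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y_train = x_train ** 2

model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_train), y_train)
    loss.backward()
    opt.step()

# Outside [-1, 1] the fitted function should look affine along each direction:
# finite-difference slopes at x = 3, 4, 5 stay nearly constant, even though
# the slope of the true target x^2 keeps growing (2x).
with torch.no_grad():
    for x0 in (3.0, 4.0, 5.0):
        ys = model(torch.tensor([[x0], [x0 + 0.1]]))
        slope = (ys[1] - ys[0]) / 0.1
        print(f"slope near x={x0}: {slope.item():.3f} (target slope: {2 * x0:.1f})")
```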
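For the GNN half of the hypothesis, the relevant architectural non-linearity in shortest-path-style DP tasks is min aggregation, which mirrors the Bellman-Ford update d[v] = min_u (d[u] + w(u, v)). Below is a minimal, hypothetical message-passing layer along those lines; the class name, tensor shapes, and MLP sizes are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a DP-aligned GNN layer: messages are produced by an
# MLP module and aggregated with min rather than sum, matching the
# Bellman-Ford recurrence d[v] = min_u (d[u] + w(u, v)).
import torch
import torch.nn as nn

class MinAggregationLayer(nn.Module):
    """One round of min-aggregation message passing over a dense graph."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # The MLP processes (sender state, edge weight) into a message.
        self.message_mlp = nn.Sequential(
            nn.Linear(hidden_dim + 1, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        # h: (n, d) node states; w: (n, n) edge weights.
        n, d = h.shape
        senders = h.unsqueeze(1).expand(n, n, d)            # h[u] for each pair (u, v)
        msgs = self.message_mlp(torch.cat([senders, w.unsqueeze(-1)], dim=-1))
        # Min over senders u: the non-smooth, DP-aligned aggregation.
        return msgs.min(dim=0).values
```

Stacking k such layers matches k iterations of Bellman-Ford; replacing the min with a sum gives a network that can still fit the training graphs but, per the paper's experiments, extrapolates poorly to larger graphs or edge weights.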
