arXiv:1901.05031

Analysis and algorithms for ℓ_p-based semi-supervised learning on graphs

15 January 2019
Mauricio Flores
Jeff Calder
Gilad Lerman
Abstract

This paper addresses theory and applications of ℓ_p-based Laplacian regularization in semi-supervised learning. The graph p-Laplacian for p > 2 has recently been proposed as a replacement for the standard (p = 2) graph Laplacian in semi-supervised learning problems with very few labels, where Laplacian learning is degenerate. In the first part of the paper we prove new discrete-to-continuum convergence results for p-Laplace problems on k-nearest neighbor (k-NN) graphs, which are more commonly used in practice than random geometric graphs. Our analysis shows that, on k-NN graphs, the p-Laplacian retains information about the data distribution as p → ∞, and that Lipschitz learning (p = ∞) is sensitive to the data distribution. This contrasts with random geometric graphs, where the p-Laplacian forgets the data distribution as p → ∞. We also present a general framework for proving discrete-to-continuum convergence results in graph-based learning that requires only pointwise consistency and monotonicity. In the second part of the paper, we develop fast algorithms for solving the variational and game-theoretic p-Laplace equations on weighted graphs for p > 2. We propose several efficient and scalable algorithms for both formulations, and report numerical results on synthetic data indicating their convergence properties. Finally, we conduct extensive numerical experiments on the MNIST, FashionMNIST, and EMNIST datasets that illustrate the effectiveness of the p-Laplacian formulation for semi-supervised learning with few labels. In particular, we find that Lipschitz learning (p = ∞) performs well with very few labels on k-NN graphs, experimentally validating our theoretical finding that Lipschitz learning retains information about the data distribution (the unlabeled data) on k-NN graphs.
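
For context, the variational problem behind ℓ_p-based regularization can be stated in standard notation (a common formulation from the graph-based learning literature, not quoted from the paper): given symmetric edge weights w_ij on the graph and labels y_i on a small labeled set Γ, one minimizes the p-Dirichlet energy over label functions u that agree with the given labels.

```latex
% p-Dirichlet energy minimization with hard label constraints (standard form)
\min_{u}\; J_p(u) \;=\; \frac{1}{p}\sum_{i,j} w_{ij}\,\lvert u(x_i) - u(x_j)\rvert^{p}
\quad \text{subject to} \quad u(x_i) = y_i \ \text{ for } x_i \in \Gamma .
```

The game-theoretic p-Laplacian mentioned in the abstract is, in the continuum and up to normalization conventions, the interpolation Δ_p u = (1/p) Δu + (1 - 2/p) Δ_∞ u between the 2-Laplacian and the normalized ∞-Laplacian; the graph formulation replaces both operators by their graph analogues. Taking p → ∞ in the variational problem yields Lipschitz learning, which controls the largest weighted difference across any edge (the precise weighting convention varies across papers).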

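The abstract mentions fast solvers for the p-Laplace equations but, being an abstract, does not spell them out. As an illustration only, here is a deliberately naive Python sketch that solves the variational problem above by projected gradient descent on J_p; it is not one of the paper's algorithms, and all names in it are hypothetical.

```python
# Naive projected gradient descent on the p-Dirichlet energy
#   J_p(u) = (1/p) * sum_{i,j} w_ij |u_i - u_j|^p,  p > 2,
# with hard constraints u_i = y_i on labeled nodes. Illustration only;
# the paper develops much faster, scalable solvers.
import numpy as np

def p_laplace_learning(W, labels, p=4.0, lr=0.05, iters=5000):
    """W: symmetric (n, n) weight matrix; labels: length-n array with
    NaN on unlabeled nodes. Returns the learned label function u."""
    labeled = ~np.isnan(labels)
    u = np.where(labeled, labels, 0.0)      # start unlabeled nodes at 0
    for _ in range(iters):
        diff = u[:, None] - u[None, :]      # diff[i, j] = u_i - u_j
        # dJ_p/du_i = 2 * sum_j w_ij |u_i - u_j|^(p-2) (u_i - u_j)
        grad = 2.0 * np.sum(W * np.abs(diff) ** (p - 2.0) * diff, axis=1)
        u = u - lr * grad                   # step size may need tuning
        u[labeled] = labels[labeled]        # project onto label constraints
    return u

# Tiny check: a 5-node path graph with endpoints labeled 0 and 1.
# For any p > 1, the p-harmonic solution on a path is linear interpolation.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
labels = np.array([0.0, np.nan, np.nan, np.nan, 1.0])
print(p_laplace_learning(W, labels, p=4.0))  # approx [0, 0.25, 0.5, 0.75, 1]
```

For multi-class problems such as MNIST, one would typically run a solver like this one-vs-rest, solving one such problem per class and labeling each point by the class with the largest value.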