
arXiv:2311.01900
Online non-parametric likelihood-ratio estimation by Pearson-divergence functional minimization

3 November 2023
Alejandro de la Concha
Nicolas Vayatis
Argyris Kalogeratos
Abstract

Quantifying the difference between two probability density functions, $p$ and $q$, using available data is a fundamental problem in Statistics and Machine Learning. A common approach to this problem is likelihood-ratio estimation (LRE) between $p$ and $q$, which, to the best of our knowledge, has been investigated mainly in the offline setting. This paper contributes by introducing a new framework for online non-parametric LRE (OLRE) in the setting where pairs of iid observations $(x_t \sim p, x'_t \sim q)$ arrive over time. The non-parametric nature of our approach has the advantage of being agnostic to the forms of $p$ and $q$. Moreover, we capitalize on recent advances in kernel methods and functional minimization to develop an estimator that can be updated efficiently online. We provide theoretical guarantees for the performance of the OLRE method, along with empirical validation in synthetic experiments.
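The abstract does not spell out the estimator, but the general idea it describes — maintaining a kernel-based estimate of the ratio $r(x) \approx p(x)/q(x)$ and updating it with each new pair $(x_t \sim p, x'_t \sim q)$ — can be illustrated with a toy sketch. The Pearson-divergence objective for ratio fitting is commonly written as $J(r) = \tfrac{1}{2}\,\mathbb{E}_q[r(x)^2] - \mathbb{E}_p[r(x)]$, and a stochastic functional gradient step on it in an RKHS adds one kernel center per sample. Everything below (Gaussian kernel, constant step size, lack of regularization) is an assumption for illustration, not the authors' OLRE algorithm:

```python
import numpy as np

class OnlineLRE:
    """Toy online kernel estimate of r(x) ~ p(x)/q(x), for scalar x.

    Performs stochastic functional gradient descent on the
    Pearson-divergence objective J(r) = E_q[r^2]/2 - E_p[r].
    Illustrative sketch only; kernel choice and step size are assumptions.
    """

    def __init__(self, sigma=0.5, eta=0.05):
        self.sigma = sigma          # Gaussian kernel bandwidth (assumed)
        self.eta = eta              # constant SGD step size (assumed)
        self.centers = []           # kernel centers accumulated online
        self.alphas = []            # their expansion coefficients

    def __call__(self, x):
        # Evaluate r(x) = sum_i alpha_i * k(x, c_i) with a Gaussian kernel.
        if not self.centers:
            return 0.0
        c = np.asarray(self.centers)
        a = np.asarray(self.alphas)
        return float(a @ np.exp(-(c - x) ** 2 / (2.0 * self.sigma ** 2)))

    def update(self, x_p, x_q):
        # Unbiased stochastic gradient of J at the pair (x_p ~ p, x_q ~ q):
        #   grad = r(x_q) * k(., x_q) - k(., x_p)
        # so the update r <- r - eta * grad adds two kernel centers.
        self.alphas.append(-self.eta * self(x_q))
        self.centers.append(x_q)
        self.alphas.append(self.eta)
        self.centers.append(x_p)


# Hypothetical usage: p = N(0,1), q = N(1,1), true ratio r(x) = exp(0.5 - x).
rng = np.random.default_rng(0)
est = OnlineLRE()
for _ in range(1000):
    est.update(rng.normal(0.0, 1.0), rng.normal(1.0, 1.0))
```

In this toy setup the true ratio is decreasing in $x$, so after enough updates the estimate should be larger at $x=-1$ than at $x=2$. A practical version would need regularization and a budget on the number of kernel centers (here the expansion grows with $t$), which is exactly the kind of issue an efficiently updatable online estimator has to address.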
