Characterizing the Functional Density Power Divergence Class

13 May 2021
Souvik Ray, S. Pal, S. Kar, A. Basu
arXiv:2105.06094
Abstract

The density power divergence (DPD) and related measures have produced many useful statistical procedures which provide a good balance between model efficiency on one hand, and outlier stability or robustness on the other. The large number of citations received by the original DPD paper (Basu et al., 1998) and its many demonstrated applications indicate the popularity of these divergences and the related methods of inference. The estimators that are derived from this family of divergences are all M-estimators where the defining $\psi$ function is based explicitly on the form of the model density. The success of the minimum divergence estimators based on the density power divergence makes it imperative and meaningful to look for other, similar divergences in the same spirit. The logarithmic density power divergence (Jones et al., 2001), a logarithmic transform of the density power divergence, has also been very successful in producing inference procedures with a high degree of efficiency simultaneously with a high degree of robustness. This further strengthens the motivation to look for statistical divergences that are transforms of the density power divergence, or, alternatively, members of the functional density power divergence class. This note characterizes the functional density power divergence class, and thus identifies the available divergence measures within this construct that may possibly be explored for robust and efficient statistical inference.
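For orientation, here is a brief sketch of the divergences referred to above, recalled from Basu et al. (1998) and Jones et al. (2001) rather than extracted from this paper; the generic functional form at the end is an illustrative reading of the class, not necessarily the authors' formal definition. For a data density $g$, a model density $f$, and a tuning parameter $\alpha > 0$, the density power divergence is

d_\alpha(g, f) = \int \left\{ f^{1+\alpha} - \left(1 + \tfrac{1}{\alpha}\right) g f^{\alpha} + \tfrac{1}{\alpha}\, g^{1+\alpha} \right\} dx,

which tends to the Kullback-Leibler divergence $\int g \log(g/f)\, dx$ as $\alpha \to 0$. The logarithmic density power divergence applies the logarithm to each of the three integral components (up to a constant multiple),

\frac{1}{1+\alpha} \log \int f^{1+\alpha}\, dx \;-\; \frac{1}{\alpha} \log \int g f^{\alpha}\, dx \;+\; \frac{1}{\alpha(1+\alpha)} \log \int g^{1+\alpha}\, dx \;\ge\; 0,

with nonnegativity following from Hölder's inequality. On this reading, a member of the functional density power divergence class may be pictured as

d_\alpha^{\varphi}(g, f) = \varphi\!\left(\int f^{1+\alpha}\, dx\right) - \left(1 + \tfrac{1}{\alpha}\right) \varphi\!\left(\int g f^{\alpha}\, dx\right) + \tfrac{1}{\alpha}\, \varphi\!\left(\int g^{1+\alpha}\, dx\right)

for a suitable increasing transform $\varphi$ (written $\varphi$ here to avoid clashing with the M-estimation $\psi$ function above); $\varphi(t) = t$ recovers the DPD, and $\varphi(t) = \log t$ recovers the logarithmic DPD up to the scale factor $1/(1+\alpha)$.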
