Optimal Randomized Approximations for Matrix-based Renyi's Entropy

16 May 2022
Yuxin Dong
Tieliang Gong
Shujian Yu
Chen Li
Abstract

The matrix-based Renyi's entropy enables us to directly measure information quantities from given data without the costly probability density estimation of the underlying distributions, and has thus been widely adopted in numerous statistical learning and inference tasks. However, exactly calculating this information quantity requires access to the eigenspectrum of a positive semi-definite (PSD) matrix $A$ whose size grows linearly with the number of samples $n$, resulting in an $O(n^3)$ time complexity that is prohibitive for large-scale applications. To address this issue, this paper takes advantage of stochastic trace approximations for matrix-based Renyi's entropy with arbitrary orders $\alpha \in \mathbb{R}^+$, lowering the complexity by converting the entropy approximation into a matrix-vector multiplication problem. Specifically, we develop random approximations for integer-order $\alpha$ and polynomial series approximations (Taylor and Chebyshev) for non-integer $\alpha$, leading to an overall time complexity of $O(n^2 s m)$, where $s, m \ll n$ denote the number of vector queries and the polynomial order, respectively. We theoretically establish statistical guarantees for all approximation algorithms and give explicit orders of $s$ and $m$ with respect to the approximation error $\varepsilon$, showing optimal convergence rates for both parameters up to a logarithmic factor. Large-scale simulations and real-world applications validate the effectiveness of the developed approximations, demonstrating remarkable speedup with negligible loss in accuracy.
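The approach described above reduces entropy estimation to repeated matrix-vector products: for integer $\alpha$, each random query vector $g$ yields an unbiased sample $g^\top A^\alpha g$ of $\mathrm{tr}(A^\alpha)$, while for non-integer $\alpha$ the function $x^\alpha$ is first replaced by a degree-$m$ polynomial surrogate. The NumPy sketch below illustrates this idea under simplifying assumptions: the Rademacher query vectors, the Chebyshev interpolant fitted with `numpy.polynomial.chebyshev.chebfit`, the Gaussian-kernel Gram matrix in the demo, and all function names are illustrative choices, not the authors' implementation, and the paper's specific estimators and error bounds are not reproduced here.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb


def renyi_entropy_exact(A, alpha):
    """Exact matrix-based Renyi entropy via eigendecomposition, O(n^3) time."""
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # A is PSD with unit trace
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)


def trace_power_integer(A, alpha, s, rng):
    """Hutchinson-style estimate of tr(A^alpha) for integer alpha >= 1,
    using s query vectors and alpha matrix-vector products per query."""
    n = A.shape[0]
    total = 0.0
    for _ in range(s):
        g = rng.choice([-1.0, 1.0], size=n)   # Rademacher query vector
        v = g
        for _ in range(alpha):                # v = A^alpha g via repeated matvecs
            v = A @ v
        total += g @ v                        # unbiased sample of tr(A^alpha)
    return total / s


def trace_power_fractional(A, alpha, s, m, rng):
    """Estimate tr(A^alpha) for non-integer alpha > 0: fit a degree-m Chebyshev
    surrogate of x**alpha on [0, 1] (A's eigenvalues lie there since A is PSD
    with unit trace), then evaluate it via the three-term recurrence."""
    # Chebyshev nodes on [-1, 1], mapped to x in [0, 1] for the fit.
    t = np.cos((2.0 * np.arange(m + 1) + 1.0) * np.pi / (2.0 * (m + 1)))
    c = cheb.chebfit(t, (0.5 * (t + 1.0)) ** alpha, m)
    n = A.shape[0]
    total = 0.0
    for _ in range(s):
        g = rng.choice([-1.0, 1.0], size=n)
        # T_k are evaluated at B = 2A - I, which maps A's spectrum into [-1, 1].
        t_prev, t_curr = g, 2.0 * (A @ g) - g
        acc = c[0] * t_prev + c[1] * t_curr
        for k in range(2, m + 1):
            t_next = 2.0 * (2.0 * (A @ t_curr) - t_curr) - t_prev
            acc += c[k] * t_next
            t_prev, t_curr = t_curr, t_next
        total += g @ acc                      # sample of tr(p(A)) ~ tr(A^alpha)
    return total / s


def renyi_entropy_approx(A, alpha, s=50, m=50, seed=0):
    """Randomized matrix-based Renyi entropy: O(n^2 * s * m) instead of O(n^3)."""
    rng = np.random.default_rng(seed)
    if float(alpha).is_integer():
        tr = trace_power_integer(A, int(alpha), s, rng)
    else:
        tr = trace_power_fractional(A, alpha, s, m, rng)
    return np.log2(tr) / (1.0 - alpha)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 20))
    sq = (X ** 2).sum(axis=1)
    # Gaussian-kernel Gram matrix, trace-normalized so eigenvalues lie in [0, 1].
    K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / (2.0 * X.shape[1]))
    A = K / np.trace(K)
    for a in (2, 1.5):
        print(a, renyi_entropy_exact(A, a), renyi_entropy_approx(A, a))
```

Each query costs $\alpha$ (integer case) or $m$ (polynomial case) matrix-vector products at $O(n^2)$ apiece, which is where the overall $O(n^2 s m)$ complexity quoted above comes from.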
