
Equivalence Principle of the P-value and Mutual Information

Abstract

In this paper, we propose a novel equivalence between probability theory and information theory. For a single random variable, Shannon's self-information, $I = -\log p$, is an alternative expression of a probability $p$. However, for two random variables, no informational quantity equivalent to the $p$-value has been identified. Here, we prove theorems demonstrating that mutual information (MI) is equivalent to the $p$-value, irrespective of prior information about the distribution of the variables. If the maximum entropy principle can be applied, our equivalence theorems allow the $p$-value to be computed directly from multidimensional MI. By contrast, in a contingency table of any size with known marginal frequencies, our theorem states that MI asymptotically coincides with the logarithm of the $p$-value of Fisher's exact test, divided by the sample size. Accordingly, the theorems enable a meta-analysis that accurately estimates MI with a low $p$-value, thereby quantifying informational interdependence in a way that is robust against variation in sample size. Thus, our theorems demonstrate the equivalence of the $p$-value and MI in every dimension, allow the merits of both to be exploited, and provide a foundation for integrating probability theory and information theory.
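To illustrate the contingency-table statement above, the following minimal numerical sketch compares the empirical MI of a 2x2 table with the negative log $p$-value of Fisher's exact test divided by the sample size; under the asymptotic relation described in the abstract, the two quantities should be close for moderately large samples. The table counts and the use of scipy.stats.fisher_exact are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (counts chosen only for illustration).
table = np.array([[30, 10],
                  [12, 28]])
n = table.sum()

# Empirical mutual information (in nats) between the row and column variables.
joint = table / n                              # joint frequencies p_ij
row = joint.sum(axis=1, keepdims=True)         # row marginals p_i.
col = joint.sum(axis=0, keepdims=True)         # column marginals p_.j
nonzero = joint > 0
mi = np.sum(joint[nonzero] * np.log(joint[nonzero] / (row @ col)[nonzero]))

# Two-sided p-value of Fisher's exact test for the same table.
_, p = fisher_exact(table)

print(f"MI         = {mi:.4f} nats")
print(f"-log(p)/n  = {-np.log(p) / n:.4f}")
```

For this hypothetical table (n = 80) the two printed values agree to roughly the first decimal place; the agreement is expected to tighten as the sample size grows, per the asymptotic claim in the abstract.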
