Equivalence Principle of the P-value and Mutual Information

In this paper, we propose a novel equivalence between probability theory and information theory. For a single random variable, Shannon's self-information, $-\log P$, is an alternative expression of a probability $P$. However, for two random variables, no information quantity equivalent to the P-value has been identified. Here, we prove theorems demonstrating that mutual information (MI) is equivalent to the P-value irrespective of prior information about the distribution of the variables. If the maximum entropy principle can be applied, our equivalence theorems allow us to readily compute the P-value from multidimensional MI. By contrast, in a contingency table of any size with known marginal frequencies, our theorem states that MI asymptotically coincides with the negative logarithm of the P-value of Fisher's exact test divided by the sample size. Accordingly, the theorems enable us to perform a meta-analysis that accurately estimates MI with a low P-value, thereby quantifying informational interdependence that is robust against variation in sample size. Thus, our theorems demonstrate the equivalence of the P-value and MI in every dimension, exploit the merits of both, and provide fundamental information for integrating probability theory and information theory.
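As a rough numerical illustration of the contingency-table statement, the sketch below (our own, not the paper's code) compares the empirical MI of a 2x2 table with $-\log(P)/N$, where $P$ is the two-sided P-value of Fisher's exact test from SciPy and $N$ is the sample size; the association pattern is held fixed while $N$ grows.

```python
# Minimal sketch: compare empirical mutual information of a 2x2 table
# with -log(P)/N, where P is Fisher's exact test P-value (SciPy) and
# N is the sample size. Illustrative only; assumes natural logarithms.
import numpy as np
from scipy.stats import fisher_exact

def empirical_mutual_information(table):
    """Mutual information (in nats) of the empirical joint distribution."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    pxy = table / n                          # joint frequencies
    px = pxy.sum(axis=1, keepdims=True)      # row marginals
    py = pxy.sum(axis=0, keepdims=True)      # column marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = pxy * np.log(pxy / (px * py))
    return np.nansum(terms)                  # treat 0*log(0) as 0

# Same association pattern at increasing sample sizes.
base = np.array([[8, 2],
                 [3, 7]])
for scale in (1, 5, 25, 125):
    table = base * scale
    n = int(table.sum())
    mi = empirical_mutual_information(table)
    _, p = fisher_exact(table)
    print(f"N={n:5d}  MI={mi:.4f}  -log(P)/N={-np.log(p)/n:.4f}")
```

Under this setup, the two quantities should approach each other as the sample size increases, which is the asymptotic coincidence described in the abstract.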