Abstract

By means of the notion of the likelihood ratio, the limit properties of sequences of arbitrarily dependent continuous random variables are studied, and a class of strong limit theorems represented by inequalities with random bounds for functions of continuous random variables is established. The Shannon-McMillan theorem is extended to the case of arbitrary continuous information sources. In the proofs, an analytic technique together with the tools of Laplace transforms and moment generating functions is applied to the study of the strong limit theorems.

1. Introduction

Let $\{X_n, n \ge 1\}$ be a sequence of arbitrary continuous real random variables on the probability space $(\Omega, \mathcal{F}, P)$ with the joint density function
$$f_n(x_1, x_2, \ldots, x_n), \quad n = 1, 2, \ldots, \tag{1.1}$$
where $x_k \in \mathbb{R}$, $1 \le k \le n$. Let $Q$ be another probability measure on $(\Omega, \mathcal{F})$ under which $\{X_n, n \ge 1\}$ is a sequence of independent random variables with the marginal density functions $g_k(x_k)$ ($k = 1, 2, \ldots$), and let
$$g_n(x_1, x_2, \ldots, x_n) = \prod_{k=1}^{n} g_k(x_k). \tag{1.2}$$
In order to indicate the deviation between $\{X_n, n \ge 1\}$ under the probability measure $P$ and under $Q$, we first introduce the following definitions.

Definition 1.1. Let $\{X_n, n \ge 1\}$ be a sequence of random variables with the joint distribution (1.1), and let $g_n(x_1, \ldots, x_n)$ ($n \ge 1$) be defined by (1.2). Let
$$L_n(\omega) = \frac{\prod_{k=1}^{n} g_k(X_k)}{f_n(X_1, X_2, \ldots, X_n)}. \tag{1.3}$$
In statistical terms, $L_n(\omega)$ is called the likelihood ratio, which is of fundamental importance in the theory of testing statistical hypotheses (cf. [1, page 388]; [2, page 483]).

The random variable
$$r(\omega) = \limsup_{n \to \infty} \frac{1}{n} \ln \frac{f_n(X_1, X_2, \ldots, X_n)}{\prod_{k=1}^{n} g_k(X_k)} \tag{1.4}$$
is called the asymptotic logarithmic likelihood ratio of $X_1, X_2, \ldots$, relative to the product of the marginal distributions of (1.2), where $\ln$ denotes the natural logarithm and $\omega$ is the sample point. For the sake of brevity, we denote $f_n(X_1, X_2, \ldots, X_n)$ by $f_n(X)$ and $\prod_{k=1}^{n} g_k(X_k)$ by $g_n(X)$.
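To make Definition 1.1 and the asymptotic logarithmic likelihood ratio (1.4) concrete, the following minimal numerical sketch (ours, not part of the original paper; the stationary Gaussian AR(1) model and the coefficient rho are assumptions chosen purely for illustration) estimates $(1/n)\ln\left[f_n(X_1,\ldots,X_n)/\prod_{k=1}^{n} g_k(X_k)\right]$ for a dependent sequence whose marginals coincide with the reference marginals $g_k = N(0,1)$. The estimate approaches the positive constant $-\tfrac{1}{2}\ln(1-\rho^2)$, whereas it would approach $0$ for an independent sequence.

```python
# Hypothetical illustration (not from the paper): empirical version of (1.4)
# for a toy stationary Gaussian AR(1) sequence whose marginals are all N(0,1).
# The limit here is the mutual-information rate -0.5*ln(1 - rho^2).
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.6, 200_000

# simulate X_1, ..., X_n with X_k = rho*X_{k-1} + sqrt(1-rho^2)*eps_k
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for k in range(1, n):
    x[k] = rho * x[k - 1] + np.sqrt(1 - rho**2) * eps[k]

def log_phi(z):
    """Log density of N(0,1)."""
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

# ln f_n via the chain rule: marginal term for X_1, conditional terms after
s2 = 1 - rho**2
log_f = log_phi(x[0]) + np.sum(
    -0.5 * (x[1:] - rho * x[:-1]) ** 2 / s2 - 0.5 * np.log(2 * np.pi * s2)
)
log_g = np.sum(log_phi(x))             # ln of the product of marginals g_k

r_n = (log_f - log_g) / n              # empirical version of (1.4)
print(r_n, -0.5 * np.log(1 - rho**2))  # both approximately 0.223
```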

Although $r(\omega)$ is not a proper metric between probability measures, we nevertheless think of it as a measure of the "dissimilarity" between the joint distribution and the product of the marginals.

Obviously, $r(\omega) = 0$ a.s. if and only if $X_1, X_2, \ldots$ are independent, that is, if and only if $f_n(x_1, \ldots, x_n) = \prod_{k=1}^{n} g_k(x_k)$ for every $n$.

A stochastic process of fundamental importance in the theory of testing hypotheses is the sequence of likelihood ratios. In view of the above discussion of the asymptotic logarithmic likelihood ratio, it is natural to think of $r(\omega)$ as a measure of how far (in the sense of random deviation) $\{X_n, n \ge 1\}$ is from being independent, that is, of how dependent the variables are. The smaller $r(\omega)$ is, the smaller the deviation is (cf. [3–5]).

In [3], the strong deviation theorems for discrete random variables were discussed by using the generating function method. Later, the approach of using the Laplace transform to study strong limit theorems was first proposed by Liu [4]. Yang [6] further studied the limit properties of Markov chains indexed by a homogeneous tree through the analytic technique. Many comprehensive works may be found in Liu [7]. The purpose of this paper is to establish a class of strong deviation theorems represented by inequalities with random bounds for functions of arbitrarily dependent continuous random variables, by combining the analytic technique with the method of Laplace transforms, and to extend the strong deviation theorems to the differential entropy for arbitrarily dependent continuous information sources in more general settings.

Definition 1.2. Let $\{f_n(x), n \ge 1\}$ be a sequence of nonnegative Borel measurable functions defined on $\mathbb{R}$. The Laplace transform of the random variables $\{f_n(X_n), n \ge 1\}$ on the probability space $(\Omega, \mathcal{F}, Q)$ is defined by
$$\varphi_k(s) = E_Q\left[e^{-s f_k(X_k)}\right] = \int_{-\infty}^{\infty} e^{-s f_k(x)} g_k(x)\,\mathrm{d}x, \quad s > 0, \ k = 1, 2, \ldots, \tag{1.5}$$
where $E_Q$ denotes the expectation under $Q$.
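As a hedged illustration of Definition 1.2 (our own example; the choice $f(x) = x^2$ and the standard normal marginal under $Q$ are assumptions made only for this sketch), the Laplace transform $E_Q[e^{-s f(X_k)}]$ can be estimated by Monte Carlo and checked against the closed form $(1+2s)^{-1/2}$:

```python
# Hypothetical illustration of Definition 1.2: for f(x) = x^2 and X ~ N(0,1)
# under Q, the Laplace transform E_Q[exp(-s f(X))] equals (1 + 2s)^(-1/2).
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)    # draws from the marginal g_k under Q

def laplace_transform(s, fx):
    """Monte Carlo estimate of E_Q[exp(-s * f(X))]."""
    return np.mean(np.exp(-s * fx))

for s in (0.1, 0.5, 1.0):
    est = laplace_transform(s, x**2)
    exact = (1 + 2 * s) ** -0.5
    print(s, est, exact)              # estimates match to about 3 decimals
```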

We make the following assumptions in this paper.

(1) Assume that there exists $s_0 > 0$ such that
$$\limsup_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} \ln E_Q\left[e^{s_0 f_k(X_k)}\right] < \infty. \tag{1.6}$$

(2) Assume that $c \ge 0$ is a constant satisfying
$$r(\omega) \le c \quad \text{a.s.} \tag{1.7}$$

In order to prove our main results, we first give a lemma; it will be shown to play a central role in the proofs.

Lemma 1.3. Let $f_n(x_1, \ldots, x_n)$ and $g_n(x_1, \ldots, x_n)$ be two probability density functions on $\mathbb{R}^n$, and let
$$t_n(\omega) = \frac{g_n(X_1, X_2, \ldots, X_n)}{f_n(X_1, X_2, \ldots, X_n)}; \tag{1.8}$$
then
$$\limsup_{n \to \infty} \frac{1}{n} \ln t_n(\omega) \le 0 \quad \text{a.s.} \tag{1.9}$$

Proof. By [8], $\{t_n(\omega), n \ge 1\}$ is a nonnegative martingale, and, by the Doob martingale convergence theorem, there exists an integrable random variable $t_\infty(\omega)$ such that $\lim_{n \to \infty} t_n(\omega) = t_\infty(\omega) < \infty$ a.s., and (1.9) follows.
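The martingale property underlying this proof can be checked numerically. The following sketch (ours, not from the paper; it reuses the toy AR(1) Gaussian model of the earlier illustration, with rho chosen arbitrarily) verifies that $E_P[t_n] = 1$, as the nonnegative-martingale property requires, and that $(1/n)\ln t_n(\omega)$ is negative on average, consistent with the almost sure bound (1.9):

```python
# Hypothetical numerical check of Lemma 1.3 on a toy AR(1) Gaussian model:
# t_n = prod_k g_k(X_k) / f_n(X_1..X_n) satisfies E_P[t_n] = 1 for each n,
# while (1/n) ln t_n is negative on average, in line with (1.9).
import numpy as np

rng = np.random.default_rng(2)
rho, n, paths = 0.3, 8, 400_000
s2 = 1.0 - rho**2

# simulate `paths` independent copies of (X_1, ..., X_n) under P
x = np.empty((paths, n))
x[:, 0] = rng.standard_normal(paths)
for k in range(1, n):
    x[:, k] = rho * x[:, k - 1] + np.sqrt(s2) * rng.standard_normal(paths)

# ln prod_k g_k(X_k) with g_k the N(0,1) density, and ln f_n via the chain rule
ln_g = -0.5 * np.sum(x**2, axis=1) - 0.5 * n * np.log(2 * np.pi)
ln_f = (-0.5 * x[:, 0] ** 2 - 0.5 * np.log(2 * np.pi)
        - 0.5 * np.sum((x[:, 1:] - rho * x[:, :-1]) ** 2, axis=1) / s2
        - 0.5 * (n - 1) * np.log(2 * np.pi * s2))
ln_t = ln_g - ln_f

print(np.mean(np.exp(ln_t)))   # approximately 1.0: E_P[t_n] = 1
print(np.mean(ln_t) / n)       # approximately -0.04 < 0, consistent with (1.9)
```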

2. Main Results

Theorem 2.1. Let , , , be defined as before, and, under assumptions (1) and (2), let Then where

Remark 2.2. Let then

Proof. Let be an arbitrary real number in , and let then , and let Therefore, is a multivariate probability density function; let By Lemma 1.3, there exists a set such that so we have By (1.3), (2.9), (2.11), and (2.12), we have Therefore, By (2.13) and (1.4), the property of the superior limit, and the inequality (), we have By the inequality , which can be found in [9], we have By (2.5) and (2.17), we have It is easy to see that () attains its largest value on the interval , and () attains its largest value on the interval , so we have Let in (2.18); by (2.19) and (2.1), we obtain Dividing both sides of (2.21) by , we obtain By (2.14) and , obviously , hence . Let be the set of rational numbers in the interval , and let then . By (2.22), we then have It is easy to see that is a continuous function with respect to on the interval . For each (), take such that By (2.23), (2.24), and (2.8), we have Since , (2.2) follows from (2.25).
Let in (2.18); by (2.20) and (2.1), we have By (2.14) and , obviously , hence . Let be the set of rational numbers in the interval , and let then . Then, by (2.26), we have It is clear that is a continuous function with respect to on the interval . For each (), take such that By (2.27) and (2.28), we have Since , (2.3) follows from (2.29).
By (2.4), (2.5), and (2.14), if , we have If , we have Noticing that and , (2.6) follows from (2.30) and (2.31).

Corollary 2.3. If $f_n(x_1, \ldots, x_n) = \prod_{k=1}^{n} g_k(x_k)$, or $\{X_n, n \ge 1\}$ is a sequence of independent random variables, then, under assumptions (1) and (2),

Proof. In this case, $r(\omega) = 0$ a.s., and we may take $c = 0$. Hence, (2.32) follows directly from (2.2) and (2.3).

3. An Extension of the Shannon-McMillan Theorem

For better understanding, we first introduce in this section some definitions from information theory.

Let $\{X_n, n \ge 1\}$ be a sequence produced by an arbitrary continuous information source on the probability space $(\Omega, \mathcal{F}, P)$ with the joint density function
$$f_n(x_1, x_2, \ldots, x_n), \quad n = 1, 2, \ldots.$$
For the sake of brevity, we denote $X^{(n)} = (X_1, X_2, \ldots, X_n)$, and $f_n(X^{(n)})$ stands for $f_n(X_1, X_2, \ldots, X_n)$. Let
$$f_n(\omega) = -\frac{1}{n} \ln f_n(X^{(n)}), \tag{3.1}$$
where $\omega$ is the sample point; $f_n(\omega)$ is called the sample entropy or the entropy density of $X^{(n)}$. Also let $Q$ be another probability measure on $(\Omega, \mathcal{F})$ with the density function
$$g_n(x_1, x_2, \ldots, x_n) = \prod_{k=1}^{n} g_k(x_k). \tag{3.2}$$
Let
$$D_n(\omega) = \ln \frac{f_n(X^{(n)})}{g_n(X^{(n)})}, \qquad d_n(\omega) = \frac{1}{n} D_n(\omega), \qquad D(f_n \,\|\, g_n) = E_P\left[\ln \frac{f_n(X^{(n)})}{g_n(X^{(n)})}\right];$$
$D_n(\omega)$, $d_n(\omega)$, and $D(f_n \,\|\, g_n)$ are called the sample relative entropy, the sample relative entropy rate, and the relative entropy, respectively, relative to the reference density function $g_n$. Indeed, they are all measures of the deviation between the true joint density function $f_n$ and the reference density function $g_n$ (cf. [10, pages 12, 18]).
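For the simplest concrete case, the sketch below (ours, not part of the paper; the i.i.d. $N(0, \sigma^2)$ source is an assumption made for illustration) computes the entropy density $-(1/n)\ln f_n(X^{(n)})$ and compares it with its almost sure limit, the differential entropy $\tfrac{1}{2}\ln(2\pi e \sigma^2)$, which is the content of the classical Shannon-McMillan theorem in this setting:

```python
# Hypothetical illustration of the entropy density (3.1) for an i.i.d.
# N(0, sigma^2) source: -(1/n) ln f_n(X_1..X_n) converges a.s. to the
# differential entropy 0.5*ln(2*pi*e*sigma^2).
import numpy as np

rng = np.random.default_rng(3)
sigma, n = 2.0, 1_000_000
x = rng.normal(0.0, sigma, size=n)

# log of the N(0, sigma^2) density at each sample; the joint log density
# of an i.i.d. block is just the sum of these terms
log_density = -0.5 * (x / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
entropy_density = -np.mean(log_density)          # -(1/n) ln f_n(X^{(n)})
print(entropy_density, 0.5 * np.log(2 * np.pi * np.e * sigma**2))  # ~2.112
```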

A question of importance in information theory is the study of the limit properties of the relative entropy density $d_n(\omega)$. Since Shannon's initial work was published (cf. [11]), there has been a great deal of investigation of this question (e.g., cf. [12–20]).

In this paper, a class of small deviation theorems (i.e., strong limit theorems represented by inequalities) is established by using the analytic technique, and an extension of the Shannon-McMillan theorem to arbitrarily dependent continuous information sources is given. In particular, an approach that applies the tool of the Laplace transform to the study of strong deviation theorems on the differential entropy is proposed.

Let $f_k(x) = -\ln g_k(x)$ ($k = 1, 2, \ldots$) in (1.5); then we give the following definitions.

Definition 3.1. The Laplace transform of $\{-\ln g_k(X_k), k \ge 1\}$ is defined by
$$\psi_k(s) = E_Q\left[e^{s \ln g_k(X_k)}\right] = \int_{-\infty}^{\infty} g_k(x)^{1+s}\,\mathrm{d}x, \quad s > 0.$$

Definition 3.2. The differential entropy for continuous random variables is defined by
$$h(g_k) = -\int_{-\infty}^{\infty} g_k(x) \ln g_k(x)\,\mathrm{d}x, \quad k = 1, 2, \ldots.$$
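Definition 3.2 can be verified numerically in closed-form cases. The sketch below (ours; the exponential density with rate $\lambda$ is an assumption made for illustration) evaluates $h(g) = -\int g(x) \ln g(x)\,\mathrm{d}x$ by quadrature and compares it with the known value $1 - \ln \lambda$:

```python
# Hypothetical check of Definition 3.2: differential entropy of the Exp(lam)
# density g(x) = lam*exp(-lam*x), whose closed form is 1 - ln(lam).
import numpy as np

lam = 0.5
x = np.linspace(1e-9, 60.0, 2_000_000)    # truncate the negligible tail
dx = x[1] - x[0]
g = lam * np.exp(-lam * x)

h_numeric = -np.sum(g * np.log(g)) * dx    # Riemann sum for -int g ln g dx
print(h_numeric, 1 - np.log(lam))          # both approximately 1.693
```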

In the following theorem, let $\{X_n, n \ge 1\}$ be independent random variables with respect to $Q$; then the reference density function is $g_n(x_1, \ldots, x_n) = \prod_{k=1}^{n} g_k(x_k)$, and let $f_k(x) = -\ln g_k(x)$ ($k = 1, 2, \ldots$) in Theorem 2.1.

Theorem 3.3. Let , , , be given as above, and, under assumptions (1) and (2), let Then where

Remark 3.4. Let then

Corollary 3.5. Let be defined by (3.2). Under the conditions of Theorem 3.3, then where is the differential entropy for , and where , are denoted by (3.9)–(3.13).

Corollary 3.6. If , or are independent random variables, and there exists such that (2.1) holds, then

Acknowledgments

This research is supported by the National Natural Science Foundation of China (Grant nos. 10671052 and 10571008), the Natural Science Foundation of Beijing (Grant no. 1072004), the Funding Project for Academic Human Resources Development in Institutions of Higher Learning Under the Jurisdiction of Beijing Municipality, the Basic Research and Frontier Technology Foundation of Henan (Grant no. 072300410090), and the Natural Science Research Project of Henan (Grant no. 2008B110009). The authors would like to thank the editor and the referees for helpful comments, which helped to improve an earlier version of the paper.